
SANS ISC: InfoSec Handlers Diary Blog - Internet Storm Center Diary 2015-12-07


hashcat and oclHashcat are now open source

Published: 2015-12-07
Last Updated: 2015-12-28 00:52:48 UTC
by Rick Wanner (Version: 1)
0 comment(s)

For those of you in the pentesting world: atom, the principal developer of hashcat and oclHashcat, has announced that both tools are being released as open source.  In the announcement he gives a number of good reasons why it makes sense to do this now, the biggest being to permit advancement in the bitsliced DES GPU kernels.  Taking full advantage of bitslicing on GPUs requires recompiling the kernel at run time, which in turn requires that the hashcat source code be available.  "Bit slicing allows to reach a much higher cracking rate of DES-based algorithms (LM, Oracle, DEScrypt, RACF). DEScrypt, for instance, which is well known on Unix-like systems, can reach a performance gain of 300-400% with the bit slice technique."
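To see why bitslicing pays off, here is a toy Python illustration of the core idea (this is only a sketch of the general technique, not hashcat's actual GPU kernels): pack one bit from each of 64 independent inputs into a single 64-bit word, so that one bitwise instruction evaluates a boolean gate for all 64 inputs at once.

```python
# Toy illustration of bit slicing (not hashcat's actual kernels):
# pack one bit from each of 64 independent inputs into one word,
# so a single bitwise instruction acts on all 64 at once.

MASK = (1 << 64) - 1

def bitslice(bits):
    """Pack a list of 64 single-bit values into one integer word."""
    word = 0
    for i, b in enumerate(bits):
        word |= (b & 1) << i
    return word

def unslice(word):
    """Unpack a word back into its 64 individual bits."""
    return [(word >> i) & 1 for i in range(64)]

# 64 independent 1-bit inputs
a = bitslice([i % 2 for i in range(64)])
b = bitslice([1] * 64)

# One XOR evaluates the gate for all 64 inputs simultaneously.
# Expressing DES's S-boxes as networks of such gates is what makes
# DES-based hashes so much faster to attack this way.
c = (a ^ b) & MASK

print(unslice(c)[:4])  # -> [1, 0, 1, 0]
```

The speedup comes from the fact that the S-box lookups of a conventional DES implementation become straight-line bitwise logic, which GPUs execute extremely well.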

These capabilities are available in the version 2.0 code, which was released on GitHub right after the announcement.


-- Rick Wanner MSISE - rwanner at isc dot sans dot edu - Twitter: namedeplume (Protected)


Offensive Countermeasures against stolen passwords

Published: 2015-12-07
Last Updated: 2015-12-07 20:46:49 UTC
by Mark Baggett (Version: 2)
3 comment(s)

A while ago I shared a diary on offensive countermeasures against stolen Windows hashes.  You can review that diary here.

This one is for Linux!   This fun tweet by @nixcraft showed how an attacker could use Bash terminal commands to move the cursor and disguise the contents of a file.

By moving the cursor around and printing over existing lines in the script, the attacker hides the evil nature of the file.   As @nixcraft's tweet "Hacking like its 1999" implies, this trick has been around for a while, but it is still pretty fun.   After reading the tweet it occurred to me that I could use the same technique to protect my /etc/shadow file when an attacker steals a copy of it and/or displays it with cat, tail or some other command that processes terminal cursor movements.    Let's give it a try!  Here is what an attacker sees when they steal the hashes from your Linux machine...

OH NO!!  There is the student account's hash displayed in all its glory for the attacker to steal and crack.   Enter Liam Neeson.   Liam Neeson is a small Python program that inserts terminal cursor movements to disguise your /etc/shadow file.  Here are the help options and an example of running the script to protect my /etc/shadow file.
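The essence of the trick can be sketched in a few lines of Python (this is only an illustration of the cursor-movement idea, not the actual Liam Neeson script, and the hash shown is fake):

```python
# A minimal sketch of the cursor trick: the real entry stays in the
# file, followed by ANSI escapes that move the cursor back up and
# overwrite the displayed line with a decoy.
CUU = "\x1b[1A"   # ANSI: cursor up one line
EL  = "\x1b[2K"   # ANSI: erase the entire current line

def disguise(real_line, decoy_line):
    """Return text that stores real_line in the file but displays
    decoy_line when shown by a command that honors ANSI escapes."""
    return real_line + "\n" + CUU + EL + decoy_line + "\n"

entry = disguise(
    "student:$6$salt$realhashhere:16000:0:99999:7:::",
    "student:*:16000:0:99999:7:::",
)
# Writing `entry` into a copy of /etc/shadow keeps the real hash on
# disk (so existing logins still work), but `cat` renders only the
# decoy line on an ANSI-capable terminal.
print(repr(entry))
```

Viewing the result with repr(), a hex dump, or `cat -v` reveals the escape sequences, which is exactly the limitation discussed below.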

So what does an attacker see now that the file is protected by Liam Neeson?  This...

Now your password hashes are safe. Notice that the student hash is no longer visible.  NOTE TO ALL ATTACKERS:  If you do hack a server protected by Liam Neeson the proper response is to erase the shadow file and replace it with a file that simply says "Good Luck".    

You can download a copy of Liam Neeson here:

Would I use this in production?   Probably not.   Your logins will still work and your system will function properly until you add another user; I don't know what will happen when you try to add a user to the end of that file.  It is also unlikely that attackers will leave you alone based on this defense.   As an attacker, it would only spark my interest.   BUT the concept is a good one if you are a little more subtle.  Look at servers sitting in your DMZ where the users will not change.   Then make small, subtle changes to the hashes that appear when attackers view the files.  Here is an example where I just overwrote part of the hash with the words "Changed hash".

Again, this is done with the intention of being obvious so that you can see what it is doing.  But if I instead overwrote a portion of the password hash with characters that appear to be part of the hash, an attacker is likely to steal and try to crack those modified password hashes.
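The subtler variant can be sketched the same way (again an illustration of the idea, not the actual script; the hash and column offset are made up): instead of erasing the whole line, move the cursor up and right, then overprint only a slice of the hash so the hash an attacker sees differs from the one stored on disk.

```python
# Sketch of the subtler variant: overprint only part of the hash so
# the displayed hash differs from the stored one.
def overprint(real_line, offset, replacement):
    """After the real line, move the cursor up and `offset` columns
    right, then print `replacement` over that slice of the line."""
    up    = "\x1b[1A"            # ANSI: cursor up one line
    right = f"\x1b[{offset}C"    # ANSI: cursor right `offset` columns
    return real_line + "\n" + up + right + replacement + "\n"

line = "student:$6$salt$abcdef0123456789:16000:0:99999:7:::"
out = overprint(line, 16, "ZZZZZZZZ")
# On screen, eight characters of the hash appear as "ZZZZZZZZ"; the
# attacker cracks a hash that never existed.
print(repr(out))
```

Replacing the decoy characters with plausible base64-style hash characters, rather than an obvious marker, is what makes the deception worth an attacker's cracking time.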

Of course there are obvious limitations.   This will only deceive attackers who display the file with a command that interprets the cursor movements.   But... defense in depth... every little bit helps.

Check out my Python class and learn how to create tools like this.  SEC573 Python for Penetration Testers covers topics every defender can use to protect their network.  Non-programmers are welcome! We will start with the essentials and teach you to code.

Come check out Python in Orlando, Florida; Berlin, Germany; or Canberra, Australia!   For dates and details click here.

Follow me on Twitter at @MarkBaggett  (I tweeted about this one a few months ago)




Continuous Monitoring for Random Strings

Published: 2015-12-07
Last Updated: 2015-12-07 16:42:00 UTC
by Mark Baggett (Version: 2)
0 comment(s)

Greetings ISC readers.  Mark Baggett here.  Back in August I released a tool called freq.py that helps identify random characters in just about any string by looking at the frequency of occurrence of character pairs.  It can be used to successfully identify randomly generated host names in DNS packets, SSL certificates and other text-based logs.  I would encourage you to read the original blog for full details on the tool.   You can find the original post here:

For the click-averse reader, here is the TL;DR version.

  1. You build frequency tables based on normal artifacts in your environment.  For more accurate measurements you should create a separate frequency table for each type of artifact.   For example, you might create a separate frequency table for:
    1. DNS Host names that are normally seen for your environment
    2. Names of files on your file server
    3. Host names inside of SSL certificates
    4. URLs for a specific Web Application
    5. * Insert any string you want to measure here *
  2. You measure strings observed in your environment against your "normal" frequency tables.  Any string whose character-pair frequencies differ from your tables will receive a low score.

freq.py and freq.exe are command line tools designed to measure a single string.  They weren't designed for high speed continuous monitoring.   If you tried to use freq.py to measure everything coming out of your SecurityOnion sensor, integrate it into Bro logs or do any enterprise monitoring, it would be overwhelmed by the volume of requests and fail miserably.  Justin Henderson contacted me last week and pointed out the problem.  To resolve this issue I am releasing is a multithreaded web-based API that allows you to quickly query your frequency tables.   The server isn't intended to replace freq.py.   Instead, after building a frequency table of normal strings in your environment with freq.py, you start a server to allow services to measure various strings against that table.  You can run multiple servers to provide access to different frequency tables.   When starting the server you must provide a TCP port number and a frequency table.    Here is the help for the program:
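The character-pair measurement described in the two steps above can be sketched in a few lines of Python. This is only an illustration of the idea, not freq.py's actual code or scoring formula (freq.py's real scoring is more sophisticated); the host names are invented:

```python
# A minimal sketch of character-pair frequency scoring.
from collections import Counter

def pairs(s):
    """Adjacent character pairs of a string."""
    return list(zip(s, s[1:]))

def build_table(known_good):
    """Step 1: count adjacent character pairs across known-good strings."""
    table = Counter()
    for s in known_good:
        table.update(pairs(s.lower()))
    return table

def score(s, table):
    """Step 2: average frequency of the string's character pairs;
    random strings hit rarely-seen pairs and therefore score low."""
    p = pairs(s.lower())
    if not p:
        return 0.0
    return sum(table[q] for q in p) / len(p)

# Build a table from "normal" host names, then compare candidates.
normal = ["mail.example.com", "www.example.com", "intranet.example.com"]
table = build_table(normal)
assert score("portal.example.com", table) > score("xk7qzjw2.example.com", table)
```

The key property is the one exploited throughout this diary: English-like strings reuse a small set of common character pairs, while randomly generated names scatter across pairs the table has rarely or never seen.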

Once the server is started you can use anything capable of making a web request to measure a string.   If you make an invalid request the server will provide you with documentation on the API syntax:

Although the API supports both measuring and updating tables, I recommend only using the measure command.   If you need to update tables, I recommend using freq.py instead.  There are a couple of reasons for this.   First, you should only use screened, known-good data when building your frequency tables.  Second, every time the frequency tables are updated the server will flush its cache, so you should expect a performance hit.   Between requests, the server checks whether you have hit Control-C; if you have, it cleans up its threads and saves any updates to the frequency tables before quitting.

Here is an example of starting the server (Step 1), measuring some strings using PowerShell's (New-Object Net.WebClient).DownloadString("") (Step 2) and stopping the server by hitting CONTROL-C (Step 3).

PowerShell is awesome, but there are lots of ways to query the server.   You could simply use wget or curl from a bash prompt.   Here is an example of using wget to query the API:
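Any HTTP client works, including Python itself. A hedged sketch (the /measure URL path is an assumption based on the examples above; check the API help that your own freq_server instance prints for the exact syntax):

```python
# Query a running freq_server instance over HTTP (URL shape assumed).
from urllib.parse import quote
from urllib.request import urlopen

def build_url(host, port, text):
    """Build a measurement URL; quote() handles characters like
    spaces and ampersands that would otherwise break the request."""
    return f"http://{host}:{port}/measure/{quote(text)}"

def measure(host, port, text):
    """Ask the server to score `text` against its frequency table."""
    with urlopen(build_url(host, port, text)) as resp:
        return float(resp.read().decode())

print(build_url("127.0.0.1", 10004, "google.com"))
# -> http://127.0.0.1:10004/measure/google.com

# With a server running on port 10004, you could then call:
# print(measure("127.0.0.1", 10004, "google.com"))
```

Note that URL-encoding via quote() sidesteps the shell-escaping issue mentioned below for wget.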

Notice that, in this case, we have to escape the & with the backslash.  This can also be integrated into your SEM and enterprise monitoring systems.   Justin Henderson, GSE #108 and enterprise defender extraordinaire, has already done some testing integrating this into his infrastructure and he was kind enough to share those results.  I’ll let him share that with you…  Justin?

By Justin Henderson @SecurityMapper

When I first saw freq.py, I instantly saw the potential for large-scale frequency analysis using enterprise logs.  To make a long story short, I finally found some free time and attempted to put freq.py to use during log ingestion, and that's when we discovered it didn't like being called constantly.  After sharing what I was trying to do with Mark, he whipped out his mad awesome Python skills and the next thing you know I'm doing a beta of

In my environment I am using Elasticsearch, Logstash, and Kibana (ELK for short) for log collection, storage, and reporting.  Logstash is the component that ingests and parses the logs.  As a result, I added a call to in the configuration files for any logs I want to do frequency analysis on.  Below is an example of how I'm making a call to from within a Bro DNS configuration in Logstash:

The full configuration file can be found on GitHub and is called bro_dns.conf.  To see this file or many others, visit it at

The initial testing of went off without a hitch.  I did a burn-in test of over 4 million DNS records running through in about 36 hours, and it worked and remained stable.  Now with all this data it is easy to look for randomly generated domains.

For example, logs with low scores (more likely to be random) look like:

While logs with high scores or data that looks “normal” look like:

As you can see, based on my frequency tables, the normal domain names fall within an expected frequency score while the random-looking ones are flagged as random.

Now think of the real world use cases for this…  Let’s take the example of malware exploiting a system and calling down a payload such as Meterpreter.  The malware may be pulled down from a web server using a random domain name, URL path, and/or filename.  At this point we as defenders should have DNS logs, proxy logs, and possibly file metadata logs from something like Bro.  After the payload is launched it commonly will create a service with a randomly generated name and then delete this service.  At this point we additionally have a Windows event log with a random service name and a Windows event log on the deletion of that service.  

Now I'm not saying the stars will always align... but with frequency analysis we now have four sources to test for entropy or randomness.  That is four places where one attack could possibly have been discovered.  And the use cases go on...

To sum it all up, thanks and a big shout-out to Mark for first creating freq.py and now  It's another step in the right direction for defense.

Thanks Justin!

To download a copy of freq.py and, follow this link to my GitHub page:

Does fall short of something you need it to do?   Send me an email and I'll see if I can make it work for you.  Or come check out my Python class and learn how to adapt the code yourself!  SEC573 Python for Penetration Testers covers topics every defender can use to protect their network.  Non-programmers are welcome! We will start with the essentials and teach you to code.

Come check out Python in Orlando, Florida; Berlin, Germany; or Canberra, Australia!   For dates and details click here.

Follow me on Twitter at @MarkBaggett

Follow Justin at @securitymapper

So what do you think?   Leave me a comment.





