It's Phishing Season! In fact, it's ALWAYS Phishing Season!
It's always great to hear from our readers. We just got this note in from Tom about a phish he recently encountered:
One of my followers on Twitter (whose account was likely hacked or fell victim to this scam) sent me the following DM:
hilarious pic! bit.ly/KIbUqq
That bit.ly URL redirects to:
http://tvviiter.com/log-in/q2/?session_timeout=iajb864?emgzw
That site is clearly impersonating the Twitter.com site, and attempts to trick users into typing in their username and password. As of this writing (May 30, 2012 12:18pm EDT), the site is still available.
The whois record shows it as registered to "XIN NET TECHNOLOGY CORPORATION" in Shanghai, China. The whois record also has an HTML "script" tag in it, which may be an attempt to XSS users of web-based WHOIS services (though I did not try loading the JS file to find out).
While I've certainly seen reply spam on Twitter, I don't recall ever seeing this type of DM spam leading to phishing before. I thought that you guys might find it interesting!
I sent a message using Twitter's online support form, and I also submitted the URL to Google's SafeBrowsing list.
This was just too good an example to pass up writing about. Things to watch out for:
- Any link you're asked to click on, in any context, is a risk - READ THE UNDERLYING LINK to verify that you're going where you think you are.
- If it's a shortened link (bit.ly or whatever), check it with a sacrificial VM or from a sandboxed browser that you trust is actually partitioned and "safe", or expand it without visiting the destination at all (see the sketch after this list)
- Before you click the link - READ THE LINK AGAIN - the "vv" instead of a "w" in "twitter" is a nice touch, and easy to miss
- Finally, before clicking the link, DON'T CLICK THE LINK. Cut and paste it into your browser rather than clicking it directly.
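If you want to see where a shortened link points without ever visiting it from your own browser, you can expand it with a quick script. Below is a minimal sketch in Python, assuming the third-party "requests" library is installed (any HTTP client that lets you suppress redirects would do the same job); it issues a HEAD request with redirects disabled and simply reports the Location header, so the destination site is never actually fetched:

# Minimal sketch: expand a shortened URL without following the redirect.
# Assumes the third-party "requests" library (pip install requests); any
# HTTP client that can suppress redirects would work just as well.
import requests

def expand_short_url(short_url):
    # A HEAD request with allow_redirects=False returns the 301/302
    # response itself, so we can read the Location header rather than
    # letting the client follow it to the (possibly malicious) site.
    resp = requests.head(short_url, allow_redirects=False, timeout=10)
    return resp.headers.get("Location")

if __name__ == "__main__":
    # Example only - substitute the shortened link you were sent.
    print(expand_short_url("http://bit.ly/KIbUqq"))

Even then, run it from a throwaway VM if you're at all unsure - a HEAD request still tells the link shortener (and anyone watching its stats) that somebody looked.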
If you've got any other pointers, or if I've missed anything, please use our comment form to ... well ... comment!
===============
Rob VandenBrink
Metafore
What's in Your Lab?
The discussion about labs got me thinking about what we all have in our personal labs. The "What's in your lab?" question is a standard one that I ask in interviews; it says a lot about a person's interests and commitment to those interests.
I just revamped my lab (thanks to my local "cheap servers off lease" company and eBay). Previously I was able to downsize and host my entire lab on my laptop with a farm of virtual machines and a fleet of external USB drives, but as I ramp up my requirements for permanent servers (an MS Project server, an SCP server, a web honeypot and an army of permanent, CPU- and memory-hungry pentest VMs), I had to put some permanent hosts back in.
So to host all this, I put in 3 ESX servers with 20 cores altogether (thanks eBay!). I picked up a 4Gb Fiber Channel switch and 4 HBAs for a song, also on eBay. I had an older Xeon server with lots of drive bays, so I filled it up with 1TB SATA drives and a SATA RAID controller - with a Fiber Channel HBA and Openfiler, I've now got a decent Fiber Channel SAN (with iSCSI and NFS thrown in for good measure). Add a decent switch and firewall for VLAN support and network segmentation, and this starts to look a whole lot like something useful! The goal was that once it's all bolted together, I can do almost anything in the lab without physically being there.
I still keep lots of my lab on the laptop VM farm - for instance, my Dynamips servers for WAN simulation are all still local, as are a few Linux VMs that I use for coding in one language or another.
Enough about my lab - what's in your lab? Have you found a neat, cheap way of filling a lab need you think others might benefit from? Do you host your lab on a laptop for convenience, or do you have a rack in your basement (or at work)? Please use our comment form and let us know!
===============
Rob VandenBrink
Metafore
Too Big to Fail / Too Big to Learn?
There's an interesting trend that I've been noticing in datacenters over the last few years. The pendulum has swung towards infrastructure that is getting too expensive to replicate in a test environment.
In years past, there may have been a chassis switch and a number of routers. Essentially these would run the same operating system with very similar features that smaller, less expensive units from the same vendor might run. The servers would run Windows, Linux or some other OS, running on physical or virtual platforms. Even with virtualization, this was all easy to set up in a lab.
These days though, on the server side we're now seeing more 10Gbps networking, FCoE (Fiber Channel over Ethernet), and more blade type servers. These all run into larger dollars - not insurmountable for a business, as often last year's blade chassis can be used for testing and staging. However, all of this is generally out of the reach of someone who's putting their own lab together.
On the networking side things are much more skewed. In many organizations today's core networks are nothing like last year's network. We're seeing way more 10Gbps switches, both in the core and at top of rack in datacenters. In most cases, these switches run completely different operating systems than we've seen in the past (though the CLI often looks similar).
As mentioned previously, Fiber Channel over Ethernet is being seen more often - as the name implies, FCoE shares more with Fiber Channel than with Ethernet. Routers are still doing the core routing services on the same OS that we've seen in the past, but we're also seeing lots more private MPLS implementations than before.
Storage, as always, is a one-off in the datacenter. Almost nobody has a spare enterprise SAN to play with, though it's becoming more common to have Fiber Channel switches in a corporate lab. And that's not to mention the proliferation of Load Balancers, Web Application Firewalls and other specialized one-off infrastructure gear that is becoming much more common these days than in the past.
So why is this on the ISC page today? Because in combination, this adds up to a few negative things:
- Especially on the networking and storage side, the costs involved mean that it's becoming very difficult to truly test changes to the production environment before implementation. So changes are planned based on the product documentation, and perhaps input from the vendor technical support group. In years past, the change would have been tested in advance and likely would have gone live the first time. What we're seeing more frequently now is testing during the change window, and often it will take several change windows to "get it right".
- From a security point of view, this means that organizations are becoming much more likely to NOT keep their most critical infrastructure up to date. From a Manager's point of view, change equals risk. And any changes to the core components now can affect EVERYTHING - from traditional server and workstation apps to storage to voice systems.
- At the other end of the spectrum, while you can still cruise eBay and put together a killer lab for yourself, it's just not possible to put some of these more expensive but common datacenter components into a personal lab.
What really comes out of this is that without a test environment, it becomes incredibly difficult to become a true expert in these new technologies. As we make our infrastructure too big to fail, it effectively becomes too big to learn. To become an expert you either need to work for the vendor, or you need to be a consultant with large clients and a good lab. This makes any troubleshooting more difficult (and makes managers even more change-averse).
What do you think? Have I missed any important points, or am I off base? Please use our comment form for feedback!
===============
Rob VandenBrink
Metafore