CSAM: False Positives, and Managing the Devils
Continuing our theme of false positives this month, I’d like to talk about the process of managing the false positives we encounter in the course of analysis. False positives will almost always show up at some point during a security analysis, which creates unwanted additional work for the sysadmins, the security teams, or both. Even worse, repeated false positives can breed complacency during analysis, where findings are ‘assumed’ false because they have been seen before, and allowed to pass as normal when they may in fact be a symptom of malicious behavior.
Managing false positives in our testing and analysis is part of the overall security process, which can be used to identify and eliminate them. The pieces of the process that are key to this lifecycle management are:
- Configuration Management (we need to know what we have on our hosts, and what it should be doing)
- Ports, Protocols, and Services baseline (we need to know what we have on the wire, and where it’s going)
- Continuous Monitoring (either of the wire or of the host; this tells us when a condition occurs which requires our attention)
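As a rough illustration of how the first two pieces work together, here is a minimal sketch that checks a host's observed listening ports against a documented baseline. The host names, ports, and data structure are all hypothetical, not from any particular tool:

```python
# Hypothetical Ports/Protocols/Services baseline: which ports each
# host is *supposed* to have open, per its configuration record.
EXPECTED_SERVICES = {
    "web01": {22, 80, 443},
    "db01": {22, 5432},
}

def unexpected_ports(host, observed_ports):
    """Return observed ports on a host that the baseline does not allow."""
    allowed = EXPECTED_SERVICES.get(host, set())
    return sorted(set(observed_ports) - allowed)

# A port outside the baseline is a condition requiring our attention.
print(unexpected_ports("web01", [22, 80, 443, 8080]))  # → [8080]
```

Anything that falls out of a comparison like this is exactly the kind of condition a Continuous Monitoring program should surface for analysis.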
An ideal scenario in an operating environment might run something like this: “A Continuous Monitoring program alerts that a vulnerability exists on a host. A review of the configuration of the host shows that the vulnerability does not exist, and verification can be made from the traffic logs, which reveal that no traffic associated with the vulnerability has transited the wire. The Continuous Monitoring application should be updated to reflect that the specific vulnerability reported on that specific host is a false positive, and it should be flagged accordingly in future monitoring. The network monitoring would *not* be updated, because it did not flag a false positive, leaving the defense-in-depth approach intact.”
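The key point in that scenario is that suppression is scoped to one specific vulnerability on one specific host, never to the vulnerability in general. A minimal sketch of that bookkeeping might look like this (the host name and the CVE pairing are illustrative assumptions, not a real finding):

```python
# Known false positives are tracked as (host, vulnerability) pairs, so
# the same vulnerability on a *different* host still raises an alert.
known_false_positives = set()

def mark_false_positive(host, vuln_id):
    """Record that this finding was validated as a false positive."""
    known_false_positives.add((host, vuln_id))

def should_alert(host, vuln_id):
    """Alert unless this exact host/vulnerability pair was validated."""
    return (host, vuln_id) not in known_false_positives

mark_false_positive("web01", "CVE-2014-0160")
print(should_alert("web01", "CVE-2014-0160"))  # False: validated FP
print(should_alert("db01", "CVE-2014-0160"))   # True: different host
```

Scoping the suppression this narrowly is what keeps the defense-in-depth approach intact: the other monitoring layers keep watching.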
Now, this is *ideal*, and at a very high level, but it hopefully gives some ideas about how false positives can be managed within the enterprise, and the processes that contribute. We would really like to hear how false positives are managed in other enterprise environments, so let us know. :)
tony d0t carothers --gmail
Comments
In other words, when a new version or update for the monitoring tools was deployed, all the known false positives were "unchecked" as acceptable to see if they would show up again with the new update. If they did show up again, we would re-validate them and, where a fix was still not available, we would "check" them off as known false positives and press on with our work.
Also, at least once a year we would review all of the known false positives and re-validate if they were still false positives. I know this sounds like extra work, but the effort demonstrated due diligence and due care in our efforts to keep the monitoring tools accurate in their analysis.
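[The re-validation cycle this comment describes could be sketched roughly as follows; the data model and finding names are hypothetical, for illustration only:]

```python
def revalidate(known_fps, still_false_positive):
    """After a tool update (or annual review), 'uncheck' every known
    false positive, re-test each one, and keep only those that are
    still confirmed as false positives."""
    confirmed = set()
    for entry in known_fps:            # every entry starts unchecked
        if still_false_positive(entry):
            confirmed.add(entry)       # re-checked as a known FP
    return confirmed

fps = {"finding-1", "finding-2"}
# Suppose re-testing confirms only finding-1 is still a false positive:
print(revalidate(fps, lambda e: e == "finding-1"))  # {'finding-1'}
```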
Lastly, in several cases we tried to implement threshold and trend analysis into the false positives mix. If at one point we determined that a certain number of collected false positives was normal, we would try to set the tools to alert on any dramatic increase in the number of false positive entries. I cannot say if this would have helped because we never had one of those alerts pop for us, but we felt better that if something dramatic was happening across our enterprise, we would get some kind of notice.
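[The threshold idea amounts to comparing the current false positive count against a historical baseline; a minimal sketch, where the counts and the 2x multiplier are illustrative assumptions, might be:]

```python
def fp_spike(history, current, multiplier=2.0):
    """Alert when the current false positive count exceeds a multiple
    of the historical average — a dramatic increase may mean something
    unusual is happening across the enterprise."""
    baseline = sum(history) / len(history)
    return current > multiplier * baseline

weekly_fp_counts = [10, 12, 9, 11]          # hypothetical history
print(fp_spike(weekly_fp_counts, 30))       # → True  (dramatic jump)
print(fp_spike(weekly_fp_counts, 14))       # → False (normal noise)
```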
Good documentation of each false positive is crucial for the above processes to work well.
In every shop I have worked in, the most desired goal was to find a way to end the false positive entries outright. Sometimes, after working with developers or vendors, we would find a fix or configuration change that would make sense from a security standpoint while reducing the false positive output.
All of this takes time, talent, and persistence of purpose. Sadly, more often than not all three are in short supply.
Anonymous
Oct 27th 2014