Today, we solicited ideas regarding log analysis and correlation in enterprise environments. Logs in a large environment can be overwhelming: tens of thousands of devices can generate gigabytes of logs every hour. To get a clear picture of what is happening in the environment and to maintain an audit trail, we must analyze and store the logs properly. Do our readers have any tips and tricks on log strategy they would like to share? Do you filter events before a centralized collection point? Do you attempt to collect as much as possible from all devices (e.g., an IDS with a full signature set) and then trim down the events later at your log analysis engine?
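To make the filtering question concrete, here is a minimal sketch of pre-filtering events on a device before forwarding them to a central collector. The noise patterns and sample log lines are illustrative assumptions, not taken from any particular product:

```python
# Minimal sketch: drop known-noise events locally so that only
# interesting events reach the central collector. Patterns and the
# log format here are illustrative assumptions.
import re

# Events matching any of these patterns are dropped before forwarding.
NOISE_PATTERNS = [
    re.compile(r"MARK$"),                                 # syslog heartbeat marks
    re.compile(r"CRON\[\d+\]: session (opened|closed)"),  # routine cron chatter
]

def should_forward(line: str) -> bool:
    """Return True if the event should reach the central collector."""
    return not any(p.search(line) for p in NOISE_PATTERNS)

events = [
    "Jul  7 12:00:00 host1 -- MARK",
    "Jul  7 12:01:13 host1 CRON[4242]: session opened for user root",
    "Jul  7 12:02:47 host1 sshd[1337]: Failed password for invalid user admin",
]
forwarded = [e for e in events if should_forward(e)]
```

The trade-off the question raises still applies: anything dropped here is gone forever, so the alternative is forwarding everything and trimming at the analysis engine instead.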
*** UPDATE ***
Claudiu Rusnac uses syslog-ng and likes its ability to sort logs by date and by hostname. He also likes ArcSight for aggregation and correlation across Win32, Unix, firewall, router/switch, and IDS logs.
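Sorting by date and hostname in syslog-ng is typically done with macros in the destination file path. A minimal configuration fragment (the listening port and directory layout are illustrative assumptions):

```
# Illustrative syslog-ng fragment: use the HOST and date macros to
# sort incoming logs into per-host, per-day files.
source s_net { udp(ip(0.0.0.0) port(514)); };

destination d_sorted {
    file("/var/log/hosts/$HOST/$YEAR-$MONTH-$DAY.log"
         create_dirs(yes));
};

log { source(s_net); destination(d_sorted); };
```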
Ronaldo C. Vasconcellos reminded us of a great article by Marcus Ranum as a resource on filtering events: http://www.ranum.com/security/computer_security/papers/ai/
Chad likes Cisco MARS (formerly Protego) and agrees that it is good for network-based events but relatively weak on the host side.
Chris Reynolds developed a customized ASP/SQL solution for MS servers that parses log files and stores them in a SQL database. It also triggers an email alert on interesting events.
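The same pattern (parse, store in a database, alert on interesting events) can be sketched in a few lines. This is a Python/SQLite sketch of the idea, not the ASP/SQL original; the field layout and keyword list are assumptions:

```python
# Sketch (Python/SQLite, not the original ASP/SQL) of the pattern:
# parse log lines into a database and flag interesting events for
# alerting. Field layout and keywords are assumptions.
import sqlite3

INTERESTING = ("failed", "denied", "error")

def load_logs(lines):
    """Parse 'host message' lines into SQLite and return alert texts."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (host TEXT, message TEXT)")
    alerts = []
    for line in lines:
        host, _, message = line.partition(" ")
        db.execute("INSERT INTO events VALUES (?, ?)", (host, message))
        if any(word in message.lower() for word in INTERESTING):
            # A real deployment would send email here (e.g. via smtplib);
            # this sketch just collects the alert text.
            alerts.append(f"{host}: {message}")
    db.commit()
    return db, alerts

db, alerts = load_logs([
    "web01 Accepted publickey for deploy",
    "web01 Failed password for root",
    "db01 Connection denied from 10.0.0.9",
])
```

Keeping every event in the database while alerting on a subset preserves the full audit trail even when only a few events page anyone.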
Jeff Bryner summarizes the logs in RSS feeds and then uses an RSS-capable browser (such as Firefox) to view them as news stories.
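Generating such a feed is straightforward. A minimal sketch that turns log summaries into an RSS 2.0 document (the summaries and feed metadata are illustrative assumptions):

```python
# Sketch: turn log summaries into a minimal RSS 2.0 feed that an
# RSS-capable browser can render as news stories. Summaries and feed
# metadata here are illustrative assumptions.
import xml.etree.ElementTree as ET

def logs_to_rss(summaries, title="Log summary"):
    """Build an RSS 2.0 document with one <item> per log summary."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "description").text = "Summarized log events"
    for summary in summaries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = summary
    return ET.tostring(rss, encoding="unicode")

feed = logs_to_rss([
    "14 failed SSH logins on web01",
    "IDS: port scan from 192.0.2.10",
])
```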
Jul 7th 2005