"If my security agents were not working correctly, I would get an alert. Since no one has said there is a problem with my security agents, everything must be OK." These are just a couple of the assumptions we make as cybersecurity practitioners each day about the security agents that protect our respective organizations. While it is comforting to think that everything is fine, it is much better to validate that assumption regularly.
I have been fortunate to work in cybersecurity for many years and at several different types of organizations. During that time, I always found it helpful to check on the status of the security agents periodically. By scheduling regular and recurring calendar reminders, I can better validate the assumption that the security agents are working as intended. Specific areas of focus include confirming both that each security agent is installed correctly and that it is performing the actions specified in policy.
Central monitoring consoles are a great place to start: they can identify security agents that have not communicated back to the console within an acceptable time. The console's output can then be compared against the Inventory and Control of Hardware Assets to ensure that every system has a security agent installed. Whether automated or manual, this practical step helps validate that assumption.
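That comparison can be sketched in a few lines. This is a minimal illustration, not any specific console's API: the hostnames are made up, and in practice both sets would come from the console's export and your asset inventory.

```python
# Sketch: find inventoried systems that have no security agent
# reporting in. All hostnames below are hypothetical examples.

def find_unprotected(inventory, reporting_agents):
    """Return inventoried hosts with no security agent reporting in."""
    return sorted(set(inventory) - set(reporting_agents))

# Hypothetical data: the asset inventory vs. hosts the console has
# heard from within the acceptable time window.
inventory = {"web01", "web02", "db01", "hr-laptop-07"}
reporting = {"web01", "db01"}

print(find_unprotected(inventory, reporting))  # hosts needing attention
```

Even a simple script like this, run on a schedule, turns the assumption "every system has an agent" into something you actually check.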
What assumptions can you validate today? Think about that over the weekend and determine to take action on Monday morning! By being intentional to validate the health of your security agents, you can do a great deal to validate the assumptions you are making.
How long can you stand not knowing that your security agents are not working as expected? Let us know of your successes in the comments section below!
Russell Eubanks
Oct 19th 2019
A related practice I always try to implement is mechanisms to generate "heartbeat" log entries in systems.
These are implementations where a simple "still here" log entry is created regularly at a known and reasonable interval in a log for each system I am monitoring.
This is best done locally so that, if communications between a system and the log aggregator are interrupted, it is still possible to tell whether the log service itself kept running while communications were down.
Doing this is not always possible on every system, but the systems that are important should have some way to implement a service locally that only does this one thing.
Heartbeat log entries can sometimes also be established at system and application log levels. More is better than not enough.
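A local heartbeat writer can be very small. The following is a sketch under stated assumptions: the log path and interval are illustrative, and a production version would run as a supervised service (systemd, cron, or similar) rather than a bare loop.

```python
# Minimal local heartbeat writer (sketch). The log path and interval
# below are hypothetical; pick values matching your own standards.
import time
from datetime import datetime, timezone

HEARTBEAT_LOG = "/var/log/heartbeat.log"  # hypothetical path
INTERVAL_SECONDS = 60                     # illustrative interval

def heartbeat_line(now=None):
    """Format a single 'still here' entry with a UTC timestamp."""
    now = now or datetime.now(timezone.utc)
    return f"{now.isoformat()} heartbeat still-here\n"

def run():
    """Append a heartbeat entry at a fixed interval, forever."""
    while True:
        with open(HEARTBEAT_LOG, "a") as fh:
            fh.write(heartbeat_line())
        time.sleep(INTERVAL_SECONDS)
```

Writing to a local file first, then shipping it, preserves the property described above: gaps in the local file mean the service stopped, not just that the network did.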
How the interval is decided for a given system is completely dependent on the risk and criticality assessments of systems to be monitored.
I have found a simple log entry every 60 seconds to be reasonable for most critical or high-risk systems. Anything more aggressive may become burdensome to collect or assess.
For less critical or lower-risk systems, a log entry every 5 minutes is usually acceptable.
Regardless, the heartbeat schedule should align with any policies, standards, or SLAs regarding monitoring of system activity and status.
What would this heartbeat log be used for?
A regularly scheduled log entry can be watched by a SIEM or log queries for monitoring or investigation purposes.
A break in the heartbeat log for a given system can indicate something is amiss either at the source system or in communications to the log aggregator.
Even if log shipping is interrupted by communications, a good log aggregator will "catch up" after comms are re-established.
- Ability to detect when log shipping processes are interrupted. Usually this is a networking issue, but could also be a configuration issue.
- Ability to detect when log services have been interrupted. If log services are stopped (without prior expectations) - it should become a WTF moment and require immediate investigation.
- Ability to detect when systems are up or down in a short timeframe. This is a very active monitoring of logs, but possible if one sets up their SIEM solution appropriately.
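The staleness check behind those detections is simple to express. This sketch is one hypothetical way a SIEM rule or watchdog script might flag a broken heartbeat; the hosts, timestamps, and grace factor are all illustrative.

```python
# Sketch of a heartbeat staleness check: flag any monitored system
# whose last heartbeat is older than its expected interval times a
# grace factor. All values below are hypothetical examples.

def stale_hosts(last_seen, now, interval=60, grace=3):
    """Return hosts whose heartbeat gap exceeds interval * grace seconds."""
    limit = interval * grace
    return sorted(host for host, ts in last_seen.items() if now - ts > limit)

# Illustrative data: epoch-style timestamps of each host's last heartbeat.
now = 1_000_000
last_seen = {
    "web01": now - 30,          # healthy
    "db01": now - 30,           # healthy
    "hr-laptop-07": now - 600,  # silent for 10 minutes
}
print(stale_hosts(last_seen, now))
```

A break flagged this way does not tell you *why* the heartbeat stopped, only that the source system or its path to the aggregator needs immediate investigation, which is exactly the point.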
There are some systems that will not generate many logs if they are not very busy. This silence can be deafening if that system is considered critical for sustained operations.
There are also some systems that do not support local log generation. In such cases, it may be possible to create some mechanism that forces the creation of a known log entry in the target system. I like to envision this as one system "poking" another system on a regular basis.
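One hypothetical way to implement that "poke" is to issue a harmless request that the target is already known to log, so the access-log entry itself becomes the heartbeat. The URL below is an assumption, not a real endpoint, and HTTP is just one option; any protocol the target logs would do.

```python
# Sketch: "poke" a system that cannot generate its own heartbeat by
# sending a request it will log. The URL is hypothetical.
import urllib.request

POKE_URL = "http://target.example.internal/heartbeat-poke"  # hypothetical

def poke(url=POKE_URL, timeout=5):
    """Send a request whose access-log entry serves as the heartbeat.

    Returns True when the target answered, False on any network error,
    so the poking system can log its own success/failure as well.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

Run from cron on the monitoring side, this produces matching entries on both systems, and a gap on either side is a signal worth chasing.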
As you mention in your article, it is easy to become complacent about the functionality of common services and systems.
Forcing a heartbeat mechanism into the monitoring data can help mitigate complacency.
Oct 21st 2019