4 Best Practices for Today’s Log Management

Log management has long been considered a staple of most organizations' security platforms. It is the foundation for threat detection, data analytics and compliance audits. It provides the definitive record of what has occurred in your company's digital universe.

However, there are a few process hurdles that apply to most log management deployments. The data is static unless it is correlated, analyzed and remediated in real or near-real time. And for most companies, a deployment collects lots and lots of data. So much so that, without automation, the process of finding threats and anomalies is difficult at best. The protracted lag between initial breach and discovery often means the damage is done long before any countermeasures can be deployed. Lastly, the modern enterprise is typically decentralized: applications and data are no longer completely contained in centralized hardware (like a mainframe or purpose-built servers), yet the monitoring and log management of all that activity must be centralized. Very often, companies deploy log management solutions against only a portion of their enterprise footprint.

Consider the accelerated adoption of web and cloud technologies. Instead of having a few servers that controlled application infrastructure stacks, companies now have dozens, hundreds, or even thousands of servers that each play a smaller role within the stack. While this has made IT leaner and more agile, it has also made aggregating all the data silos more difficult. In that vein, here are four best practices to apply to a log management deployment.

  1. Aggregate ALL Logs

The key practice is to ensure that you “see” a complete picture of everything that is accessing, pinging, and otherwise utilizing your current and expanded environment. Poor sharing of data will simply create vulnerability gaps and/or require exponential labor to coordinate all the additional silos of data. It’s necessary to capture as much data as possible and then ship it to your log analytics or SIEM system. That way, you’ll have a database of valuable information to access and analyze on demand. Modern solutions such as CloudAccess’ log management offering make it easier to determine the root causes of potential anomalies, but no tool can do so from logs that were never captured.
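
As a minimal sketch of the forwarding side, the following Python snippet ships every application log record to a central syslog-compatible collector using only the standard library. The collector hostname and port are placeholders, not a real CloudAccess endpoint; in practice, an agent or shipper from your SIEM vendor typically plays this role.

```python
import logging
from logging.handlers import SysLogHandler

# Hypothetical collector address; substitute your own log analytics
# or SIEM endpoint. Port 514/UDP is the traditional syslog default.
COLLECTOR = ("siem.example.internal", 514)

def build_forwarding_logger(service: str) -> logging.Logger:
    """Return a logger that ships every record to the central collector."""
    logger = logging.getLogger(service)
    logger.setLevel(logging.INFO)

    handler = SysLogHandler(address=COLLECTOR)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"
    ))
    logger.addHandler(handler)
    return logger

# Every service, host, and appliance gets the same treatment,
# so nothing is left in an unmonitored silo.
log = build_forwarding_logger("billing-service")
log.info("payment batch completed")
```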

  2. Ask the Right Questions

Where most organizations fail is that they don’t properly utilize the log information they’ve acquired. How many times have you chased an issue only to find that the answer was in the logs all along, but you just weren’t looking in the right place? To derive actionable insight from your logs, there is a set of mainstay questions that need to be asked before moving into the granularity of the answers. Who is accessing the network? What assets are they accessing? Where are they accessing the assets from? When (what time of day, what day of the week) are they doing this? Are there set permissions to allow such activity?

For example, suppose an event log notes that Joe from accounting has accessed information from behind a firewall. However, the ping from “Joe” is coming from Belarus, and we know that Joe is not in Belarus; he is in the Los Angeles office, and these access attempts are happening at 3 a.m. local time on a Sunday. This is a red flag, right? This is the sort of analytic that needs to happen, but it can only happen if all the data is centralized and the right questions are asked. The key to a successful log analytics implementation is having a large library of actionable insights you can derive from your log files. Otherwise, you will be looking for a needle in a giant needlestack.
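
To make the Joe scenario concrete, here is a toy Python sketch that applies those questions to a single access event. The event fields, country code, and business-hours window are assumptions invented for the example, not a real log schema.

```python
from datetime import datetime, time

# Illustrative business-hours window; adjust to your organization.
BUSINESS_HOURS = (time(7, 0), time(19, 0))

def suspicious_reasons(event: dict, expected_country: str = "US") -> list[str]:
    """Return the reasons an access event looks anomalous, if any."""
    reasons = []

    # Where: does the source country match where the user should be?
    if event["source_country"] != expected_country:
        reasons.append(f"access from {event['source_country']}")

    # When: is this outside business hours or on a weekend?
    ts = datetime.fromisoformat(event["timestamp"])
    start, end = BUSINESS_HOURS
    if not (start <= ts.time() <= end) or ts.weekday() >= 5:
        reasons.append(f"off-hours access at {ts.isoformat()}")

    # Permissions: is this user allowed to touch this asset at all?
    if event["asset"] not in event.get("permitted_assets", []):
        reasons.append(f"no permission for asset {event['asset']}")

    return reasons

# The "Joe from Belarus at 3 a.m. on a Sunday" scenario from the text:
joe = {
    "user": "joe",
    "asset": "accounting-db",
    "source_country": "BY",
    "timestamp": "2024-03-10T03:00:00",  # a Sunday
    "permitted_assets": ["accounting-db"],
}
print(suspicious_reasons(joe))
# -> ['access from BY', 'off-hours access at 2024-03-10T03:00:00']
```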

  3. Keep Your Applications Safe

A cardinal rule of thumb: your log collection and management process should not harm the application it helps monitor. This holds whether the application is legacy-based or virtual. It sounds like a no-brainer, but sometimes the requirement for continuous monitoring overwhelms resources on the application side. To ensure this does not happen, make sure your collection agents are well configured; the parameters that control the frequency of log collection and distribution can significantly impact server resource utilization.
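
One common way to decouple the application from collection overhead is to put a queue between the two, as in this minimal Python sketch using the standard library. The handler choice and queue size here are illustrative assumptions, not a prescribed configuration.

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

# Bounded queue: under a runaway burst, records are dropped rather
# than blocking the application or exhausting its memory.
log_queue = queue.Queue(maxsize=10_000)

app_logger = logging.getLogger("app")
app_logger.setLevel(logging.INFO)
app_logger.addHandler(QueueHandler(log_queue))  # hot path: enqueue only

# The listener drains the queue in a background thread; swap
# StreamHandler for whatever handler ships to your collector.
shipper = logging.StreamHandler()
listener = QueueListener(log_queue, shipper)
listener.start()

app_logger.info("order processed")  # returns immediately
listener.stop()                     # flush remaining records on shutdown
```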

  4. System Scalability

As your organization or the demand for a specific application grows, so will the amount of log data. It is critical to make sure your log analytics and management capabilities can scale accordingly. It’s fairly easy to get started with a small server to process logs, but you need to make sure it will be up and running when you need it most, such as when bursts of logs occur. When your system has a problem, it generates far more data than usual, which can break your log analytics system precisely in its time of need.
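
As a sketch of one burst-protection tactic, the following Python filter sheds log records above a fixed per-second cap so a malfunctioning component cannot flood the pipeline. The cap value is an arbitrary assumption; real deployments more often address this with horizontally scalable ingestion, with shedding or sampling as a last-resort safety valve.

```python
import logging
import time

class RateCapFilter(logging.Filter):
    """Allow at most max_per_second records through per one-second window."""

    def __init__(self, max_per_second: int = 500):
        super().__init__()
        self.max_per_second = max_per_second
        self.window_start = time.monotonic()
        self.count = 0

    def filter(self, record: logging.LogRecord) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # start a new one-second window
            self.window_start, self.count = now, 0
        self.count += 1
        return self.count <= self.max_per_second  # shed the overflow

logger = logging.getLogger("pipeline")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RateCapFilter(max_per_second=500))
```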

The modern log management deployment has evolved beyond simple catch, read and report. It is now the base that provides insight for risk and threat management, proactively protects enterprise assets and, most importantly, automates and initiates several processes that strengthen the overall security platform. Just as the old saying holds that building a house starts with a solid foundation, your entire security initiative rests on the proper care and maintenance of your log management solution.
