When it comes to network forensics and security investigations, any public or private organization faces two main challenges. Breaches, after all, are costly.

A 2015 IBM-sponsored Ponemon Institute study revealed that the number of cyber attacks is increasing rapidly, with damages averaging $1.57 million per attack due to costs associated with reputation damage, customer churn, and increased new-customer acquisition.

It is obvious, then, that the first challenge for any IT or security team will always be to reduce the likelihood that their network will be breached by a cyber criminal. To achieve this, forensics teams need to be able to investigate more (if not all) of the security alerts they receive from their IDS/IPS/SIEM devices every day.

Most medium to large companies receive upward of 500 alerts every day, but there isn’t enough manpower to investigate every one. Most teams investigate only the highest-severity alerts and ignore the rest, simply because they can’t keep up with the volume.

The biggest opportunity I see to improve network security is to automate the collection of data for alerts and organize them by priority. This can be an enormous advantage: it makes security investigators far more productive with their limited time and enables them to investigate far more alerts every day.
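As a rough sketch of that idea (the field names and severity levels here are hypothetical; real IDS/SIEM alert schemas vary by vendor), automated triage might simply order the incoming alert queue by severity, then by arrival time, so investigators always work the highest-priority items first:

```python
# Hypothetical sketch: triage IDS/SIEM alerts by severity so that
# investigators see the highest-priority items first. Field names
# ("severity", "timestamp") are illustrative assumptions.

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(alerts):
    """Sort alerts most-severe first; ties broken by arrival time."""
    return sorted(
        alerts,
        key=lambda a: (SEVERITY_ORDER.get(a["severity"], 99), a["timestamp"]),
    )

alerts = [
    {"id": 1, "severity": "low", "timestamp": 100},
    {"id": 2, "severity": "critical", "timestamp": 105},
    {"id": 3, "severity": "high", "timestamp": 101},
]

queue = triage(alerts)
print([a["id"] for a in queue])  # [2, 3, 1]
```

In practice the triage step would also pre-fetch the network data associated with each alert, so that by the time an investigator opens it, the relevant evidence is already attached.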

Sophisticated hackers are often able to disguise their attacks so that they appear less severe, and each alert that goes uninvestigated may become a breach. Organizations therefore need to reduce that risk by automating as much of the process as possible, allowing investigators to look at as many alerts as they can.

Okay, so we’ve talked about stopping breaches, but in today’s security environment enterprises need to be prepared for the inevitable breach when it does happen. So the second challenge for IT organizations is to minimize the impact of breaches that could not be stopped.

And that’s why being able to capture and easily retrieve network-level data over a long period becomes so valuable: it allows investigators to go back in time and drill down to the specific events that help them isolate the cause of the breach. Many breaches happen months before they are ever detected.

In this respect, the automation process carried out by a product like Vigil for data collection really helps with both of those challenges. It allows IT teams to organize the data they collect from the network by specific and unique criteria, such as alert severity or traffic to a particular server.

When an investigator needs to go back in time to see what was happening on the network, packet data is the most reliable and information-rich source. If the organization only collected relevant network traffic for a short time, like a few hours or a couple of days, then in all likelihood the investigators will not find what they are looking for after a breach has been detected.

On the flip side, it’s not practical or cost-efficient for an enterprise to collect all of the network traffic all of the time. Anyone who has carried out an investigation will know that having too much data to sift through can be almost as bad as having none at all, so an automated process that intelligently selects the packets related to a possible breach helps enormously with the productivity and the efficiency of any investigation.

The reality today is that investigative teams are understaffed. This is true at small and medium-sized firms, but also surprisingly true at most large corporations. The ideal way to resolve the issue of manpower is to help each person be more efficient, and this can only be achieved by making the investigative process faster.

Automating the process to selectively capture and store only the potentially suspicious traffic, and making that data available to investigators even months after a possible breach, goes a long way toward ensuring that enterprises can plug holes as soon as they are discovered.

The author of this blog is Mandana Javaheri, CTO at Savvius.