May 30, 2017

Got NetFlow and Metadata – Why do I need packets?

It’s all about time.

When it comes to network monitoring, NetFlow and Metadata-based tools allow engineers to get a handle on traffic usage, statistics, capacity, and even security attacks. They quickly help us visualize the conversations and applications involved in congestion, as well as home in on strange traffic behavior. It would be difficult (and at times overkill) to use packet data to produce the same traffic statistics.

So then, why are packets necessary for analysis and monitoring?

In most cases, NetFlow and Metadata do not show us packet timing, which is critical when isolating the root cause of performance issues. To better understand why, let’s look at how NetFlow works.

NetFlow 101

A packet enters a router. (A packet walks into a bar... )

First, the router determines if this packet is the first of its kind, or if this is part of a flow it has seen before. The header values it uses to determine this are typically:

Source IP

Destination IP

Source Port

Destination Port

Protocol ID (in IP header)

DiffServ Value (DSCP)

Ingress Interface

If a technology like Flexible NetFlow is in use, additional key fields can be configured to track more flow detail, but these are typically the big seven. If the router determines that the flow is new, it creates a record to track the packets and bytes, along with the forwarding decisions that were made. At a set interval, the router exports its flow records to a collector, which turns the messy data into pretty charts and graphs.

Why is this not enough for troubleshooting?

NetFlow and Metadata have their strong points, namely capacity monitoring and security analysis. However, if we want to dig into the weeds of today’s problems, we need the full packet headers – sometimes the payload too – and the packet timing. We need to see the delta timing between packets to accurately measure network delay, application delay, server delay, and other TCP weirdness. Flow monitoring gives us one-way statistics, but it does not show us the delta time between packets in both directions. These measurements are often the key to isolating the problem domain.

The devil is in the details.

We used to be able to sift through a haystack of packets somewhat quickly to find the connection, conversation, or pain point that was killing our application performance. But today, with the increase in link capacity from 1Gbps to 10/40/100Gbps, our haystack has turned into a mountain range. Packets are great, but now we need automation and wire-level filtering to help us manage the mountain of detail. There are several tools that help us do that. One that I have been working with recently is the Viavi Gigastor with Apex. Check them out when you get a second.

The quick takeaway from this article: NetFlow and Metadata solutions have a solid place in network traffic monitoring. They help us get a handle on capacity and link usage, including monitoring for attacks. However, since they give us neither the inter-packet delta timing nor the full packet payload, these tools often aren’t enough to get to the bottom of a performance problem. Going forward, IT organizations must have a packet storage and APM solution in place to stay ahead of performance issues that impact business applications.
