In IT, dependable metrics become our canary in a coal mine; we use them as indicators of issues. Like miners looking at a dead canary, we don't know exactly how much we have been exposed to or exactly how bad it is, but we know we need to get the hell out of there. In the world of operational intelligence, we can use metrics as indicators of which parts of the proverbial shaft are having issues and need to be adjusted, sealed off or abandoned altogether.

To continue in the same vein as my previous post, I wanted to discuss the benefits of the FLOW_TURN trigger when you are trying to get a performance baseline for specific servers and transactions, and you don't want to drill into layer 7 data so much as check the layer 4 performance between two hosts. ExtraHop's FLOW_TURN trigger lets you take the next step in layer 4 flow metrics by looking at the following:
Request Transfer: Time it took for the client to make the request
Response Transfer: Time it took for the server to respond
Request Bytes: Size of the Request
Response Bytes: Size of the Response
Transaction Process Time: The time it took for the transaction to complete. You may have a fast network with acceptable request and response times, but you may notice serious tprocess times, which could indicate the kind of server delay we discussed in some of the EdgeSight posts.
In today's virtualized environments you may see things like:
- A four-port NIC with a 4x1Gb port channel plugged into a 133MHz bus
- 20 or more VMs sharing a 1Gb port channel
- Backups and volume mirroring going on over the production network.
These are things that may manifest themselves as application slowness or slow responses from your clients or servers. What the FLOW_TURN metrics give you is the ability to see the basic transport speeds of the client and server as well as the process time of the transaction. Setting up a trigger to harvest this data will lay the foundation for quality historical data on the baseline performance of specific servers during specific times of the day. The trigger itself is only a few lines of code; it logs the key=value pairs (ClientIP, ServerIP, ServerPort, TurnReqXfer, TurnRespXfer, tprocess) that the Splunk queries later in this post expect:
log("ProcTime " + Turn.tprocess);
RemoteSyslog.info(" eh_event=FLOW_TURN" + " ClientIP=" + Flow.client.ipaddr + " ServerIP=" + Flow.server.ipaddr + " ServerPort=" + Flow.server.port +
    " TurnReqXfer=" + Turn.reqXfer + " TurnRespXfer=" + Turn.rspXfer + " tprocess=" + Turn.tprocess); // adjust the Turn property names to match your firmware's Trigger API
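For context, each turn the trigger logs lands in Splunk as a single syslog event of key=value pairs, something like the line below (the values here are made up purely for illustration):

eh_event=FLOW_TURN ClientIP=192.168.1.50 ServerIP=192.168.1.61 ServerPort=1433 TurnReqXfer=2 TurnRespXfer=14 tprocess=812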
Then you assign the trigger to the specific servers you want to monitor (if you are using the Developer Edition of ExtraHop in a home lab, just assign it to all devices) and you will start collecting metrics. In my case I am using Splunk to collect the ExtraHop metrics, as it has become a standard for big data archiving and fast queries. Below you see the results of the following query:
sourcetype="Syslog" FLOW_TURN | stats count(_time) as Total_Sessions avg(tprocess) avg(TurnReqXfer) avg(TurnRespXfer) by ClientIP ServerIP ServerPort
This will produce a grid view like the one below:
In the grid below you see the client, server, and port as well as the total sessions. With that you also see the transfer metrics for both the client and server as well as the process time. The important things to note here:
- If you have a really long avg(tprocess), double-check the number of sessions. A single instance of a 30,000ms avg(tprocess) is not as big a deal as 60,000 instances of an 800ms avg(tprocess) (see the example query after this list). Also keep in mind that database servers performing data warehousing may have high avg(tprocess) metrics because they are building reports.
- Note the ClientIP subnets, as you may have an issue with an MDF where clients from a specific floor or across a frame relay connection are experiencing high avg(TurnReqXfer) numbers.
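To put the first point into practice, you can extend the grid query so that the busiest conversations with the worst process times float to the top. This is only a sketch based on the query above; the 100-session cutoff is an arbitrary example:

sourcetype="Syslog" FLOW_TURN | stats count as Total_Sessions avg(tprocess) as Avg_tprocess by ClientIP ServerIP ServerPort | where Total_Sessions > 100 | sort - Avg_tprocess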
If you want to see the average request transfer time by subnet, use the following query (I only have one subnet in my lab, so I only had one result):
sourcetype="Syslog" FLOW_TURN | rex field=_raw "ClientIP=(?<subnet>\d+\.\d+\.\d+\.)" | stats avg(TurnReqXfer) by subnet
If you want to track a server's transaction process time, you would use the query below:
FLOW_TURN ServerIP="192.168.1.61" | timechart span=1m avg(tprocess)
In the graph below you can see the transaction process time for the server 192.168.1.61 throughout the day. This can give you a baseline so that you know when you are out of whack (or when the canary has died).
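Once you have that baseline, one way to watch for the canary keeling over is to alert when the average process time crosses a threshold. Here is a minimal sketch; the 500ms threshold and 15-minute window are arbitrary examples, so pick values that fit your own baseline:

FLOW_TURN ServerIP="192.168.1.61" earliest=-15m | stats avg(tprocess) as avg_tprocess | where avg_tprocess > 500

Saved as a Splunk alert that fires whenever it returns a result, this turns the baseline into an early warning.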
I am not trying to say that what we do for a living is as simple as swinging a hammer in a coal mine, but for the longest time this type of wire data has not been readily accessible unless you had a "tools team" working full time on a seven-figure investment in a mega APM product. This took me less than 15 minutes to set up, and I was able to quickly get a holistic view of the performance of my servers as well as start to build baselines so that I know when the servers are out of the norm. I have had my fill of APM products that need an entourage to deploy or a dozen drill-downs to answer a simple question: is my server out of whack?
In the absence of data, people fill the gaps with whatever they want and take creative license to speculate. The systems team will blame the code and the network, the network team will blame the server and the code, and the developers will blame the systems admins and the network team. With this simple canary-herding tool, I can now fill that gap with actual data.
If the client or server transfer times are slow, we can ask the network team to look into it; if the tprocess time is slow, it could be a SQL table indexing issue or a server resource issue. If nothing else, you have initial metrics to start with and a way to monitor whether they go over a certain threshold. When integrated with a big-data platform like Splunk, you have long-term baseline data to reference.
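For example, a week's worth of hourly averages for a single server makes a reasonable reference baseline (a sketch; adjust the time range and span to taste):

FLOW_TURN ServerIP="192.168.1.61" earliest=-7d | timechart span=1h avg(tprocess) avg(TurnReqXfer) avg(TurnRespXfer)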
A lot of the time there is no question that a canary has died; it's just a matter of figuring out which canary it was.
ExtraHop now has a Discovery Edition that you can download and test for free (including the FLOW_TICK and FLOW_TURN triggers).
Thanks for reading!!!
John M. Smith