Nerd on a wire: The ExtraHop Citrix “Alpha” Bundle

So I have been working on a new Citrix bundle for ExtraHop customers and potential customers for a few weeks now. ExtraHop complements and expands on HDX Insight by furthering your visibility into your Citrix infrastructure: in addition to ICA metrics such as ICA channel impact, latency, and launch times, we give you the ability to break the information out by subnet and campus (friendly name), along with custom dashboards that empower lower-paid staff and keep Director/VP types informed. In this post I want to discuss the Citrix Alpha Bundle, go over what it includes so far, and cover what can be added to it.

How ExtraHop works:
For those who do not know, ExtraHop is a completely agentless wire data analytics platform that provides L7 operational intelligence by passively observing L7 conversations. We do not poll SNMP or WMI libraries; ExtraHop works from a SPAN port, observes traffic, and provides information based on what it sees at up to 20GB per second. In the case of Citrix, we do considerably more than observe the ICA channel, and in this post I will show you just how much incredibly valuable information is available on the wire that is relevant to Citrix engineers and architects.

For more on how ExtraHop works with Citrix Environments please visit:
Learn more about ExtraHop’s solution for Citrix VDI monitoring.

How are we gathering custom Metrics?
The way we gather these metrics is to pull them off the SPAN port through a process called triggers. ExtraHop allows us to logically create device groups to collate your VDAs (ICA listeners for either XenDesktop or XenApp) and apply triggers specifically to those device groups. Device groups can be created using a variety of methods, including IP address, naming convention, type (ICA, DB, HTTP, FTP, etc.), as well as statically.

Now, to earmark specific metrics to JUST your Citrix environment, you apply the necessary triggers to the device group.
(All triggers are pre-written and you won't have to write anything; worst case, you may have to edit some of them.)

Results:
The triggers we have written create what is called an "App Container". Below is a screenshot of the types of metrics we are gathering in our triggers. While it does not even remotely cover all that we can do, I will explain a few of the metrics for you.

Citrix Infrastructure Page

Infrastructure Metrics: (all metrics drill down)

  • Slow Citrix Launches: We have set the trigger to classify any Citrix launch time in excess of 30 seconds and increment the Slow Launch counter (see the sketch after this list). This is something you have the ability to change within the trigger itself.
  • CIFS Errors: We log the CIFS errors so that you can see things like "Access Denied" or "Path Not Found". Anyone who has had a DFS share removed and the ensuing 90-second logon while Windows pines for the missing drive letter knows what I am talking about.
  • Fast Citrix Launches: A better name is probably normal Citrix Launches, for the existing trigger set this would be anything under 30 seconds. Your thresholds can be customized.
  • DDC1/DDC2 Registrations: The sample data did not have any XenDesktop but this was a custom XML trigger written to count the number of times a VDA registers with a DDC so that you can see the distribution of your VDA’s across your XenDesktop Infrastructure.
  • DNS Response Errors: Self-explanatory, but be aware if you have not looked at your DNS before: because DNS failures happen on the wire, this is a huge blind spot for agent-based solutions. I was shocked the first time I actually saw my DNS failure rate.
  • DNS Timeouts: Even more damning than response errors, these are flat-out failures. Like response errors, they can indicate Active Directory issues due to an overworked DC/GC or misconfigured Sites and Services.
  • Citrix I/O Stalls: While we do not measure CPU/Disk/Memory, we can see I/O related issues via the zero windows. When a system is experiencing I/O binding, it will close the TCP window. When this happens we will see it and it is an indication of an I/O related issue.
  • Server I/O Stalls: This is basically the opposite of a Citrix I/O stall. If a back-end database server is acting up and someone on the phone says "Citrix" (we all saw the Citrix "Get out of jail free" card), the call will be sent to the Citrix team. This metric gives the Citrix team the ability to see that the back-end server is having I/O-related issues and not waste their time doing someone else's troubleshooting, which, in my 16 years of supporting Citrix, was about 70% of the time.
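
To make the launch threshold concrete, here is a minimal sketch of the slow/fast classification in the same JavaScript trigger style used later on this blog. The event choice and the ICA.loadTime property are assumptions for illustration (the actual Alpha Bundle trigger may use different names); only log(), RemoteSyslog.info(), and the Flow properties are taken from triggers shown elsewhere in these posts.

// Hedged sketch: classify launches around the 30-second threshold.
// ICA.loadTime is an assumed property name for the launch time in ms.
var SLOW_LAUNCH_MS = 30000;            // edit to suit your environment
var loadTime = ICA.loadTime;           // assumed property
if (loadTime !== null) {
    var bucket = (loadTime > SLOW_LAUNCH_MS) ? "SLOW_LAUNCH" : "FAST_LAUNCH";
    log(bucket + " " + loadTime + " ms");
    RemoteSyslog.info(
        " eh_event=" + bucket +
        " ClientIP=" + Flow.client.ipaddr +
        " ServerIP=" + Flow.server.ipaddr +
        " LoadTimeMs=" + loadTime
    );
}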

Launch Metrics: (Chart includes drill down)
When you click on the Launch Metrics chart, you will be given a list of Launch/Login metrics broken out as follows.

  • Launch by Subnet: We collect this to see if there is an issue with a specific subnet.
  • Launch by App: Because some applications have wrappers that launch, or parameters that make external connections, we provide launch times by application.
  • Launch by Server: We provide this metric so that you can easily see if login issues are specific to a particular server.
  • Launch by User: This will let you validate a specific user having issues, or you can note things like "hey, all these users belong to the accounting OU"; maybe there is an issue with a drive letter or login script.
  • Login by User: This is the login time, meaning how fast A/D logged them in.

Login vs. Launch:
At a previous employer, what we noted was that a really long load time accompanied by a really long login time meant we needed to look at the A/D infrastructure (Sites and Services, Domain Controllers), while a long load time accompanied by a short login time would indicate issues with long profile loads, etc. The idea is that the login time should be around 90% of the load time, meaning that post-login, not much goes on.
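
If you want the trigger itself to flag which of those two patterns you are seeing, a hedged variation of the same idea is below. ICA.loadTime and ICA.loginTime are assumed millisecond properties, and the 30-second and 80% cut-offs are just the rules of thumb from this section, not fixed values.

// Hedged sketch: tag slow launches by whether login dominates the load time.
// ICA.loadTime / ICA.loginTime are assumed property names (milliseconds).
var SLOW_MS   = 30000;
var loadTime  = ICA.loadTime;
var loginTime = ICA.loginTime;
if (loadTime > SLOW_MS && loginTime !== null) {
    // Login is most of the launch: look at A/D, Sites and Services, GPOs.
    // Login is quick but the launch is long: look at profiles, scripts, drives.
    var pattern = (loginTime / loadTime >= 0.8) ? "CHECK_AD" : "CHECK_PROFILE";
    RemoteSyslog.info(
        " eh_event=LAUNCH_PATTERN" +
        " Pattern=" + pattern +
        " LoadTimeMs=" + loadTime +
        " LoginTimeMs=" + loginTime +
        " ClientIP=" + Flow.client.ipaddr
    );
}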

XML Broker Performance:
We are one of the few platforms that can provide visibility into the performance of the XML broker. While not part of the ICA channel, it is an important part of your overall Citrix infrastructure. Slow XML brokering can cause slow launches, applications failing to paint, etc. We can also report on the STA errors we see, since we have visibility into the XML traffic between the Netscaler/Web Interface and the Secure Ticket Authority (STA).
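
If you want to watch the broker conversation yourself, one hedged approach is an HTTP trigger assigned to the XML broker(s) that reports how long each request takes. The HTTP.uri and HTTP.processingTime property names and the 8080 port are assumptions for illustration; substitute whatever your brokers actually expose and listen on.

// Hedged sketch: report XML broker response times on an assumed port.
var XML_PORT = 8080;                         // placeholder XML Service port
if (Flow.server.port === XML_PORT) {
    RemoteSyslog.info(
        " eh_event=XML_BROKER" +
        " ClientIP=" + Flow.client.ipaddr +
        " BrokerIP=" + Flow.server.ipaddr +
        " URI=" + HTTP.uri +                 // assumed property
        " ProcTimeMs=" + HTTP.processingTime // assumed property
    );
}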

DNS Performance:
If you have applications that are not run directly on your Citrix servers and you are not using hosts files, DNS performance is extremely important. The drill-down into the DNS Performance chart will provide performance by client and by server. If you see a specific DNS server that is having issues, you may be able to escalate it to the A/D and DNS teams.

Citrix Latency Metrics Page

  • Latency by Subnet: Many network engineers will geo-spatially organize their campus/enterprise/WAN by network ID. One of the Operational Intelligence benefits of ExtraHop is that we can use triggers to logically organize the performance by subnet allowing a Citrix team to break out the performance by Subnet. If given a list of friendly names, we can actually provide a mapping of location-to-NETID. Example: 192.168.252.0 is your 3rd floor on your main campus. We can provide the actual friendly name if you want. This can be very useful in quickly identifying problem areas, especially for larger Citrix environments.
  • High Latency Users: For this metric, any user who crosses the 300ms threshold is placed into the High Latency area. The idea is that this chart should be sparse, but if you see a large amount of data here you may need to investigate further. Also, double-check the users you note here against the chart below that includes the overall user latency. You may have an instance where a user's latency was high because they wandered too far from an access point or because of overall network issues, but find when you look at the Latency by UserID chart that their overall latency was acceptable.
  • Latency by UserID: This metric is to measure the latency by user ID regardless of how good/bad it was.
  • Latency by Client IP: This is the latency by individual client IP. I think that I may change this to include the latency by VDA (XenDesktop or XenApp). This can be valuable to know if a specific set of VDA listeners are having issues.

Below is the drill down for the Latency by Subnet chart. This will allow you to see if you have an issue with a specific subnet within your organization. Example: you get a rash of calls about type-ahead delays, and the helpdesk/first responder does not put together that they are all from the same topological area. The information below will allow the Citrix engineer to quickly diagnose whether the problem is a faulty switch in an MDF or an issue with a LEC over an MPLS cloud. Below we have set the netmask to /24, but that can be changed to accommodate however you have subnetted your environment.
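
The subnet breakout itself is just string work inside the trigger: keep the first three octets of the client address for a /24 and use that as the key in whatever record you send. A minimal sketch is below; ICA.latency is an assumed property name standing in for whichever latency value the bundle's trigger actually reports, and the three-octet split is what you would change for a different mask.

// Hedged sketch: derive a /24 subnet key from the client IP.
var clientIp = "" + Flow.client.ipaddr;                  // e.g. "192.168.252.37"
var netId    = clientIp.split(".").slice(0, 3).join(".") + ".0/24";
RemoteSyslog.info(
    " eh_event=LATENCY_BY_SUBNET" +
    " Subnet=" + netId +
    " ClientIP=" + clientIp +
    " LatencyMs=" + ICA.latency                          // assumed property
);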

Citrix PVS Performance Page
I haven't had a great deal of PVS experience outside of what I have set up in my lab; in the last few years I had sort of morphed into a DevOps/Netscaler/INFOSEC role with the groups I was in. That said, because we are on the wire, we are able to see the turn timing for your PVS traffic. I won't go into the same detail here as with the previous two pages, but what you are looking at is a heat map of your PVS turn times. In general I have noticed that, other than when things are not working well, the turn timing should be in the single digits. I will practice breaking my PVS environment to see what else I can look at. I have tested this with a few customers, but their PVS environments were working fine, and no matter how many times I ask them to break them they just don't seem compelled to. I have also included client/server request transfer time as well as size to allow the Citrix team to check for anomalies.

Detecting Blue Screening Images:
One thing I came across while on a team that used PVS was that occasionally something would go wrong and systems would blue screen. Most reboots happen overnight, so it can be difficult to come into work each day and know right away which servers did not come up the previous night without some sort of manual task. Below is the use of a DHCP trigger that counts the number of requests. In the Sesame Street spirit, when you look below you can sort of see that "one of these kids is doin' his own thing…". Note that most of the PVS-driven systems have a couple of DHCP requests while 192.168.1.156 has 30. Why? Because I created a MAC address record on the PVS server and PXE booted a XenServer image from my VMware Workstation, which produced a blue screen.

In those environments with hundreds or even thousands of servers, the ability to see blue screening systems (or systems that are perpetually rebooting) can be very valuable. The information below is from our new Universal Payload Analysis event that the TME team wrote for us to gather DHCP statistics.
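
The Universal Payload Analysis trigger itself is not reproduced here, but the counting idea is easy to approximate: emit one syslog record per DHCP request and let Splunk (or the ExtraHop console) do the arithmetic. In the sketch below the UDP payload event and the simple port test are assumptions standing in for the actual UPA trigger.

// Hedged sketch: one record per DHCP request; count them downstream.
var DHCP_SERVER_PORT = 67;
if (Flow.server.port === DHCP_SERVER_PORT) {
    RemoteSyslog.info(
        " eh_event=DHCP_REQUEST" +
        " ClientIP=" + Flow.client.ipaddr +
        " ServerIP=" + Flow.server.ipaddr
    );
}

A stats count by ClientIP in Splunk (or the equivalent view in the ExtraHop console) then makes the one host with 30 requests stand out immediately.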

Other things we can add
While my lab is pretty small and I don't have apps like PeopleSoft, Oracle Financials, or basic client-server applications, ExtraHop has the ability to map out your HTTP/database/tiered applications for you and make sure that you see the performance of all of your enterprise applications as they pertain to Citrix. By adding the HTTP request/response events we can see ALL URIs and their performance, as well as any 500-series errors. We can see slow stored procedures for database calls made from the Citrix servers. You can also classify SOAP/REST-based calls by placing those applications in their own App Container, positioning your team to report on the performance of downstream applications that can be a sore spot for Citrix teams when they are held accountable for performance because the front end was published on Citrix.
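
As a hedged illustration of what adding the HTTP events buys you, the sketch below logs every HTTP response with its URI, status, and processing time and flags the 500-series errors. HTTP.uri, HTTP.statusCode, and HTTP.processingTime are assumed property names; check them against the trigger API on your firmware before using something like this.

// Hedged sketch: report URI performance and flag 500-series errors.
var status = HTTP.statusCode;                  // assumed property
RemoteSyslog.info(
    " eh_event=HTTP_RESPONSE" +
    " ClientIP=" + Flow.client.ipaddr +
    " ServerIP=" + Flow.server.ipaddr +
    " URI=" + HTTP.uri +                       // assumed property
    " Status=" + status +
    " ProcTimeMs=" + HTTP.processingTime +     // assumed property
    " ServerError=" + (status >= 500 ? "yes" : "no")
);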

Empowering First Responders
When you have a small lab of 3-4 VDAs and a limited amount of demo data it is a little tough to get too detailed here, but I wanted to show ways that we can empower some of your first responders. One of the challenges with Citrix support is that it can get very expensive very fast. Normally, calls go from a helpdesk to level 2 and, if it is still an issue, to a level 3 engineer. With Citrix, calls have a habit of going from the helpdesk directly to the Citrix engineer, and this can make supporting it very expensive. If we can position first responders to resolve the issue during the initial call, it can be a considerable savings to the organization. For this we have created a "Citrix Operations" App Container and, while it is somewhat limited here, the idea is that we can put specific information in it that could make supporting remote Citrix users much easier.

Below you see a list of metrics; the App Container allows a service desk resource to click on open/closed sessions and get the following information.


From this page the first responder can see if there are any database errors and, if we want, we can add the HTTP 500 errors, all in real time. The engine updates every 30 seconds, so by the time the user calls in, the first responder will be able to go in and see that user's Citrix experience. This can be custom retrofitted for your specific applications/environment.

So why are you writing this?
I would love to talk to a larger Citrix deployment to find out what your current pain points are and what additional custom information can be netted with triggers so that we can make that data readily available. Like I said, I have supported Citrix for about 16 years, but the last few I was more on the cloud side. If I could get an idea of what blind spots you have in your environment, I am positive we can find a way to provide visibility into them with wire data analytics. If you are a PowerShell guru, you will love the JavaScript trigger interface that we have, and you should have no trouble editing the triggers to suit your own environment or even writing your own. Wire data analytics provides relevant data to more than just the Citrix team; at one of my previous employers we had the DBAs, the Citrix/Server team, and INFOSEC all leveraging wire data from ExtraHop.

So can I haz it?
In fact, yes, you can have it for free if you like. We offer a Discovery Edition that has ICA enabled; it will keep up to 24 hours of data but will do everything that you see in this post. You have a few options: if you don't want to go down the sales cycle you can download the Discovery Edition (you will get a call from inside sales to pick your brain), or you can get an evaluation of either a physical or virtual appliance, but for that I have to get your area rep involved (you will not regret working with us, and we will not make the process painful). Because we gather information with no agents, we can sit back passively and observe your environment with zero impact on your servers. If you want to set this up, just shoot me an email at johnsmith@extrahop.com and I will provide you the Citrix Alpha Bundle after you have downloaded the Discovery Edition or requested an evaluation.

The Discovery Edition can be downloaded from the link below; the entire process was pretty painless when I went through it a few weeks ago. After signing up we can get you access to the documentation you will need and a forum account.

http://www.extrahop.com/products/appliances/extrahop-discovery-edition/

Thanks for reading and please let me know if you would like to contribute.

John Smith

Finally a book on Edgesight (Yes…it is a little late but…)

I have been a bit busy, but I was asked to review a book written by a Citrix architect named Vaqar Hasan out of Toronto, Canada.  While Edgesight has been given an End of Life date in 2016, it still provides the most detailed metrics for XenApp environments out there today.  Also, most folks are still on XenApp 6.5 and will remain so for at least two years, so I believe the content is still very relevant to today's XenApp environments as Citrix shops ease into XenDesktop 7 App Edition.  Also, according to the Citrix website there will be extended support for XenApp 6.5 until 2020, which is a very possible scenario for some folks.

While it does not offer many ad hoc queries like the EdgesightUnderTheHood.com site does, it offers some very nice details on laying the groundwork for your Edgesight implementation, with detailed instructions on setting up alerts, emails, and environmental considerations such as anti-virus exclusions.

While I wish this book had been available in 2008, there has not been a great deal of literature around Edgesight, and he is only asking ten dollars for the Kindle edition.  I think it is important that we support the few IT experts in this industry who take the effort to publish good content, and the cost is low enough that you can put it on your corporate p-card.

So if you don’t mind, maybe you can support this guy for helping out.

Thanks!

John


http://www.amazon.com/Instant-EdgeSight-XenApp-Vaqar-Hasan-ebook/dp/B00ESX19VO/ref=sr_1_1?ie=UTF8&qid=1386594285&sr=8-1&keywords=Edgesight

Moving non-Edgesight data over to new Blog at http://wiredata.net

As you have noticed, I have been writing quite a bit about Extrahop/Splunk.  Most of you are aware that Edgesight is NOT dead and lives on, having added real-time monitoring to go with an archival strategy.  I plan to continue writing about it on this site, but I am moving the Extrahop information over to http://wiredata.net.

I will write some about Netscalers, INFOSEC and HDX Insight as well as Extrahop and Splunk.

Please head over and have a look; I have recorded one video and plan to add at least ten more.  Follow it @wiredata.

Let the finger pointing BEGIN! (..and end) Canary Herding With Extrahop FLOW_TURN

In IT, dependable metrics become our canary in a coal mine; we use them as indicators of issues. Miners with a dead canary don't know exactly how much they have been exposed or exactly how bad it is, but they know they need to get the hell out of there. In the world of operational intelligence, we can use metrics as indicators of which parts of the proverbial shaft are having issues and need to be adjusted, sealed off, or abandoned altogether. To continue in the same vein as my previous post, I wanted to discuss the benefits of the FLOW_TURN trigger when you are trying to baseline the performance of specific servers and transactions and you don't want to drill into layer 7 data so much as check the layer 4 performance between two hosts. Extrahop's FLOW_TURN trigger will allow you to take the next step in layer 4 flow metrics by looking at the following:

  • Request Transfer: Time it took for the client to make the request
  • Response Transfer: Time it took for the server to respond
  • Request Bytes: Size of the request
  • Response Bytes: Size of the response
  • Transaction Process Time: The time it took for the transaction to complete. You may have a fast network with acceptable request and response times but still note serious tprocess times, which could indicate the kind of server delay we discussed in some of the Edgesight posts.

In today’s Virtualized environment you may see things like:

  • A four-port NIC with a 4x1GB port channel plugged into a 133MHz bus
  • 20 or more VMs sharing a 1GB port channel
  • Backups and volume mirroring going on over the production network

These are things that may manifest themselves as slowness of the application or slow responses from your clients or servers. What the FLOW_TURN metric gives you is the ability to see the basic transport speeds of the client and server as well as the process time of the transaction. Setting up a trigger to harvest this data will lay the foundation for quality historical data on the baseline performance of specific servers during specific times of the day. The trigger itself is a few lines of code:

log("ProcTime " + Turn.tprocess)
RemoteSyslog.info(
    " eh_event=FLOW_TURN" +
    " ClientIP=" + Flow.client.ipaddr +
    " ServerIP=" + Flow.server.ipaddr +
    " ServerPort=" + Flow.server.port +
    " ServerName=" + Flow.server.device.dnsNames[0] +
    " TurnReqXfer=" + Turn.reqXfer +
    " TurnRespXfer=" + Turn.rspXfer +
    " tprocess=" + Turn.tprocess
)

Then you assign the trigger to the specific servers that you want to monitor (if you are using the Developer Edition of Extrahop in a home lab, just assign it to all) and you will start collecting metrics. In my case I am using Splunk to collect Extrahop metrics, as Splunk is the standard for big data archiving and fast queries. Below you see the results of the following query:
sourcetype="Syslog" FLOW_TURN | stats count(_time) as Total_Sessions avg(tprocess) avg(TurnReqXfer) avg(TurnRespXfer) by ClientIP ServerIP ServerPort

This will produce a grid view like the one below.
Note that in this grid you see the client/server and port as well as the total sessions, followed by the transfer metrics for both the client and server as well as the process time. The important things to note here:

  • If you have a really long avg(tprocess) time, double-check the number of sessions. A single instance of an avg(tprocess) of 30000ms is not as big a deal as 60,000 instances of an 800ms avg(tprocess). Also keep in mind that database servers performing data warehousing may have high avg(tprocess) metrics because they are building reports. (If you would rather filter this noise on the appliance, see the sketch after this list.)
  • Note the ClientIP subnets, as you may have an issue with an MDF where clients from a specific floor or across a frame relay connection are experiencing high avg(TurnReqXfer) numbers.
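
If the raw FLOW_TURN volume is a concern, you can make that cut on the appliance instead of in Splunk by wrapping the syslog call from the trigger above in a threshold test. This is only a variant sketch of the same trigger; the 800ms value is the sort of number discussed above, not a recommendation.

// Hedged variant of the FLOW_TURN trigger above: only forward turns whose
// process time crosses a threshold, to keep the Splunk volume down.
var TPROCESS_THRESHOLD_MS = 800;       // pick a value that matters to you
if (Turn.tprocess >= TPROCESS_THRESHOLD_MS) {
    RemoteSyslog.info(
        " eh_event=FLOW_TURN" +
        " ClientIP=" + Flow.client.ipaddr +
        " ServerIP=" + Flow.server.ipaddr +
        " ServerPort=" + Flow.server.port +
        " TurnReqXfer=" + Turn.reqXfer +
        " TurnRespXfer=" + Turn.rspXfer +
        " tprocess=" + Turn.tprocess
    );
}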

If you want to see the average request transfer time by subnet, use the following query (I only have one subnet in my lab, so I only had one result):

sourcetype="Syslog" FLOW_TURN | rex field=_raw "ClientIP=(?<subnet>\d+\.\d+\.\d+\.)" | stats avg(TurnReqXfer) by subnet

If you want to track a server's transaction process time you would use the query below:

sourcetype="Syslog" FLOW_TURN ServerIP="192.168.1.61" | timechart avg(tprocess) span=1m

Note in the graph below you can see the transaction process time for the server 192.168.1.61 throughout the day. This can give you a baseline so that you know when you are out of whack (or when the canary has died).

Conclusion:
I am not trying to say that what we do for a living is as simple as swinging a hammer in a coal mine, but for the longest time this type of wire data has not been readily accessible unless you had a "tools team" working full time on a seven-figure investment in a mega APM product.  This took me less than 15 minutes to set up, and I was able to quickly get a holistic view of the performance of my servers as well as start to build baselines so that I know when the servers are out of the norm. I have had my fill of APM products that need an entourage to deploy or a dozen drill-downs to answer a simple question: is my server out of whack?

In the absence of data, people fill the gaps with whatever they want and take creative license to speculate. The systems team will blame the code and the network, the network team will blame the server and the code, and the developers will blame the systems admins and the network team. With this simple canary-herding tool, I can now fill that gap with actual data.

If the client or server transfer times are slow we can ask the network team to look into it; if the tprocess time is slow it could be a SQL table indexing issue or a server resource issue. If nothing else, you have initial metrics to start with and a way to monitor whether they go over a certain threshold. When integrated with a big-data platform like Splunk, you have long-term baseline data to reference.

A lot of the time there is no question that the canary has died; it's just a matter of figuring out which canary it was.

Extrahop now has a Discovery Edition that you can download and test for free (including FLOW_TICK and FLOW_TURN triggers).

http://www.extrahop.com/discovery/

Thanks for reading!!!

John M. Smith


Go with the Flow! Extrahop’s FLOW_TICK feature

I was test driving the new 3.10 firmware of Extrahop and I noticed a new feature that I had not seen before (it may have been there in 3.9 and I just missed it). There is a new trigger called FLOW_TICK that basically monitors connectivity between two devices at layer 4, allowing you to see the response times between two devices regardless of the L7 protocol. This can be very valuable if you just want to see if there is a network-related issue in the communication between two nodes. Say you have an HL7 interface or a SQL Server that an application connects to: you are now able to capture flows between those two devices, or even look at the round trip time of tiered applications from the client, to the web farm, to the back-end database. When you integrate it with Splunk you get an excellent table or chart of the conversation between the nodes.

The Trigger:
The first step is to set up a trigger and select the "FLOW_TICK" event.

Then click on the Editor and enter the following text (you can copy/paste it and it should appear as in the graphic below):

log("RTT " + Flow.roundTripTime)
RemoteSyslog.info(
    " eh_event=FLOW_TICK" +
    " ClientIP=" + Flow.client.ipaddr +
    " ServerIP=" + Flow.server.ipaddr +
    " ServerPort=" + Flow.server.port +
    " ServerName=" + Flow.server.device.dnsNames[0] +
    " RTT=" + Flow.roundTripTime
)

Integration with Splunk:
So if you have your integration with Splunk set up, you can start consulting your Splunk interface to see the performance of your layer 4 conversations using the following Text:
sourcetype="Syslog" FLOW_TICK | stats count(_time) as TotalSessions avg(RTT) by ClientIP ServerIP ServerPort

This should give you a table like the one below (note you have the client/server, the port, and the total number of sessions as well as the round trip time).

If you want to narrow your search down you can simply put a filter into the first part of your Splunk query. For example, if I wanted to look at just SQL traffic I would type the following:
sourcetype="Syslog" FLOW_TICK 1433 | stats count(_time) as TotalSessions avg(RTT) by ClientIP ServerIP ServerPort

By adding the 1433 (or whatever port you want to filter on) you restrict the results to just that port. You can also enter the IP address you wish to filter on.
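
You can also make that cut on the appliance rather than in the search by wrapping the FLOW_TICK syslog call in a port test, so only the traffic you care about ever reaches Splunk. A sketch of that variant is below, with 1433 standing in for whichever port you want to watch.

// Hedged variant of the FLOW_TICK trigger above: only report flows to a
// port of interest (1433 here as a stand-in for SQL Server).
var WATCHED_PORT = 1433;
if (Flow.server.port === WATCHED_PORT) {
    RemoteSyslog.info(
        " eh_event=FLOW_TICK" +
        " ClientIP=" + Flow.client.ipaddr +
        " ServerIP=" + Flow.server.ipaddr +
        " ServerPort=" + Flow.server.port +
        " RTT=" + Flow.roundTripTime
    );
}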

INFOSEC Advantage:
Perhaps an even better function of the FLOW_TICK event is the ability to monitor egress points within your network. One of my soapbox issues in INFOSEC is the fact that practitioners beat their chests about what incoming packets they block, but until recently the few that got in could take whatever the hell they wanted and leave unmolested. Even a mall security guard knows that nothing is actually stolen until it leaves the building. If a system is infected with malware, you have the ability, when you integrate with Splunk and the Google Maps add-on, to see outgoing connections over odd ports. If you see a client on your server segment (not your workstation segment) making 6,000 connections to a server in China over port 8016, maybe that is something you should look into.

When you integrate with the Splunk Google Maps add-on you can use the following search:
sourcetype="Syslog" FLOW_TICK | rex field=_raw "ServerIP=(?<IP>.[^:]+)\sServerPort" | rex field=_raw "ServerIP=(?<NetID>\b\d{1,3}\.\d{1,3}\.\d{1,3})" | geoip IP | stats avg(RTT) by ClientIP IP ServerPort IP_city IP_region_name IP_country_name

This will yield the following table (note that you can see a number of connections leaving the network for China and New Zealand; the Chinese connections I made on purpose for this lab, and the New Zealand connections are NTP connections embedded in XenServer).

If you suspected you were infected with Malware and you wanted to see which subnets were infected you would use the following Splunk Query:
sourcetype="Syslog" FLOW_TICK %MalwareDestinationAddress% | rex field=_raw "ServerIP=(?<IP>.[^:]+)\sServerPort" | rex field=_raw "ClientIP=(?<NetID>\b\d{1,3}\.\d{1,3}\.\d{1,3})" | geoip IP | stats count(_time) by NetID

Geospatial representation:
Even better, if you want to do some big-time geospatial analysis with Extrahop and Splunk, you can use the Google Maps application by entering the following query into Splunk:
sourcetype="Syslog" FLOW_TICK | rex field=_raw "ServerIP=(?<IP>.[^:]+)\sServerPort" | rex field=_raw "ClientIP=(?<NetID>\b\d{1,3}\.\d{1,3}\.\d{1,3})" | geoip IP | stats avg(RTT) by ClientIP NetID IP ServerPort IP_city IP_region_name IP_country_name | geoip IP

Conclusion:
I apologize for the regex on the ServerIP field; for some reason I wasn't getting consistent results with my data. You should be able to geocode the ServerIP field without any issues. As you can see, FLOW_TICK gives you the ability to monitor the layer 4 communications between any two hosts, and when you integrate it with Splunk you get some outstanding reporting. You could actually look at the average round trip time to a specific SQL Server or web server by subnet. This could quickly allow you to diagnose issues in the MDF or determine whether you have a problem on the actual server. From an INFOSEC standpoint, this is fantastic; your INFOSEC team would love to get this kind of data on a daily basis. Previously, I used a custom Edgesight query to deliver a report that I would look over every morning to see if anything looked inconsistent. If you see an IP making a 3389 connection to an IP on FIOS or COMCAST, then you know they are RDPing home. More importantly, the idea that an INFOSEC team is going to be able to be responsible for everyone's security is absurd. We, as sysadmins and shared services folks, need to take responsibility for our own security. Periodically validating egress is a great way to find out quickly if malware is running amok on your network.

Thanks for reading

John M. Smith

Where wire data meets machine data

So what exactly IS wire data? We have all heard a lot about machine data, but most folks do not know what wire data is or how it can both augment your existing operational intelligence endeavor and provide better metrics than traditional APM solutions. Extrahop makes the claim that they are an agentless solution. They are not unique in the claim, but I believe they are pretty unique in the technology. It comes down to a case of trolling and polling. Example: a scripted SNMP process is "polling" a server to see if there are any retransmissions. Conversely, Extrahop is trolling for data as a passive network monitor and sees the retransmissions as they occur on the wire. Polling is great as long as the condition you are worried about is happening at the time you poll. It is the difference between asking "Are you having retransmissions?" (SNMP polling) and saying "I see you are having a problem with retransmissions." Both are agentless, but there is a profound difference in terms of the value each solution delivers.

Where an agent-driven solution will provide insight into CPU, disk, and memory, wire data will give you the performance metrics of the actual layer 7 applications. It will tell you what your ICA latency is as measured on the wire, it will tell you which SQL statements are running slow and which ones are not, and it will tell you which DNS records are failing. The key thing to understand is that Extrahop works as a surveillance tool; it is not running on a specific server asking WMI fields what their current values are. This is profoundly different from what we have seen in traditional tools over the last 10-12 years.

When Machine data meets Wire Data:
I am now over 9 months into my Extrahop deployment and we have recently started a POC with Splunk. The first task I performed was to integrate Extrahop wire data into the Splunk big data back end. All I can say is that it has been like yin and yang. I am extremely pleased with how the two products integrate, and the fusion of wire data with machine data will give my organization a level of visibility that it has never had before. This is, in my opinion, the last piece of the Operational Intelligence puzzle.

In this post I want to talk about three areas where we have been able to see profound improvement in our environment and some of the ways we have leveraged Splunk and Extrahop to accomplish this.

How does data get from Extrahop to Splunk?
Extrahop has a technology called triggers. Basically, there is a mammoth amount of data flowing through your Extrahop appliances (up to 20GB per second), and triggers let us tap into that data as it flows by and send it to Splunk via syslog. This allows me to tap into CIFS, SQL, ICA, MQSeries, HTTP, MySQL, NFS, and Oracle flows, among other layer 7 flows, and send information from those flows (such as statement, client IP, server, and process time for SQL) to Splunk, where I can take advantage of its parsing and big data business intelligence. This takes data right off the wire and puts it directly into Splunk, just like any Unix-based system or Cisco device that is set to use syslog. What I like about Extrahop is the ability to discriminate between what you send to Splunk and what you don't.

Extrahop/Splunk Integration: SQL Server Queries

Grabbing SQL Queries and reporting on their performance:
One of the most profound capabilities we have noted since we started integrating Splunk and Extrahop is the ability to create a flow and then cherry-pick metrics from it. Below you will see a pair of Extrahop triggers (the drivers for Splunk integration): the first trigger builds the flow by taking the DB.statement and the DB.procedure fields (pre-parsed on the wire) and creating a flow that you can then tap into when you send your syslog message in the next trigger.

The stmt (var stmt) variable refers to the flow that we just created above; we instantiate this flow, pull from it key metrics such as statement and procedure, couple them with DB.tprocess, and thereby tie in the process time of specific SQL statements.

At the bottom you see the RemoteSyslog.info command that sends the data to Splunk (or KIWI with SQL, or what we call “skunk”).

Note below, I am NOT logging the database name, but that is a trigger option in Extrahop if you have more than one database that uses similar table names. Also note the condition if (DB.tprocess >= 0): I am basically grabbing every database process. This measurement is in milliseconds, so if you only wanted to check database queries that took longer than one second it would read if (DB.tprocess >= 1000).
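
Since the trigger screenshots are not reproduced here, the sketch below reconstructs the idea from the description above: a request-side trigger stashes the statement and procedure, and a response-side trigger picks them up and ships them with the process time. The DB_REQUEST/DB_RESPONSE event names and Flow.store as the hand-off between the two are assumptions for illustration; DB.statement, DB.procedure, and DB.tprocess are the fields discussed above.

// Hedged sketch, trigger 1 (request side): remember what was asked.
Flow.store.stmt = DB.statement;
Flow.store.proc = DB.procedure;

// Hedged sketch, trigger 2 (response side): report how long it took.
var stmt = Flow.store.stmt;
var proc = Flow.store.proc;
if (DB.tprocess >= 0) {                 // >= 1000 would limit this to slow queries
    RemoteSyslog.info(
        " eh_event=DB_RESPONSE" +
        " ClientIP=" + Flow.client.ipaddr +
        " ServerIP=" + Flow.server.ipaddr +
        " Statement=" + stmt +
        " Procedure=" + proc +
        " tprocess=" + DB.tprocess
    );
}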


For myself, I assign both of these triggers to my Citrix XenApp servers and they report on the database transactions that occur in my Citrix environment. Obviously, you can apply these triggers to your web services and individual clients as well as to the database servers themselves. In my case, I already had a device group for the XenApp servers.

This translates into the metrics you see below, where Splunk automatically parses the data for me (YES!) and I am ready to start drilling into it to find problem queries, tables, and databases.

Below you see how easy it is (well, I recommend the O'Reilly "Regular Expressions" book) to parse your wire data to provide the performance of specific queries. As you can see below, this allows you to see the performance of specific queries and get an understanding of how specific tables (and their corresponding indexes) are performing. The information you see in the graphic below can be delivered to Splunk in real time, and you can get this kind of insight without running SQL Profiler. If users are logging into the application with Windows credentials, you will have the user ID as well.

Also, you don't have to know regex every time; you can save the query below as a macro and never have to type the regex again. You can also make that rex field a static column. I am NOT a regex guru; I managed to get every field parsed with a book and Google.

This now allows you to report on average process time by:

  • Database Server
  • User ID
  • Database Table (if you know a little regex)
  • Database
  • Client Subnet
  • Client IP Address
  • Individual Stored Procedure

Basically, once your data is indexed by Splunk's big data back end, you have "baseball stats", as in it is crazy what you can report on (example: who hit the most home runs from the left side of the plate in an outdoor stadium during the month of July). You can get every bit as granular as that in your reporting, and even more.

Extrahop/Splunk Integration: DNS Errors

Few issues are as maddening and infuriating as DNS resolution issues. A Windows client can pine for resolution for as long as five seconds. This can create some serious hourglass time for your end users and impact the performance of tiered applications. An errant mistake in a .conf file mapping to an incorrect host can be an absolute needle in a haystack. With the Splunk integration (Extrahop's own console does a great job of this as well), you can capture the DNS lookup failures as they happen in real time, taking them off the wire and into your Splunk big data platform. Below you see the raw data as it happens. (I literally went to a DOS prompt and started typing NSLOOKUP on random made-up names; the great irony being that in this age of domain squatting, 1/3 of them actually came back!) As my mentor and brother James "Jim" Smith once told me, "if you have issues that are sometimes there and sometimes not there, it's probably DNS", or "If DNS is not absolutely pristine, funny things happen", or my all-time favorite quote from my brother Jim: "Put that GOD DAMN PTR record in or I will kick your phucking ass!" Needless to say, my brother Jim is rather fond of the DNS failure record keeping of Extrahop.

Below you see a very simple trigger that essentially logs the client IP, the DNS server IP, and the DNS query that was attempted; the condition is set so that it fires only in the event of an error.
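
The trigger amounts to only a few lines. In this hedged sketch the DNS.qname and DNS.error properties are assumptions for illustration; the point is simply that the syslog call fires only when the lookup fails.

// Hedged sketch of the DNS failure trigger described above.
if (DNS.error) {                        // assumed property: set on failures
    RemoteSyslog.info(
        " eh_event=DNS_ERROR" +
        " ClientIP=" + Flow.client.ipaddr +
        " DNSServerIP=" + Flow.server.ipaddr +
        " Query=" + DNS.qname +         // assumed property
        " Error=" + DNS.error
    );
}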

Below is the resultant raw data.


As with the SQL data above, we get more parsing goodness because we are integrating the data into Splunk. Note the server cycling through the domain DNS suffix, thus doubling the number of failures.

So within the same vein as "baseball stats", you can report on DNS lookup failures by DNS query (as you see above, the records that most often fail to resolve), but you also have the ability to report on the following:

  • DNS Failures by Client IP and by DNS Query (Which server has the misconfigured conf file)
  • DNS Failures by DNS Server
  • DNS Failures by Subnet (bad DHCP setting?)

Proper DNS pruning and maintenance takes time; before Extrahop I cannot think of how I would have monitored DNS failures outside of Wireshark (a great tool, but without much big data or business intelligence behind it). The ability to keep track of DNS failures will go a very long way toward providing the information needed to keep the DNS records tight. This will translate into faster logon times (especially if SRV lookups are failing) and better overall client-server and nth-tier application performance.

Extrahop/Splunk Integration: Citrix Launch Times

One of the more common complaints Citrix admins hear is about slow launch times. There are a number of variables that Extrahop can help you measure, but for this section we will simply cover how to keep track of your launch times.

Below you see a basic trigger that will keep track of the load time and login time. I track both of these metrics because often, if the login time is 80-90% of the overall load time, you likely need to take a look at group policies or possibly loopback processing. This can give you an idea of where to start. If you have a low loginTime metric but a high loadTime metric, it could be something network/DNS related. You create this trigger and assign it to all XenApp servers.
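
The trigger screenshot is not included here, so the hedged sketch below stands in for it: log the load and login time for every launch so that the 80-90% comparison can be done in Splunk. ICA.loadTime, ICA.loginTime, and ICA.clientMachine are assumed property names (and the username trick mentioned below is not shown); substitute whatever your firmware exposes.

// Hedged sketch of the launch-time trigger described above.
RemoteSyslog.info(
    " eh_event=ICA_LAUNCH" +
    " ClientIP=" + Flow.client.ipaddr +
    " ServerIP=" + Flow.server.ipaddr +
    " ClientName=" + ICA.clientMachine +   // assumed property
    " LoadTimeMs=" + ICA.loadTime +        // assumed property
    " LoginTimeMs=" + ICA.loginTime        // assumed property
);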


The raw data: below you see the raw data. I am not getting a username yet (there is a trick to that I will cover later), but you can see the Client Name, Client IP, and Server IP, and I would have my server name if my DNS was in order (luckily my brother Jim isn't here).

As with the previous two examples, you now can start to generate metrics on application launch performance.


And once again with the baseball stats theme you can get the following metrics once your Extrahop data is integrated into Splunk:

  • Average Launch time by UserName
  • Average Launch time by Client Name
  • Average Launch time by Client IP
  • Average Launch time by Customer Subnet (using some regex)
  • Average Launch time by Application (as you see above)
  • Average Launch time by XenApp Server (pinpoint a problem XenApp server)

Conclusion:
While I did not show the Extrahop console in this post (the Extrahop console is quite good), I wanted to show how you can integrate wire data into your Splunk platform and make it available alongside your machine data. While you are not going to see CPU, disk IOPS, or memory utilization on the wire, you will see some extremely telling and valuable data. I believe that all systems and system-related issues will manifest themselves on the wire at some point. An overloaded SQL Server will start giving you slower ProcessTime metrics. A flapping switch in an MDF at a remote site might start showing slower launch times in Citrix, and a misconfigured .conf file may cause lookup failures for the tiered applications that you run. These are all metrics that may not manifest themselves with agent-driven tools, but you can note them on the wire. Think of Extrahop as your "Pit Boss" engaging in performance surveillance of your overall systems.

I have found the integration between Splunk and Extrahop gives me a level of visibility that I have never had in my career. This is the perfect merger of two fantastic data sources.

In the future I hope to cover integration for HTTP, CIFS as well as discuss the security benefits of sending wire data to Splunk.

Thanks for reading.

John M. Smith

Useful Regex statements

Getting the Subnet ID (24 bit Mask)
This is the REX statement that will let you query for a 24-bit subnet ID.  This will let you check Citrix latency and load/launch times by subnet within a customer's network.

rex field=client_ip "(?<net_id>\d+\.\d+\.\d+)" | stats avg(Load) count(load_time) by net_id

Getting performance on SQL INSERT statements:

The REGEX below will allow you to get the actual table that an insert command is updating.  This could be useful to see if SQL write actions are not performing as expected.  This REX will parse out the table name so that you can check the performance of specific tables.

rex field=_raw "Statement=insert INTO\s(?<Table>.[^\s]+)"

Getting the Table Name within a SELECT statement:
The REX statement below allows you to get the table that a select statement is running against.  Mapping the performance by Table name may give you an indication that you need to re-index.
| rex field=_raw "[fF][rR][oO][mM]\s(?<Table>.[^\s]+)"

From Buzzword to Buziness: A conversation about Operational Intelligence with 3 Pioneers at Citrix Synergy

While a family commitment is keeping me from moderating my Geek Speak session, you get a "Moderator Upgrade" in the form of Splunk's Brandon Shell.

As we have discussed, there are changes to Citrix’s Edgesight Product as well as some great innovations by smaller APM players that are well worth looking into.

This Geek Speak features Jesse Rothstein, CEO of recent "Best of Interop" awardee Extrahop; Jason Conger, an architect for Splunk; and Dana Gutride, an architect for Citrix's next generation of monitoring.

In this session we will question the dominant paradigm surrounding monitoring of the user experience and engage in a discussion about what it means to leverage Operational Intelligence.