Many people will remember the nostalgic old days of analog photography: you went into a shop and bought a film with at most 36 exposures, so each shot actually had to count. You put the film into the camera, took your pictures, and sent them to a lab, then waited days for the prints to come back, hoping they turned out fine and the unique moment was captured well. This process is quite similar to how users work with packet-trace file analysis today:
› Something very important needs to be analyzed (a service may have failed).
› A trace file is created (hopefully not too late).
› Once the capture is finished, Wireshark is used to manually dive deep into the analysis, hoping to understand what went wrong.
› No automation: trace file by trace file, each examined by a single user, each consuming time, often hours.
› In the meantime, packets continue to flow, unseen and unanalyzed.
Typical capture-based monitoring solutions have a very limited scope of analysis and filtering. They may (or may not) provide a few packet-analysis experts, typically TCP- and HTTP-based, and the user still has to do the work of understanding the experts and the packets, drilling into details and building an "intelligent" perspective on the vast amount of data.
If a user's issue falls outside the small scope of the experts provided by such a capture appliance, they have to dive deep by themselves and judge whether what the packets show is good or bad. E.g., if the problem lies somewhere in an SAP application, a database, or SSL (protocols that are widely used but not covered by standard experts), generic capture-based monitoring no longer assists the user with analytics. They can only define a trace filter, create trace files for the relevant time window, and analyze them on their own: the same manual process as decades ago.
Wireshark users then face the next problem. They must analyze trace files of limited size, covering a short period, file by file, filter by filter, often packet by packet, to understand the issue and draw a conclusion about its cause. Usually they must know in advance what they are looking for.
There is no threshold management and no real "analysis profile" that tells them whether what they see is good or bad. The trace data is stored in trace files only, so repeating an analysis of a trace file requires the same process and effort to extract the information it contains. Forwarding trace files to other team members requires transferring the file, often via email, so that another user can open it and run through the same analysis process again.
This sounds extremely time- and resource-consuming. And it is. During an incident, multiple users are busy digging into small or large trace files, few or many, to understand the problem, its cause, and its conditions, often for many hours or even days.
In critical incidents, important business services may be down while everyone waits for the conclusion of the network analyst, who is working through the trace files and their huge amount of data.
iPAC-TM aims at a digital workflow for trace files: thousands of trace files, fully automatically analyzed, visualized, and correlated into a single pane of management data, saving precious operational time and significantly reducing the duration of incidents in IT departments. Capture-file-based monitoring built on Wireshark analytics is the dream and the vision of many technical users:
“What if I could use Wireshark™ for monitoring, leveraging its features and its vast set of display filters: for monitoring, for long-term statistics, multi-hour graphs, forwarded events or traps, and as an information source for global enterprise fault management?”
The vision would be: take the data directly from the packet stream, or from files created by various trace tools such as tcpdump, Tshark, or a capture appliance; analyze it on the fly using a deep expert profile; write the data and expert information into a database; visualize it in dashboards; compare the data against thresholds; and create red/green condition states so it is quickly clear which metrics have been hit. Users then no longer need to run manually through the traces; they just watch the dashboards and check whether incidents occur. This vision is the basis for iPAC-TM, Trace Monitoring.
iPAC Trace Management Main Features
iPAC Trace Monitor is a software component of the INS genius SLIC suite that enables the user to view the informational content of thousands of trace files in a dashboard: aggregated, analyzed, compared against thresholds, prioritized, and grouped into three main categories: application, connection, and network.
Thousands of Files, Constant Analysis
A user can import a large number of files, thousands and more, and obtain continuous statistics over time, just like any other monitoring solution.
User-Friendly Dashboards
With just a glance, a user can understand:
› Are there any issues in my trace files?
› Which category do they belong to (network, application, connection)?
› Which exact metric caused them?
› Which threshold was crossed?
› Direct access to the trace file is available.
Drilldowns and category-specific views (here: the application view) allow deep insights, continuously over time, for days, hours, or seconds.
For deep analysis, iPAC Trace Monitor utilizes Wireshark display filters, which can do far more than most other analysis solutions. Thousands of protocol-dependent prefilters are defined, and analysis experts exist for a wide range of protocols. Because every Wireshark display filter can be used in iPAC-TM, users can turn pretty much every byte in the packet flow into a monitoring and incident condition.
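To illustrate the idea of using Wireshark display filters as monitoring conditions, the sketch below builds a command line for tshark (the console version of Wireshark) that extracts fields from packets matching a display filter. The helper function is hypothetical and not part of iPAC-TM; the tshark flags (`-r`, `-Y`, `-T fields`, `-e`) and the `dns.time` field are standard Wireshark/tshark features.

```python
import shlex

def build_tshark_command(trace_file, display_filter, fields):
    """Build a tshark invocation that reads a trace file (-r), applies a
    Wireshark display filter (-Y), and prints the requested fields (-e).
    Hypothetical helper for illustration, not an iPAC-TM API."""
    cmd = ["tshark", "-r", trace_file, "-Y", display_filter, "-T", "fields"]
    for f in fields:
        cmd += ["-e", f]
    return cmd

# Example condition: DNS responses that took longer than 0.5 seconds.
cmd = build_tshark_command("capture.pcap", "dns.time > 0.5",
                           ["dns.qry.name", "dns.time"])
print(shlex.join(cmd))
```

Any display-filter expression, down to individual protocol fields, can be substituted here, which is exactly what makes filter-driven trace monitoring so flexible.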
iPAC-TM Under the Hood
Analysis profiles are pre-configured filter-and-threshold definitions that are applied to a trace analysis. A profile is a configuration of defined filters and symptoms; pretty much every byte in a packet, or a Wireshark expert analysis (like tcp_out_of_order), can be configured as a symptom. Files are analyzed in depth according to these profiles, and symptoms are generated based on the analysis. E.g., the use of TLS 1.2 in SSL can be defined as a condition, so any occurrence of non-TLS-1.2 packets is detected and flagged as a symptom. The same can be done with performance metrics such as LDAP.time, DNS.time, DNS.responseCodes, or HTTP return codes, which can be included in a specific profile, with symptoms created whenever a threshold is exceeded.
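The threshold-to-symptom logic described above can be sketched as follows. The profile structure and function names are invented for illustration (iPAC-TM's actual profile format is not documented here); only the metric names follow Wireshark field conventions.

```python
# Hypothetical profile: per-metric rules, either an upper bound ("max")
# or a set of values that should never occur ("forbidden").
profile = {
    "dns.time": {"max": 0.5},                        # seconds
    "http.response.code": {"forbidden": {500, 502, 503}},
}

def evaluate(metrics):
    """Compare observed metric values against the profile and return
    the list of generated symptoms (illustrative sketch only)."""
    symptoms = []
    for name, value in metrics.items():
        rule = profile.get(name)
        if rule is None:
            continue  # metric not covered by this profile
        if "max" in rule and value > rule["max"]:
            symptoms.append(f"{name} exceeded threshold ({value} > {rule['max']})")
        if "forbidden" in rule and value in rule["forbidden"]:
            symptoms.append(f"{name} hit forbidden value {value}")
    return symptoms

# A slow DNS answer and an HTTP 502 would each produce one symptom.
print(evaluate({"dns.time": 0.9, "http.response.code": 502}))
```

The TLS example from the text would fit the same pattern: a "forbidden" rule on the TLS record-version field flags every non-TLS-1.2 packet as a symptom.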
A user has a specific request to analyze deeply and continuously, such as an application, a security behavior, a server, or a service, and defines this request as an analysis scenario.
A typical analysis workflow starts with a definition of a scenario:
Object - what needs to be analyzed.
Conditions - the filter conditions.
Data source - the trace source (files, live Wireshark/tcpdump, capture appliance).
Options - how the data is treated for analysis (e.g. de-duplication, merging).
Saving location - the scenario-specific directory.
Intelligence - which analysis profile should be used.
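The scenario definition above can be modeled as a simple data structure. All field and value names here are hypothetical, chosen only to mirror the six steps of the list; they do not represent iPAC-TM's real configuration schema.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """Illustrative model of an analysis scenario (invented schema)."""
    name: str                 # object: what needs to be analyzed
    display_filter: str       # conditions: Wireshark display-filter syntax
    source: str               # data source: files, live capture, appliance
    options: list = field(default_factory=list)  # e.g. de-duplication, merging
    save_dir: str = ""        # scenario-specific saving location
    profile: str = "default"  # intelligence: which analysis profile to apply

# Example: a web-shop scenario using deep SSL/HTTP analysis.
webshop = Scenario(
    name="web shop",
    display_filter="tls || http",
    source="capture-appliance",
    options=["deduplicate", "merge"],
    save_dir="/traces/webshop",
    profile="ssl-http-deep",
)
print(webshop.name, webshop.profile)
```

Because each scenario is a self-contained definition, many of them (web shop, SAP, DNS) can run side by side, which is exactly the parallelism described in the next paragraph of the text.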
Such a scenario gives the user the ability to start a long-term monitoring process at the deepest level, focused on this scenario, and to create scenario-related incidents and events. Many scenarios can be defined and processed in parallel: one scenario can watch the web shop using deep SSL and HTTP metrics, another can monitor SAP services, and a third the DNS replies, all at the same time.
Trace-based events can be correlated with other existing management data, whether it comes from networks, systems, log files, or security devices, in a single dashboard such as SLIC Correlation Insight. Together they provide the significant data that can feed a service-management platform with the intelligence to build complete cause-and-effect chains for complex IT services.
Download the iPAC Trace Manager fact sheet