Re: [tracecompass-dev] lttng-ust custom latency views?

Hi Martin,

 

See below for my answers.

 

BR,

Bernd

 

From: tracecompass-dev-bounces@xxxxxxxxxxx <tracecompass-dev-bounces@xxxxxxxxxxx> On Behalf Of Martin Oberhuber
Sent: July-12-18 5:14 AM
To: tracecompass developer discussions <tracecompass-dev@xxxxxxxxxxx>
Subject: Re: [tracecompass-dev] lttng-ust custom latency views?

 

Hello Genevieve and Jonathan,

 

Many thanks for the quick answer! Using the XML analyses, I've been able to cook up exactly what I wanted in a couple of hours.

I started from Trace Compass's irq_analysis_lttng.xml, and replacing "lttng2.kernel.tracetype" with "lttng2.ust.tracetype" quickly got me there.

 

In the <fsm> state machines, I found it a little surprising that no statistics were produced when I reduced the original 3 states (initial, in_irq, after_irq) to just 2 (initial, in_irq, then back to initial). But the extra state doesn't hurt, so I can certainly live with that.
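For reference, here is roughly what my working pattern looks like. This is only a sketch against the tmfxml schema I copied from irq_analysis_lttng.xml, and the provider/event names (myapp:a1, myapp:a2) and ids are placeholders for my real tracepoints:

```xml
<!-- Sketch only: adapted from irq_analysis_lttng.xml; event names and ids
     (myapp:a1, myapp:a2, a1_a2) are placeholders, not my real tracepoints. -->
<pattern version="0" id="my.ust.latency">
  <head>
    <traceType id="org.eclipse.linuxtools.lttng2.ust.tracetype"/>
    <label value="UST a1-a2 latency"/>
  </head>
  <patternHandler>
    <fsm id="a1_a2" initial="initial">
      <state id="initial">
        <transition event="myapp:a1" target="in_flight"/>
      </state>
      <state id="in_flight">
        <!-- keeping a distinct final state; transitioning straight back to
             "initial" here is what produced no statistics for me -->
        <transition event="myapp:a2" target="done"/>
      </state>
      <final id="done"/>
    </fsm>
  </patternHandler>
</pattern>
```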

 

In terms of usability, it would be nice if the "Duration vs Count" histogram allowed filtering by a specific state machine (for example, only the timer tick IRQ), just like the "Time vs Duration" view already does with its checkboxes.

Are there any plans in this area, or is there a workaround that could be applied for the filtering?

[Bernd Hufmann] Yes, being able to select what to show in the histogram would be a great addition. There is no plan to add this right now. As a workaround, using separate XML analyses (one per state machine) might achieve the same effect.

 

Regarding the error message from lttng-analyses, it turned out to be my mistake: since I had both a kernel trace and a ust trace, I had pointed lttng-analyses at the common base directory of both, which is invalid. Once I pointed it at the kernel/ folder, it worked as expected. Perhaps there is room for improvement here? Also, the preconfigured lttng-analyses descriptors still don't work for me in Trace Compass, but when I manually add "lttng-cputop-mi" as an external analysis, it works as expected.

[Bernd Hufmann] I'm not sure what you mean by "preconfigured lttng-analyses descriptors still don't work". What do you observe in Trace Compass when it fails?

 

Thanks again for your help !

Martin

 

On 11 Jul 2018, at 18:42, Genevieve Bastien <gbastien+lttng@xxxxxxxxxxxx> wrote:

 

Hi Martin,

 

On the Trace Compass side, as Jonathan mentioned, the XML analyses [1] might be a good starting point. The segment analyses in particular include min/max, latency, and statistics analyses with some built-in views. If you decide to go this way, here's also a blog post by a Trace Compass user about his usage of the XML analyses [2].

 

But doing Python analyses is a reasonable approach too. I'm not sure what the error may be, though. Have you tried doing the same somewhere other than the ARM board? Or with a more recent version of lttng-analyses?

 

[1] http://archive.eclipse.org/tracecompass/doc/stable/org.eclipse.tracecompass.doc.user/Data-driven-analysis.html
[2] http://tooslowexception.com/analyzing-runtime-coreclr-events-from-linux-trace-compass/


Cheers,
Geneviève

On 2018-07-11 09:37 AM, Martin Oberhuber wrote:

Hello TraceCompass experts,

 

I have instrumented my embedded application with lttng-ust, adding 4 tracepoints without arguments (a1, a2, a3, a4). My app is running on a Linux 3.14.70 ARM board, and I'm using lttng 2.6.0 "Gaia". I'm uploading the data to an Ubuntu host. On the host, I can run babeltrace to get the raw tracing data.

 

Now I'd like to get some statistics about the time my app takes between a1->a2, a2->a3, etc. I'd like to see the min/max, navigate to the min/max, graph latency over time and see a histogram of latency distribution. Similar to the System Call Latency Views in Trace Compass.
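For example, since babeltrace can already read the raw events, I imagine something like the following sketch could pair each a1 with the next a2 and compute basic statistics. It uses the babeltrace 1.x Python bindings, and the myapp: provider prefix and helper names are just placeholders for my real ones:

```python
# Sketch: pair each start event with the next end event and collect latencies.
# Event names ("myapp:a1", "myapp:a2") are placeholders for my real tracepoints.

def compute_latencies(events, start="myapp:a1", end="myapp:a2"):
    """events: iterable of (name, timestamp_ns) pairs.
    Returns a list of (start_timestamp_ns, delta_ns)."""
    latencies = []
    t_start = None
    for name, ts in events:
        if name == start:
            t_start = ts  # remember the most recent start event
        elif name == end and t_start is not None:
            latencies.append((t_start, ts - t_start))
            t_start = None
    return latencies


def stats(latencies):
    """Min/max/mean over the latency deltas."""
    deltas = [d for _, d in latencies]
    return {"count": len(deltas), "min": min(deltas), "max": max(deltas),
            "mean": sum(deltas) / len(deltas)}


def main(trace_path):
    # babeltrace 1.x Python bindings (the lttng 2.6-era API)
    from babeltrace import TraceCollection
    tc = TraceCollection()
    tc.add_trace(trace_path, "ctf")
    lats = compute_latencies((e.name, e.timestamp) for e in tc.events)
    print(stats(lats))
```

Calling main("/path/to/ust-trace") would then print the count, min, max, and mean a1→a2 latency in nanoseconds; finding and navigating to the min/max occurrence would use the start timestamps kept in the result.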

 

I was hoping that I could write such an analysis in Python, deriving from lttng-analyses. So I first tried it on a kernel trace from my ARM board (lttng-analyses-record, version v0.6.1). But after uploading the kernel trace to the host, running a simple command like "lttng-cputop-mi mytrace" thinks for a while and then prints

{"error-message": "Error: Cannot run analysis: 'State' object has no attribute 'tracer_version'"}

 

My questions:

  • Is writing an external analysis for my lttng-ust trace in Python a reasonable approach? Or is there a simpler or better way to get the latency statistics I want?
  • Any clue what I could do to get lttng-analyses working in my environment?
  • How could I move forward with getting the latency statistics?

Thanks a lot for any hints!

Martin




_______________________________________________
tracecompass-dev mailing list
tracecompass-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/tracecompass-dev

 


 

