
Re: [tracecompass-dev] Scalability

Hi Doug

 

The protocol is on GitHub. See link [1] for a human-readable format, and [2] for the source repository. Right now, it's a Swagger definition for REST. The protocol is for visualization of trace content, such as tables, time graphs (Gantt), XY charts, etc. That means the raw traces have already been read and certain information has been extracted, analyzed and prepared for visualization on a trace server.
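
For illustration, a rough sketch of what a client request against such a REST endpoint could look like. The server address, endpoint path and JSON field names below are invented for the example and are not taken from the actual swagger definition in [2]:

    import requests  # generic HTTP client, any would do

    TRACE_SERVER = "http://localhost:8080"   # hypothetical trace server address
    EXPERIMENT = "my-experiment-id"          # hypothetical trace/experiment id

    # Ask the server for a pre-computed XY series of one analysis output.
    # The path and query parameters are placeholders, not the real TSP.
    resp = requests.get(
        f"{TRACE_SERVER}/experiments/{EXPERIMENT}/outputs/cpu-usage/xy",
        params={"start": 0, "end": 1_000_000_000, "nb_points": 1000},
    )
    resp.raise_for_status()

    for series in resp.json().get("series", []):
        print(series.get("name"), len(series.get("values", [])))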

 

CTF is more for storing the trace content, the raw input for analysis and visualization.

 

We are at a preliminary stage. It's a good time to jump in and shape the protocol. We are looking forward to any feedback. :)

 

BR,

Bernd

 

[1] https://theia-ide.github.io/trace-server-protocol/

[2] https://github.com/theia-ide/trace-server-protocol

 

From: tracecompass-dev-bounces@xxxxxxxxxxx <tracecompass-dev-bounces@xxxxxxxxxxx> On Behalf Of Doug Schaefer
Sent: March-12-19 1:23 PM
To: tracecompass-dev@xxxxxxxxxxx
Subject: Re: [tracecompass-dev] Scalability

 

Thanks Bernd. Is there a GitHub repo or similar where I can track the progress of this?

 

I just wonder what such a protocol would look like. Given CTF has a very flexible structure, I would imagine most analyzers and visualizers would be pretty custom to the metadata in the traces. Both the server and client would need to be extensible, which means the protocol between them needs to be uber extensible, or just a simple JSON-RPC one.
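
For what it's worth, one way to picture the "uber extensible" option is a self-describing payload where the server advertises whatever columns the trace metadata defines and the client renders them without compile-time knowledge. A purely hypothetical example, shown here as a Python dict:

    # Nothing below comes from the actual TSP draft; it only illustrates a
    # client discovering the table schema at runtime instead of hard-coding it.
    generic_table_response = {
        "columns": [
            {"id": 0, "name": "timestamp", "type": "number"},
            {"id": 1, "name": "event",     "type": "string"},
            {"id": 2, "name": "payload",   "type": "string"},
        ],
        "rows": [
            {"values": [1552400000000, "irq_handler_entry", "irq=5"]},
            {"values": [1552400000042, "irq_handler_exit",  "irq=5"]},
        ],
    }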

 

As an aside, this has also raised my interest in a proper browser widget for Eclipse. It would be a lot easier if all the IDEs could host the same client 😉.

 

Anyway, as I mentioned, I'm just starting out on this journey. One thing is for certain: given all the sources for traces, especially with the heterogeneous systems I am seeing, we should be able to build up a strong ecosystem here if we have a compelling platform.

 

Cheers,

Doug.

 

On Tue, 2019-03-12 at 15:57 +0000, Bernd Hufmann wrote:

Hi Doug

 

Great to see your interest in this area.

 

>> BTW, my focus on libbabeltrace is to allow for a full range of tooling for our users using the language of their choice. The python binding is particularly interesting. As we move forward into the new world of IDEs, I can see a node.js binding being interesting as well. And it may even make sense to use it in Java tooling like TraceCompass using JNA. That's the vision at least, but first, baby steps .

 

For the integration with Trace Compass in Eclipse, we chose to implement a CTF parser in Java to avoid going through JNI/JNA to libbabeltrace. This was for performance reasons: we had implemented such an integration at the beginning, when Trace Compass supported LTTng version 0.x (prior to CTF), and the performance over the JNI/JNA interface was not sufficient.

 

For the integration in next-generation IDEs, we are working on defining a Trace Server Protocol (TSP) as an interface between a client and a server for exchanging trace data for visualization purposes. It's a similar idea to the LSP (Language Server Protocol) or the DAP (Debug Adapter Protocol). The TSP is at a higher level than what a trace reading interface, like the CTF trace reader in Java or your suggested JNA interface over libbabeltrace, would provide.

 

We have some promising results for a TypeScript front-end, based on the next-generation IDE Theia, that uses the TSP to talk to a trace server with Trace Compass core features as the backend. We can already visualize various Trace Compass views. It's still preliminary, but we could demo it if you're interested.

 

BR

Bernd

 

 

From: tracecompass-dev-bounces@xxxxxxxxxxx <tracecompass-dev-bounces@xxxxxxxxxxx> On Behalf Of Doug Schaefer
Sent: March-12-19 11:23 AM
To: tracecompass-dev@xxxxxxxxxxx; jonathan.rajotte-julien@xxxxxxxxxxxx
Cc: lttng-dev@xxxxxxxxxxxxxxx
Subject: Re: [tracecompass-dev] Scalability

 

It should be easy to generate a random one with the class I mentioned below and fill it with 16 million events. BTW, two of the int fields are 5 and 10 bits respectively, but I'm not sure that matters (or at least it shouldn't).
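
For reference, here is what such a random-trace generator could look like using the babeltrace 1.x CTFWriter Python bindings rather than the class mentioned below. This is written from memory, so class and attribute names may be slightly off, and the sequence field from the real event class is left out for brevity:

    import os
    import random
    from babeltrace import CTFWriter  # babeltrace 1.x writer bindings

    trace_dir = "/tmp/synthetic-trace"
    os.makedirs(trace_dir, exist_ok=True)

    writer = CTFWriter.Writer(trace_dir)
    clock = CTFWriter.Clock("monotonic")
    writer.add_clock(clock)

    stream_class = CTFWriter.StreamClass("stream")
    stream_class.clock = clock

    # One event class with four unsigned int fields, two of them 5 and 10 bits wide.
    event_class = CTFWriter.EventClass("my_event")
    for name, size in (("a", 5), ("b", 10), ("c", 32), ("d", 32)):
        decl = CTFWriter.IntegerFieldDeclaration(size)
        decl.signed = False
        event_class.add_field(decl, name)
    stream_class.add_event_class(event_class)

    stream = writer.create_stream(stream_class)

    FLUSH_EVERY = 100_000  # flush in batches rather than never or per event
    for i in range(16_000_000):
        event = CTFWriter.Event(event_class)
        clock.time = i
        event.payload("a").value = random.getrandbits(5)
        event.payload("b").value = random.getrandbits(10)
        event.payload("c").value = random.getrandbits(32)
        event.payload("d").value = random.getrandbits(32)
        stream.append_event(event)
        if (i + 1) % FLUSH_EVERY == 0:
            stream.flush()  # closes the current packet

    stream.flush()
    writer.flush_metadata()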

 

I'll also take a look at the babeltrace code and see what I can see.

 

BTW, my focus on libbabeltrace is to allow for a full range of tooling for our users using the language of their choice. The python binding is particularly interesting. As we move forward into the new world of IDEs, I can see a node.js binding being interesting as well. And it may even make sense to use it in Java tooling like TraceCompass using JNA. That's the vision at least, but first, baby steps .
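
As a concrete example of the Python angle, reading a CTF trace back with the babeltrace 1.x bindings is only a few lines (the trace path below is a placeholder):

    import babeltrace

    # A collection of one or more traces; "ctf" selects the CTF format.
    col = babeltrace.TraceCollection()
    if col.add_trace("/tmp/synthetic-trace", "ctf") is None:
        raise RuntimeError("failed to open trace")

    # Iterate events lazily, in timestamp order across all streams.
    for event in col.events:
        print(event.timestamp, event.name)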

 

Doug.

 

On Tue, 2019-03-12 at 11:03 -0400, Jonathan Rajotte-Julien wrote:

Hi Doug,
 
On Tue, Mar 12, 2019 at 02:56:26PM +0000, Doug Schaefer wrote:
Thanks, Matthew. I ended up flushing every 100K events and that seemed OK.
 
My biggest worry is on the read side. Babeltrace blew up on that file. It's a pretty simple trace (for now) with a single event class with 4 ints and a sequence of 32-bit unsigned ints which is usually only 2 elements long.
 
This is not what I expect. Still, either jgalar or eep might have more insight on
this. CCing the lttng-dev mailing list.
 
Is there any way to share a similar trace with us? Either with a generator, or we can
provide a link for you to upload it. The current limit on bugs.lttng.org is a
bit too small for such a trace.
 
 
Aside from that, I am very pleased with how easy CTF is to work with. Looking forward to doing more.
 
Doug.
 
On Tue, 2019-03-12 at 14:41 +0000, Matthew Khouzam wrote:
 
Hi Doug,
 
 
Great to hear you're coming to a standard! I don't know if Trace Compass will scale properly, as I don't know what the trace configuration is.
 
 
I have one of our traces which has 16 million events; the size of each event is around 32 bytes, giving me a 540 MB file. My first attempt at simply writing out the CTF events without a flush ran out of virtual memory. I then flushed after every event, which made each event take 32K. So I found a middle ground, and the resulting stream file is close to the same size.
 
 
 
My suggestion is to have 1 MB packets. This makes seeking very efficient. If each event is 32 bytes, basically flush every 25k events or so.
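
The flush interval just follows from the target packet size; a quick back-of-the-envelope check:

    PACKET_SIZE = 1 * 1024 * 1024   # ~1 MB packets, as suggested above
    EVENT_SIZE = 32                 # approximate size of one event, in bytes

    events_per_packet = PACKET_SIZE // EVENT_SIZE
    print(events_per_packet)        # 32768, i.e. flush roughly every 25-30k events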
 
 
Please keep us posted!
 
 
Matthew.
 
________________________________
From: tracecompass-dev-bounces@xxxxxxxxxxx <tracecompass-dev-bounces@xxxxxxxxxxx> on behalf of Doug Schaefer <dschaefer@xxxxxxxxxxxxxx>
Sent: Tuesday, March 12, 2019 10:30:15 AM
To: tracecompass-dev@xxxxxxxxxxx
Subject: [tracecompass-dev] Scalability
 
Hey gang,
 
We're finally starting to look at converting our custom traces into CTF so we can leverage tools like TraceCompass and, of course, contribute to it. One thing I quickly ran into is a scalability issue I'm seeing with libbabeltrace.
 
I have one of our traces which has 16 million events; the size of each event is around 32 bytes, giving me a 540 MB file. My first attempt at simply writing out the CTF events without a flush ran out of virtual memory. I then flushed after every event, which made each event take 32K. So I found a middle ground, and the resulting stream file is close to the same size.
 
But when I used babeltrace to print it out, I ran out of virtual memory. I then hand-coded a reader, and simply adding the trace to the context caused the memory issue. It really looks like libbabeltrace (version 1.5 from the Ubuntu 18.04 distro) tries to inflate the events into its internal representation for the entire trace. I need to do more investigation to confirm that.
 
So my question for this list: would TraceCompass do better? Does it have its own parsing libraries?
 
Thanks,
Doug
 
_______________________________________________
tracecompass-dev mailing list
tracecompass-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://www.eclipse.org/mailman/listinfo/tracecompass-dev
 
 
 
