Re: [tracecompass-dev] Question about scaling of data driven analysis

Hi Robbert,

State System scalability is a known issue for analyses with many attributes.
There is a patch addressing state system scalability available at , if you are working from source.
Could you tell us whether it works for you?
If you are using the RCP, would you mind sending over your traces and analysis so I can see whether it makes a difference?


Dear Trace Compass developers,

I am trying to apply Trace Compass to custom, text-based traces, and to that end I have created a parser and an XML analysis.
My question concerns how the analysis scales to larger traces.
For a large trace (200 MB) my analysis takes about 2 hours to complete, whereas for small traces (10 MB) it finishes in seconds.
What can I do to improve the performance of my XML analysis?
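For context, those numbers suggest strongly super-linear scaling. A back-of-the-envelope check of the implied exponent, assuming (hypothetically, since the mail only says "seconds") that the 10 MB trace takes about 5 s:

```java
// If runtime grows roughly as size^k, then k = log(timeRatio) / log(sizeRatio).
public class ScalingEstimate {
    public static void main(String[] args) {
        double sizeRatio = 200.0 / 10.0;        // 200 MB vs 10 MB
        double timeRatio = (2 * 3600) / 5.0;    // ~2 h vs an assumed ~5 s
        double k = Math.log(timeRatio) / Math.log(sizeRatio);
        System.out.println(Math.round(k * 100) / 100.0); // prints 2.43
    }
}
```

A value around 2.4 means the runtime is growing worse than quadratically with trace size, so the bottleneck is unlikely to be raw parsing throughput.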

I suspect the issue is caused by the size of my state system structure, which is about 4 levels deep but may have more than 1,000 entries per level. E.g.:

|- trace
|  |- thingid1
|  |  |- subthingid1
|  |  |  |- somevalue
|  ...
|  |- thingid1000
|  |  |- subthingid1
|  ...
|  |  |- subthingid1000
|  |  |  |- someOtherValue
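To make the size of that tree concrete, here is a rough attribute count it implies. The 1,000-way fan-outs are the figures from the description above; one value attribute per subthing is an assumption:

```java
// Rough attribute count for the tree sketched above:
// 1 trace node, ~1000 "thingid" children, each with ~1000 "subthingid"
// children, each holding one value attribute (assumed).
public class AttributeCount {
    public static void main(String[] args) {
        long things = 1_000;
        long subThingsPerThing = 1_000;
        long subThings = things * subThingsPerThing;   // 1,000,000 subthing nodes
        long valueAttrs = subThings;                   // one value leaf each (assumed)
        long totalAttrs = 1 + things + subThings + valueAttrs;
        System.out.println(totalAttrs); // prints 2001001
    }
}
```

On the order of two million attributes, each with its own interval history, which would be consistent with interval lookups dominating the profile.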

I have noticed that the getRelevantInterval method in org.eclipse.tracecompass.internal.statesystem.core.backend.historytree.HTNode takes up quite a lot of time.
I also found this bug, which discusses packetizing CTF traces.
Is there a similar way to handle custom (text-based) traces?

I would greatly appreciate any pointers you can give me.

Kind regards,

