[tracecompass-dev] Question about scaling of data driven analysis

Dear Trace Compass developers,

I am trying to apply Trace Compass to custom, text-based traces and to this end have created a parser and an XML analysis.
My question concerns how the analysis scales to larger traces.
For a large trace (200 MB) the analysis takes about 2 hours to complete, whereas for small traces (10 MB) it finishes in seconds.
What can I do to improve the performance of my XML analysis?

I suspect the issue is caused by the size of my state system structure, which is about 4 levels deep, but each level may contain more than 1000 different entries. E.g.:

|- trace
|  |- thingid1
|  |  |- subthingid1
|  |  |  |- somevalue
|  ...
|  |- thingid1000
|  |  |- subthingid1
|  ...
|  |  |- subthingid1000
|  |  |  |- someOtherValue
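
For comparison, the sketch below shows how an equivalent attribute tree could be filled in through the state system API. This is not my actual code (my analysis is purely XML-driven), the names are placeholders, and the modifyAttribute overload taking a plain Object may differ between Trace Compass versions:

import org.eclipse.tracecompass.statesystem.core.ITmfStateSystemBuilder;

/**
 * Illustrative only: creates the same attribute layout as the tree above,
 * i.e. up to ~1000 "thingid" attributes, each with up to ~1000 "subthingid"
 * children that hold a value.
 */
public class ExampleStateBuilder {

    public void handleEvent(ITmfStateSystemBuilder ss, long timestamp,
            String thingId, String subThingId, long value) {
        // Creates (or looks up) the attribute path <thingid>/<subthingid>/somevalue
        int quark = ss.getQuarkAbsoluteAndAdd(thingId, subThingId, "somevalue");
        // Records the new state value at the event's timestamp
        ss.modifyAttribute(timestamp, value, quark);
    }
}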


I have noticed that the getRelevantInterval method in org.eclipse.tracecompass.internal.statesystem.core.backend.historytree.HTNode takes up quite a lot of time.
Also, I found this bug: https://bugs.eclipse.org/bugs/show_bug.cgi?id=492525, which discusses how CTF traces can be packetized.
Is there a similar way to handle custom (text-based) traces?
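
In case it clarifies where I hit that method: as far as I can tell, plain single-state queries like the sketch below end up in HTNode#getRelevantInterval while the history tree is traversed. The attribute names are placeholders, and getValue() may be getStateValue() depending on the Trace Compass version:

import org.eclipse.tracecompass.statesystem.core.ITmfStateSystem;
import org.eclipse.tracecompass.statesystem.core.exceptions.AttributeNotFoundException;
import org.eclipse.tracecompass.statesystem.core.exceptions.StateSystemDisposedException;
import org.eclipse.tracecompass.statesystem.core.interval.ITmfStateInterval;

/**
 * Illustrative only: reads one value at time t from the finished state system.
 */
public class ExampleStateQuery {

    public Object valueAt(ITmfStateSystem ss, long t)
            throws AttributeNotFoundException, StateSystemDisposedException {
        // Placeholder attribute path matching the structure shown earlier
        int quark = ss.getQuarkAbsolute("thingid1", "subthingid1", "somevalue");
        // The backend walks the history tree to find the interval containing t
        ITmfStateInterval interval = ss.querySingleState(t, quark);
        return interval.getValue();
    }
}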

I would greatly appreciate any pointers you can give me.


Kind regards,

Robbert 

