Re: [tracecompass-dev] Java VM out of memory with TraceCompass reading generic CTF

Yes, please post a bug with the trace (anonymized if needed)
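For anyone landing here with the same symptom: the failure reported below is a *native* memory allocation (mmap) failure, not a plain Java heap `OutOfMemoryError`, so raising the Eclipse heap ceiling in eclipse.ini only helps if the machine actually has free memory to give. The usual knob, shown with illustrative values that are not taken from this thread, is the `-Xmx` line in the `-vmargs` section of eclipse.ini:

```
-vmargs
-Xms512m
-Xmx4g
```

If the process is already bumping against what the OS can hand out, a *larger* `-Xmx` can make this particular error worse, not better.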

On 17-04-10 08:52 AM, Genevieve Bastien wrote:
> Hi Rocky,
> Be assured, you are _not_ pushing Trace Compass past its intended use,
> not at all. That is exactly what the tool is intended for: analysing any
> kind of trace.
> That said, new use cases like this one can expose unforeseen
> bottlenecks, and some analyses are known not to scale (the segment
> store ones, like the System Call Latency analysis, or some XML
> analyses). Do you have any of those open? Which views are open in
> your UI that might cause this OOME? Try closing all views (except
> maybe the statistics) and see if the error still occurs, then
> reopen them one by one to see whether a particular one is problematic.
> As for the number of streams, I recall people tracing on a Xeon Phi
> with 256 cores (so presumably 256 streams) and it worked well, so the
> stream count alone should not be the problem.
> You can also open a bug on bugzilla and attach the trace, then we can
> investigate what may be the problem with this specific trace.
> Cheers,
> Geneviève
> On 2017-04-09 02:01 AM, Rocky Dunlap wrote:
>> While reading a generic CTF trace with Trace Compass, my Java VM
>> runs out of memory, causing Eclipse to shut down.  This is Neon.3
>> with the latest release of Trace Compass.
>> The error:
>> Java HotSpot(TM) 64-Bit Server VM warning: INFO:
>> os::commit_memory(0x000000078f900000, 1048576, 0) failed;
>> error='Cannot allocate memory' (errno=12)
>> #
>> # There is insufficient memory for the Java Runtime Environment to
>> continue.
>> # Native memory allocation (mmap) failed to map 1048576 bytes for
>> committing reserved memory.
>> # An error report file with more information is saved as:
>> # /home/rocky/eclipse/cupid-dev/eclipse/hs_err_pid14944.log
>> I am wondering whether I am pushing Trace Compass further than
>> intended, or whether something else is going on.  The trace itself
>> is custom, generated with the Babeltrace C library, and it reads
>> fine from the command line using Babeltrace.
>> When I import it into Trace Compass, the import appears to succeed,
>> but after clicking around the trace event log for a few seconds, the
>> JVM fails completely with the error above and exits.
>> The trace itself is CTF with 50 streams. The stream files are between
>> 4MB and 13MB each, and the total size of all streams is about 350MB.
>> I assume the 350MB of binary event data blows up substantially when
>> converted into Java objects, but even so this does not seem
>> terribly bad.  Still, I wanted to check whether it is clear right
>> off the bat that I am pushing Trace Compass past its intended use,
>> or whether numbers like these should be reasonable for the tool.
>> It may be that the underlying issue is the large number of streams
>> rather than the size of the data, but that is a complete guess with
>> little foundation. There do appear to be a lot of Java threads in
>> the hs_err_pidXXX.log file, so I was wondering whether the thread
>> count is getting out of hand with the number of streams.
>> If it would help, I am happy to provide both the full CTF trace
>> itself to see if you can reproduce and also the hs_err_pidXXX.log file.
>> Thanks,
>> Rocky
>> _______________________________________________
>> tracecompass-dev mailing list
>> tracecompass-dev@xxxxxxxxxxx
>> To change your delivery options, retrieve your password, or unsubscribe from this list, visit
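Rocky's guess about thread count can be made concrete with a back-of-the-envelope sketch. It is suggestive (though not proof) that the failed mmap in the error report is exactly 1048576 bytes, i.e. 1 MiB, the default per-thread stack size on 64-bit HotSpot Linux. Everything below is an assumption for illustration, not something stated in the thread: the 1 MiB `-Xss` default, and the hypothetical figure of two threads per stream.

```java
// Back-of-the-envelope: native memory reserved for thread stacks alone.
// Assumes the 64-bit HotSpot Linux default of -Xss1m (1 MiB per thread);
// check the real value with: java -XX:+PrintFlagsFinal -version | grep ThreadStackSize
public class StackEstimate {
    public static void main(String[] args) {
        int streams = 50;            // streams in Rocky's trace
        int threadsPerStream = 2;    // hypothetical: e.g. one reader + one worker
        long stackBytes = 1L << 20;  // 1 MiB default stack per thread (assumption)

        long totalMiB = (long) streams * threadsPerStream * stackBytes / (1L << 20);
        System.out.println("thread stacks alone: ~" + totalMiB + " MiB");
    }
}
```

Thread stacks are allocated outside the Java heap, so this pressure is invisible to `-Xmx` and shows up precisely as an `os::commit_memory` / mmap failure like the one above.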
