Eclipse slows down while/after profiling code [message #26663]
Fri, 19 August 2005 21:14
Originally posted by: becker.kestrel.edu
I installed TPTP 4.0.0 in my Eclipse distribution (3.1, J2SE 1.5.0_04,
Windows XP Pro on a P4 3.0 with 1G mem, RAserver version 4.0.0), and the
performance of the Eclipse platform was affected so badly that I cannot
run anything else after a profiling session finishes. During profiling
the IDE is completely unusable, since it takes forever to refresh.
Even after the profiled execution ends, the memory consumption of the
javaw process running Eclipse keeps increasing.
After some time Eclipse runs out of memory and I need to restart it.
I am using -vmargs -Xmx512m -Xms64m when running Eclipse.
Did anybody experience the same behavior? Are there any parameter
settings I could use to avoid these problems?
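For reference, HotSpot memory flags must be written with no space between the flag and its value; a spaced form like "-Xmx 512m" is rejected by the JVM. Assuming the standard Eclipse 3.1 launcher, the invocation would look like this:

```shell
# Everything after -vmargs is passed straight through to the JVM.
# Note: no space between -Xmx/-Xms and their values.
eclipse -vmargs -Xmx512m -Xms64m
```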
Collecting complete sequence diagrams bad for profiling (Re: Eclipse slows down while/after profiling) [message #27736 is a reply to message #26827]
Tue, 23 August 2005 12:49
Oliver Schoett
Registered: July 2009
>It's likely that you are profiling a lot of data that is getting processed
>by class loaders even after the agent is terminated. I would recommend using
>proper filtering to profile only the data that you are interested in. TPTP's
>profiling agent is designed to collect and report any data that the JVM
>reports. The classic 'hello world' program usually reports about 2000
>classes (the majority of which are Java APIs) if no filtering is used. So
>you can imagine the amount of data that is collected if a moderately complex
>program is profiled without proper filtering.
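The filtering recommended above amounts to matching class names against a set of include prefixes so that JDK-internal classes are dropped before they are recorded. A minimal sketch of the idea (the class and method names here are hypothetical, not TPTP's actual API):

```java
// Hypothetical prefix-based include filter, illustrating the kind of
// filtering a profiling agent applies before recording class data.
public class PrefixFilter {
    private final String[] includes;

    public PrefixFilter(String... includes) {
        this.includes = includes;
    }

    // Accept only classes whose fully qualified name starts with an
    // include prefix; everything else (e.g. java.* APIs) is skipped.
    public boolean accept(String className) {
        for (String prefix : includes) {
            if (className.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        PrefixFilter filter = new PrefixFilter("com.example.");
        System.out.println(filter.accept("com.example.App"));   // true
        System.out.println(filter.accept("java.util.HashMap")); // false
    }
}
```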
There used to be a design problem in the Hyades 3.0.0 build that made it
nearly useless for profiling applications that run for more than a few
minutes. While the profiler was running, Eclipse's memory consumption
appeared to increase linearly until the available memory was exhausted
(512 MB would last approx. 15 minutes, and the system slowed down
unacceptably well before that).
My interpretation of this was that the Hyades profiler accumulated the
entire sequence diagram of the run (i.e., every single procedure call),
which required linearly increasing memory. Even worse, it could only
show the sequence diagram, which is unusably detailed, instead of the
accumulated call times that the Eclipse Profiler collects.
When a function is called 1000 times and each time calls another
function 1000 times, the accumulated view requires two storage locations
instead of a million, and it displays the accumulated times for these
function calls, which is almost always enough for performance analysis
(the sequence diagram displayed by Hyades 3.0.0 would show a million
calls and hence be nearly useless).
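The arithmetic above can be sketched in a few lines: an aggregating collector keeps one accumulator per caller-to-callee edge, so the 1000 x 1000 call pattern occupies two map entries instead of a million event records. This is an illustrative sketch, not TPTP's or the Eclipse Profiler's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of accumulated call-time collection: memory grows with the
// number of distinct caller->callee edges (the call graph), not with
// the number of calls (the length of the run).
public class AggregatedProfile {
    // edge "caller->callee" -> {call count, accumulated nanoseconds}
    private final Map<String, long[]> edges = new HashMap<>();

    public void record(String caller, String callee, long nanos) {
        long[] acc = edges.computeIfAbsent(caller + "->" + callee,
                                           k -> new long[2]);
        acc[0]++;          // call count
        acc[1] += nanos;   // accumulated time
    }

    public int edgeCount() {
        return edges.size();
    }

    public long calls(String caller, String callee) {
        return edges.get(caller + "->" + callee)[0];
    }

    public static void main(String[] args) {
        AggregatedProfile p = new AggregatedProfile();
        // main calls f 1000 times; each f call calls g 1000 times.
        for (int i = 0; i < 1000; i++) {
            p.record("main", "f", 5);
            for (int j = 0; j < 1000; j++) {
                p.record("f", "g", 1);
            }
        }
        // A full sequence diagram would hold 1,001,000 call events;
        // the aggregated form holds just two edges.
        System.out.println(p.edgeCount());     // 2
        System.out.println(p.calls("f", "g")); // 1000000
    }
}
```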
The Eclipse Profiler thus gets it right on two points:
1. Collecting the accumulated call times requires memory proportional
   to the complexity of the call graph rather than the length of the
   program run.
2. The accumulated times for the function calls are usually what is
   wanted and needed for performance analysis.
For Hyades release 4, a feature has been introduced that allows one to
display accumulated call times (thus presumably achieving point 2.); see
bug 51293 (https://bugs.eclipse.org/bugs/show_bug.cgi?id=51293).
However, I do not know whether point 1 has been addressed yet. Until
it becomes possible to switch not only the display but also the
collection of data to accumulated call times, Hyades will remain useless
for my work (which requires tuning batches that run 4-100 hours in
reality and still need to run 10-60 minutes in test runs).
Is there any information on whether the data collection problem has been
or will be addressed?