RE: [cdt-core-dev] Bad scalability news
You can do this with most commercial Java profilers. There are also
some open source ones, but I haven't used them, so I can't comment on
them directly. Usually you would get the application into a known
state, say by opening and closing the editor a few times. Then you can
force garbage collection (the profiler should have a button to do
this) and take a heap "bookmark" (different profilers call it
different things). From then on, instance counts should be shown as a
+/- from the bookmark. Open and close the editor again, then force
another GC. Any objects that show positive counts indicate possible
memory leaks. If you are talking about an automated way of doing it,
that may be more difficult, but some profilers do include APIs; I'm
not sure whether they will be sufficient.
I haven't done this in a while, and tools have improved quite a bit, so
there may be an easier way as well. However, this has worked for me in
the past. Hope this helps.
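For what it's worth, newer JVMs (Java 6 and later, so after the
versions discussed in this thread) expose a heap-dump hook directly
through the HotSpot diagnostic bean. A minimal sketch, assuming a
HotSpot JVM; the file names are made up, and the resulting .hprof
files would be diffed in an offline heap-analysis tool:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapSnapshots {
    // Write an .hprof file; live=true keeps only reachable objects
    // (which forces a collection first). Fails if the file exists.
    static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("before.hprof"); // baseline after warming up
        // ... open and close the editor here ...
        dump("after.hprof");  // diff the two dumps offline
    }
}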
Jeremiah
-----Original Message-----
From: Chris Songer [mailto:songer@xxxxxxxxxxxxx]
Sent: Wednesday, May 12, 2004 12:08 PM
To: cdt-core-dev@xxxxxxxxxxx
Subject: Re: [cdt-core-dev] Bad scalability news
Clever test!
It certainly explains some inexplicable behavior we have been seeing in
testing. We just did some chatting about it internally. It "should" be
easy to diagnose: do what you are doing, but call gc(), dump all the
objects allocated in the JVM, open the editor, close the editor, call
gc() again, and dump all the objects a second time. That should point
the mighty finger of blame pretty clearly.
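Something like this would at least confirm the per-cycle growth before
a full dump mechanism is in hand. A sketch only: it measures aggregate
heap bytes rather than per-object counts, and System.gc() is just a
hint to the VM:

public class HeapDelta {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // only a hint; the VM may ignore it
        long before = rt.totalMemory() - rt.freeMemory();
        // ... open and close the editor here ...
        System.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("retained bytes: " + (after - before));
    }
}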
Unfortunately no one here yet knows how to dump all the objects
allocated in the JVM. Hprof is lovely, but to nail this we really
don't want to track everything for all time; we just want to dump
twice and see what's new. Any pointers on how to do that?
Thanks!
-Chris
At 09:47 AM 5/12/2004 -0400, Thomas Fletcher wrote:
>Folks,
>
> I've been running some tests to reproduce some user scenarios we
>have been looking at (with CDT 1.2.* thus far) and have some not so
>great news.
>
>Running the attached code, which is an object contribution to an IFile
>that simply spins in a loop opening the editor for that file and then
>closing it (a sketch of such a loop appears below), I've found the
>following:
>
>Running against the .project file (or any other text file)
>-> 260 000+ iterations and still going strong
>
>Running against a simple .c file:
>-> 1 047 iterations then an out of memory error on the editor
>
>Running against a simple .c file but with System.gc() after every
>iteration:
>-> 1 050 iterations then an out of memory error on the editor
>
>Running against a simple .c file which has no header
>files #included:
>-> 1 076 iterations then an out of memory error on the editor
>
>Running against a simple .c file but with the outline
>view closed
>-> 1 099 iterations then an out of memory error on the editor
>
>These tests were run on a Windows XP system (512M RAM) in a runtime
>workbench using CDT 1.2.1 and Eclipse 2.1.2. The workbench was using
>the "stock" arguments for eclipse. Changing the memory settings (or
>the contents of the file) just ended up delaying (or speeding up) the
>crash.
>
>I'll be filing a Bugzilla entry to track this, but thought that others
>might be interested in the results and have comments on the validity of
>the tests.
>
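For reference, here is a minimal sketch of the open/close stress loop
Thomas describes. It assumes Eclipse 3.x-era APIs (the thread used
Eclipse 2.1, where the calls differ slightly) and omits the
object-contribution wiring; the class name is made up:

import org.eclipse.core.resources.IFile;
import org.eclipse.ui.IEditorPart;
import org.eclipse.ui.IWorkbenchPage;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.ide.IDE;

public class EditorLeakTest {
    // Repeatedly open and close an editor on the given file.
    public static void run(IFile file, int iterations) throws Exception {
        IWorkbenchPage page = PlatformUI.getWorkbench()
                .getActiveWorkbenchWindow().getActivePage();
        for (int i = 0; i < iterations; i++) {
            IEditorPart editor = IDE.openEditor(page, file); // open editor
            page.closeEditor(editor, false);                 // close, no save
        }
    }
}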
_______________________________________________
cdt-core-dev mailing list
cdt-core-dev@xxxxxxxxxxx
http://dev.eclipse.org/mailman/listinfo/cdt-core-dev