(no subject) [message #687301]
Sat, 28 May 2011 08:12
Eclipse User
I took a heap dump when about 50% of my model had been loaded from a file into the
CDO server, and another one shortly before 100%. I compared the two heap dumps with
the Eclipse Memory Analyzer and found two suspicious increases in instance counts
that are responsible for the memory consumption:
Class Name                                                                                | Objects #1 | Objects #2 | Shallow Heap #1 | Shallow Heap #2
--------------------------------------------------------------------------------------------------------------------------------------------------------
org.eclipse.emf.cdo.internal.common.revision.AbstractCDORevisionCache$CacheSoftReference |        642 |    +35.283 |          30.816 |      +1.693.584
org.eclipse.emf.cdo.internal.common.id.CDOIDObjectLongImpl                               |    117.549 |    +43.008 |       1.880.784 |        +688.128
--------------------------------------------------------------------------------------------------------------------------------------------------------
So there are 35.283 more instances of CacheSoftReference in the second heap dump,
accounting for 1.693.584 bytes of additional shallow heap.
Viewing the Paths To GC Roots, I found out that they are referenced in
CDORevisionCacheNonAuditing by this hash map:

private Map<CDOID, Reference<InternalCDORevision>> revisions = new HashMap<CDOID, Reference<InternalCDORevision>>();
As I understand the cache concept, the application behavior will not change if the
cache is cleared (except for performance). So, to solve the problem of growing RAM
usage, would a java.util.WeakHashMap not do a better job here? Then the cache could
be cleared, at least partially, whenever there is too little RAM.
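Just to illustrate the behavior I have in mind, here is a minimal standalone sketch
(not CDO code, and the names are made up). Note that a WeakHashMap holds its keys
weakly, so an entry goes away once nothing else strongly references the key:

import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheSketch
{
  public static void main(String[] args) throws InterruptedException
  {
    Map<Object, String> cache = new WeakHashMap<Object, String>();

    Object key = new Object();         // stands in for a CDOID
    cache.put(key, "cached revision"); // stands in for an InternalCDORevision

    System.out.println("before: " + cache.size()); // 1

    key = null;   // drop the last strong reference to the key
    System.gc();  // only a hint; collection is not guaranteed
    Thread.sleep(100);

    // Once the key has been collected, the stale entry is purged
    // the next time the map is touched.
    System.out.println("after: " + cache.size()); // typically 0
  }
}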
Regarding the increasing number of CDOIDObjectLongImpl instances: they play the key
role in the above-mentioned hash map, but they are also used as both key and value in
org.eclipse.emf.cdo.server.internal.db.mapping.horizontal.ObjectTypeCache$MemoryCache:

private Map<CDOID, CDOID> memoryCache;

... which is assigned a LinkedHashMap<CDOID, CDOID> instance.
As this is also a cache, shouldn't it be a WeakHashMap as well? OK, the
"Linked" behavior would be lost, but I do not understand how it is used in this
code anyway. As I read the documentation, a LinkedHashMap can define an iteration
order in different ways (whereas the iteration order of a plain HashMap is
essentially undefined), but I found no code where an iterator of memoryCache is
actually queried.
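For reference, the usual reason to base a cache on LinkedHashMap is LRU eviction via
access order and removeEldestEntry(); whether that is the intention in CDO here is
only my guess. A minimal sketch of that pattern (again, not taken from the CDO
sources, names are made up):

import java.util.LinkedHashMap;
import java.util.Map;

// Typical bounded LRU cache built on LinkedHashMap.
public class LRUCacheSketch<K, V> extends LinkedHashMap<K, V>
{
  private static final long serialVersionUID = 1L;

  private final int maxEntries;

  public LRUCacheSketch(int maxEntries)
  {
    super(16, 0.75f, true); // true = access order, needed for LRU behavior
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest)
  {
    // Evict the least recently used entry once the bound is exceeded.
    return size() > maxEntries;
  }
}

If MemoryCache overrides removeEldestEntry() like this, the eviction happens without
any explicit iterator, which would explain why I found no iterator usage in the code.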
To conclude, I found these two memory leaks in CDO (besides a memory leak in my own
code, but I was aware of that before; maybe I will change the responsible POJO
structure into a CDO structure, which must be stored in the DB anyway but could then
be garbage collected on the client). Fortunately, both leaks are located in cache
functionality, so they should be easy to fix by using weak references.