Eclipse Community Forums
Home » Eclipse Projects » EclipseLink » QueueableWeakCacheKey growing endlessly (the number of application entities stays constant and they are garbage collected, but EclipseLink internal objects keep growing, leading to degraded application performance and ultimately application termination)
QueueableWeakCacheKey growing endlessly [message #811455] Fri, 02 March 2012 11:10
Manpreet Singh (Junior Member, Messages: 5, Registered: December 2010)
The problem is related to the ever-growing number of
org.eclipse.persistence.internal.identitymaps.QueueableWeakCacheKey
and
org.eclipse.persistence.internal.identitymaps.QueueableWeakCacheKey$CacheKeyReference
objects.

A jvisualvm snapshot showing the memory used by the JVM is attached (index.php/fa/7376/0/), along with a JVM memory snapshot from the same time (index.php/fa/7377/0/).


The issue is that the application's memory keeps growing even though the number of application entities remains constant (data is written to the DB and updated) and the entities themselves are properly garbage collected. The EclipseLink "QueueableWeakCacheKey" objects (and a few others), however, keep increasing over time, leading to frequent GC cycles and ultimately to the application dying with an OutOfMemoryError.

All the application entities' cache types are marked as Weak.

Following are the eclipselink persistence settings:
eclipselink.jdbc.cache-statements=true
eclipselink.jdbc.bind-parameters=true
eclipselink.cache.size.default=2000
eclipselink.jdbc.batch-writing=JDBC
eclipselink.jdbc.batch-writing.size=500
eclipselink.cache.type.myapp.datamodel.Event=Weak
eclipselink.cache.type.myapp.datamodel.DeviceData=Weak
eclipselink.cache.type.myapp.datamodel.DeviceDataPK=Weak
eclipselink.cache.type.myapp.datamodel.DeviceEvents=Weak
eclipselink.cache.type.myapp.datamodel.DeviceEventsPK=Weak
eclipselink.cache.type.myapp.datamodel.DeviceProfile=Weak
eclipselink.cache.type.myapp.datamodel.DeviceProfilePK=Weak
eclipselink.cache.type.myapp.datamodel.Inventorydata=Weak
eclipselink.cache.type.myapp.datamodel.InventorydataPK=Weak
eclipselink.cache.type.myapp.datamodel.Fault=Weak
eclipselink.cache.type.myapp.datamodel.FaultPK=Weak
eclipselink.cache.type.myapp.datamodel.DeviceKey=Weak
eclipselink.cache.type.myapp.datamodel.DeviceKeyPK=Weak
eclipselink.persistence-context.reference-mode=WEAK


The myapp.datamodel.* classes are entities.

Environment details:
Database: MS SQL Server 2008
Operating system: Windows 2008 R2 (64-bit)
JVM: Java HotSpot(TM) 64-Bit Server VM (20.5-b03, mixed mode)
Java: version 1.6.0_30, vendor Sun Microsystems Inc.
JVM arguments:
-XX:+UseConcMarkSweepGC
-Xms128m
-Xmx512m

Concurrent Mark Sweep is used, but the default GC has also been tried, with the same results.

We have tried to find documentation about this without success. Can someone please help us out? We have spent a lot of time trying different combinations of GC and memory settings.

If someone can explain what could cause the linear increase in the number of org.eclipse.persistence.internal.identitymaps.QueueableWeakCacheKey objects, it would be really appreciated.


Thanks in advance!!





Re: QueueableWeakCacheKey growing endlessly [message #811672 is a reply to message #811455] Fri, 02 March 2012 17:15
Chris Delahunt (Senior Member, Messages: 1039, Registered: July 2009)
My guess is that this is a result of the way the application is using EntityManagers: holding onto them without ever clearing them. The property eclipselink.persistence-context.reference-mode=WEAK is what causes QueueableWeakCacheKey to be used; the other properties refer to the shared cache settings. This setting does not mean you can use multiple EntityManager instances and hold onto them forever without clearing them; they still have overhead, and from the problem description, you should be clearing or releasing them occasionally.

From the snapshot, there are twice as many CacheKeyReference instances as there are QueueableWeakCacheKey, so it looks like your application is reusing EntityManager contexts, but must also be doing a lot of batch reads that bring in large numbers of entities. reference-mode=WEAK creates a cache that can expand to fill available memory. Because cleanup is expensive, it uses a threshold value that it checks against the current size to determine whether it should go through and remove QueueableWeakCacheKeys whose weak references have been GC'd. This cleanup only occurs when a new object is added to the cache, and to prevent it from running repeatedly when there are no QueueableWeakCacheKeys to clean up, the threshold is increased to match the size if the size is still larger when cleanup is done. This is a problem for your application because once the threshold is increased, there is no process to decrease it. So even though the entities themselves eventually get garbage collected, the cache will still hold QueueableWeakCacheKeys for those references for the life of the EntityManager (or until it is cleared).
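To make that ratchet concrete, here is a toy model of the behavior described above (hypothetical numbers and field names; a simplified sketch, not EclipseLink's actual implementation):

```java
// Toy model of the cleanup-threshold ratchet: cleanup runs only when the key
// count exceeds the threshold, and the threshold is raised -- never lowered --
// whenever cleanup cannot shrink the map back below it.
class ThresholdRatchet {
    int threshold = 1000;   // assumed starting cleanup threshold
    int keyCount = 0;       // live + stale cache keys held by the map

    // Called once for every object added to the persistence context.
    // staleKeys is how many keys currently have GC'd referents.
    void add(int staleKeys) {
        keyCount++;
        if (keyCount > threshold) {
            keyCount -= staleKeys;        // drop keys whose referents were GC'd
            if (keyCount > threshold) {
                threshold = keyCount;     // ratchet up; nothing ever lowers it
            }
        }
    }
}
```

Because the threshold only moves up, any burst of inserts that outruns garbage collection permanently raises the number of stale keys the map will tolerate before cleaning up again.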

You will need to evaluate how you are holding onto the EntityManagers and how they are being reused, to see whether you can clear them at appropriate points, or have larger bulk transactions use a separate EntityManager that can be released when done.
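As a sketch of what that could look like (assuming a JPA/EclipseLink runtime; the class and method names here are placeholders, not an EclipseLink API):

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public class BatchWriter {
    private final EntityManagerFactory emf;

    public BatchWriter(EntityManagerFactory emf) {
        this.emf = emf;
    }

    // Write one bulk batch with a short-lived EntityManager so its
    // persistence context (and the QueueableWeakCacheKey entries it
    // accumulates) is released as soon as the batch commits.
    public void writeBatch(List<?> entities) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            for (Object entity : entities) {
                em.persist(entity);
            }
            em.getTransaction().commit();
        } finally {
            em.close(); // frees the context; the shared cache is unaffected
        }
    }
}
```

Closing (or clearing) the EntityManager per batch drops the context's identity map, while entities in the shared session cache remain available to the next EntityManager.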

Best Regards,
Chris
Re: QueueableWeakCacheKey growing endlessly [message #812063 is a reply to message #811672] Sat, 03 March 2012 06:37
Manpreet Singh
Thanks, Chris, for your prompt response!

My application holds only one instance of the EntityManager for the complete application life cycle. It is created right at the beginning via:
emf = Persistence.createEntityManagerFactory("OBDBPU", configOverrides);
em = emf.createEntityManager();

The application is an OLTP-type application; it works as follows:
1. At startup it loads all the pertinent objects into the EM cache.
2. On an input event it writes data to a table (myapp.Event), after which the reference is GC'd, because this object is only written to the table and never needed again in the application. This entity looks to be the major cause of the ever-increasing QueueableWeakCacheKeys. All other objects are constant after a certain point in the application's life.
3. On another input it updates/inserts records in multiple other tables; these objects are needed in the cache because they are mostly updated.
4. Writing to the database is carried out in batches of 500, whether inserts or updates.

I rely on the cache to enhance the performance of the system. If I cleared the EM frequently, it would defeat the purpose, I guess.
How do you suggest I go about this? Is there an API call where I can check the threshold you refer to and perhaps do a clear under certain conditions, for example when I am at 50% of the total heap?

Or I could just do an em.clear() every "n" minutes.

Please suggest.

Best Regards
Manpreet





Re: QueueableWeakCacheKey growing endlessly [message #812255 is a reply to message #812063] Sat, 03 March 2012 13:45
Manpreet Singh
I will add some more pertinent information about the application and why closing/clearing the EM is not a valid option.

During startup we load some entities into memory using JPA.
These objects are inserted into a rule engine's memory and kept there.
The rule engine processes incoming messages.
When one or more of our objects are affected by the rule engine,
the rule engine calls methods on these objects to make changes, including adding or changing cascading objects.
Sometimes these method calls can cause JPA to load some cascaded data via lazy loading.
Finally, these objects are persisted.
Once we have processed a batch of, say, 500 messages, we issue a commit,
then go back to processing the next batch of incoming messages.

If we close or clear the first instance of the EM (the one created at startup), all the objects added to the rule engine's memory will be gone (detached), and we will have to create a new EM just before starting to process a new batch of incoming messages.
Now when the rules touch one of the loaded objects, will they need to be attached to the EM? If so, how?
How will EclipseLink know about these objects? Will it remember that it had loaded them from the previous EM?
This is our issue.
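For what it's worth, standard JPA does allow detached instances to be re-associated with a new persistence context via EntityManager.merge(). A sketch under that assumption (the names here are placeholders, not part of the application described above):

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public class RuleBatchCommitter {

    // Sketch: after clearing or replacing the EntityManager, detached objects
    // held by the rule engine can be re-attached with merge(). Note that
    // merge() returns a new managed copy -- the rule engine would have to
    // work with the returned instances, or the detached originals must be
    // merged again before each commit.
    static void commitBatch(EntityManagerFactory emf, List<Object> touched) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            for (Object detached : touched) {
                em.merge(detached);   // copies state into a managed instance
            }
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }
}
```

EclipseLink identifies a merged object by its ID (and, if configured, its version field), so it does not need to remember which EntityManager originally loaded it; it just needs the detached instance's identity to line up with a row in the database.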

Thanks
Manpreet