AbstractSession.checkAndRefreshInvalidObject(..) not refreshing the passed object [message #1704012] Thu, 06 August 2015 16:22
Andreas Schmidt (Member, Messages: 33, Registered: July 2009, Location: Zurich, Switzerland)
While investigating an issue regarding stale data within optimistic locks (eventually causing an OptimisticLockException) in a setup with two nodes and JMS-synchronized caching, I started to debug into EclipseLink and, to my understanding, found the place where the problem arises, but unfortunately I do not quite understand its cause.

Please note: I looked in the forum for a similar question and did not find any; should I have missed one, please feel free to move this topic. Secondly, I am fairly new to EclipseLink, so maybe this question is a no-brainer; unfortunately it is not for me :)

In org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject the object is checked for validity with checkAndRefreshInvalidObject(). The query which is issued eventually after this.executeQuery(query) in checkAndRefreshInvalidObject() returns the correct lock code, but when the control flow returns to cloneAndRegisterObject(), the object "original" still has the old values which are incorporated into "workingClone" a few lines later.

I was a bit surprised to see that checkAndRefreshInvalidObject() has no return value and that the object "original" is not updated directly; the result of the query within it seems to be simply discarded. I further assume there should be some behind-the-scenes updating initiated by "query.refreshIdentityMapResult()" in checkAndRefreshInvalidObject(), correct?

Environment: EclipseLink 2.6.0, Oracle-DB, Two Payara nodes with a shared cache synchronized by ActiveMQ.
Additional remarks: The table in question uses SINGLE_TABLE inheritance, and the child has a column OPT_LOCK annotated with @Version.
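For context, the failure mode itself can be reduced to a toy model. The following sketch (plain Java with hypothetical names; this is not EclipseLink code) shows how two nodes that read the same OPT_LOCK-style version end up with one node failing its optimistic-lock check on commit, which JPA surfaces as an OptimisticLockException:

```java
// Toy model of optimistic locking across two nodes (all names hypothetical).
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticLockDemo {
    /** Simulated database row guard: an OPT_LOCK-style version counter. */
    static final AtomicLong dbVersion = new AtomicLong(1);

    /** Commit succeeds only if the caller's read version still matches the database. */
    static boolean commit(long readVersion) {
        return dbVersion.compareAndSet(readVersion, readVersion + 1);
    }

    public static void main(String[] args) {
        long nodeA = dbVersion.get();      // node A reads version 1
        long nodeB = dbVersion.get();      // node B reads version 1
        System.out.println("node B commit: " + commit(nodeB)); // true, version becomes 2
        System.out.println("node A commit: " + commit(nodeA)); // false: node A is stale
    }
}
```

If node A's cached copy is not refreshed after node B's commit, node A keeps retrying with the stale version and keeps failing, which matches the symptom described above.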

My questions would be:

  1. If checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) is called, where is the actual refresh happening if isConsideredInvalid(..) returns true (which is the case for the scenario being described)?
  2. Should the first parameter of checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) have been updated already after the call to checkAndRefreshInvalidObject()?
  3. If not, how are the refreshed attributes merged into the variable workingClone in org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject(..), given that after checkAndRefreshInvalidObject(..) the method keeps working with the possibly stale "original", or rather creates a "workingClone" of it?
  4. I checked the members "object" on unitOfWorkCacheKey & parentCacheKey in cloneAndRegisterObject(..) and they are both null, should the refresh of checkAndRefreshInvalidObject(..) be visible in these members instead?
  5. The overall logic of cloneAndRegisterObject(..) puzzles me a bit: Even if the results of checkAndRefreshInvalidObject(..) end up in the UOW session, the method keeps working with the old value "original" and does not retrieve it somehow out of the session afterwards. Therefore I suppose that in either UnitOfWorkImpl.populateAndRegisterObject(..) or among the interactions between ObjectLevelReadQuery and ObjectBuilder there should be some registration in the UOW session happening. Still, even if this were the case, how are these changes incorporated into workingCopy at the end of cloneAndRegisterObject(..)?
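To make questions 2 and 3 concrete, here is a standalone sketch (toy names, not EclipseLink internals) of why a refresh can succeed and still be invisible to the caller: if the refresh replaces the instance held by the cache, code that kept a direct reference to the old object, and clones from it, never sees the new state; only an in-place mutation of the cached instance would be visible through that reference.

```java
// Toy illustration: refresh-by-replacement vs. refresh-in-place (hypothetical names).
import java.util.HashMap;
import java.util.Map;

public class StaleCloneDemo {
    static class Row { long version; Row(long v) { version = v; } }

    static final Map<Long, Row> cache = new HashMap<>();

    /** Refresh by *replacing* the cached instance (new object identity). */
    static void refreshByReplace(long id, long dbVersion) {
        cache.put(id, new Row(dbVersion));
    }

    /** Refresh by mutating the cached instance in place. */
    static void refreshInPlace(long id, long dbVersion) {
        cache.get(id).version = dbVersion;
    }

    public static void main(String[] args) {
        cache.put(1L, new Row(1));
        Row original = cache.get(1L);          // caller keeps a direct reference

        refreshByReplace(1L, 2);               // cache is now fresh...
        System.out.println(original.version);  // ...but 'original' still shows 1

        refreshInPlace(1L, 3);                 // mutation is visible through the cache
        System.out.println(cache.get(1L).version); // shows 3
    }
}
```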

[Updated on: Mon, 10 August 2015 15:30]


Re: AbstractSession.checkAndRefreshInvalidObject(..) not refreshing the passed object [message #1705178 is a reply to message #1704012] Wed, 12 August 2015 15:58
Chris Delahunt (Senior Member, Messages: 1389, Registered: July 2009)
If I were looking into this, my first question would be where is the 'original' from? Invalidation is meant to allow refreshing the cached instance, so if 'original' is from the cache, it would have been refreshed. I haven't looked through the methods and code paths you have specified - I hope someone else might, or that I can come back later to answer - but these methods generally get called in different situations where it might not want to refresh the object directly, such as in a merge situation. The refresh from invalidation should not overwrite any changes you have made in your managed entity, but may need to be performed on the cached instance so EclipseLink can determine what has changed. The answers to your questions are very dependent on the context the methods are called under. Cache keys are used for locking purposes, so if the object isn't in the cache, new cache keys are created and they might not have object references built yet.
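The separation described above, refreshing the shared cached instance without overwriting uncommitted changes in the managed working copy, can be sketched as follows (a simplified model with hypothetical names, not EclipseLink code):

```java
// Toy model: the shared cache copy is refreshed; the user's working copy keeps its edit.
public class RefreshSeparationDemo {
    static class Entity {
        String name; long version;
        Entity(String n, long v) { name = n; version = v; }
    }

    /** Instance held by the shared (second level) cache. */
    static Entity sharedCacheCopy = new Entity("old", 1);

    /** The unit of work clones the cached instance for the caller to edit. */
    static Entity workingCopy() {
        return new Entity(sharedCacheCopy.name, sharedCacheCopy.version);
    }

    /** Invalidation-triggered refresh touches only the shared copy. */
    static void refreshSharedCopy(String dbName, long dbVersion) {
        sharedCacheCopy.name = dbName;
        sharedCacheCopy.version = dbVersion;
    }

    public static void main(String[] args) {
        Entity clone = workingCopy();
        clone.name = "my edit";                      // uncommitted user change
        refreshSharedCopy("db value", 2);            // refresh from the database
        System.out.println(clone.name);              // "my edit" -- edit preserved
        System.out.println(sharedCacheCopy.version); // 2 -- cache is fresh
    }
}
```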

You might find it easier to get help with the initial problem, the stale data and when you are seeing the issue, and use that to help drive your investigations on what is happening internally. What setting are you using for cache sync, and when are you seeing the issue?
Re: AbstractSession.checkAndRefreshInvalidObject(..) not refreshing the passed object [message #1705264 is a reply to message #1705178] Thu, 13 August 2015 11:21
Andreas Schmidt
Dear Chris,

thank you for your explanations and thoughts! I think I now have some idea regarding the cause of the observed behaviour, and at least to me it seems like there is a bug when the transaction is considered dirty due to an intermediate flush, affecting the reading from the database and the merging into the variable original in org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject. For the sake of consistency, I originally planned to update the original (no pun intended) post with the current findings and the remaining questions, but while editing it I came to the conclusion that the changes would most likely turn it into an incomprehensible mess; a new post summarizing the findings and containing the remaining and new questions is probably the better way to go.

Best regards
Andreas

[Updated on: Thu, 13 August 2015 11:26]


Re: AbstractSession.checkAndRefreshInvalidObject(..) not refreshing the passed object [message #1705279 is a reply to message #1704012] Thu, 13 August 2015 12:26
Andreas Schmidt
tl;dr: With UnitOfWorkImpl.checkAndRefreshInvalidObject(original, parentCacheKey, descriptor), where parentCacheKey is the variable used to access the variable original at a later point in ObjectBuilder, is it actually possible to manifest the changes visible in the database directly in the first level cache, and not the second level cache, if the object being updated is the object member of parentCacheKey (which, by my current understanding, is its second level cache representation)?


After investigating the issue further, I think I now have some understanding of what is causing the observed behaviour. First I would like to summarize my insights; if anyone notices any fundamental flaws in them, I would appreciate corrections :)

Andreas Schmidt wrote on Thu, 06 August 2015 12:22

While investigating an issue regarding stale data within optimistic locks (eventually causing an OptimisticLockException) in a setup with two nodes and JMS synchronized caching, I started to debug into EclipseLink and to my understanding found the place where the problem arises but unfortunately I do not quite understand its reason.

In org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject the object is checked for validity with checkAndRefreshInvalidObject(). The query which is issued eventually after this.executeQuery(query) in checkAndRefreshInvalidObject() returns the correct lock code, but when the control flow returns to cloneAndRegisterObject(), the object "original" still has the old values which are incorporated into "workingClone" a few lines later.

I was a bit surprised to see that checkAndRefreshInvalidObject() has no return value and that the object "original" is not updated directly; the result of the query within it seems to be simply discarded. I further assume there should be some behind-the-scenes updating initiated by "query.refreshIdentityMapResult()" in checkAndRefreshInvalidObject(), correct?


The object is updated in org.eclipse.persistence.internal.descriptors.ObjectBuilder. One possible path is through buildWorkingCopyCloneNormally, which later passes through buildObject, where the CacheKey of the parent session is retrieved and, with it, the object reference from checkAndRefreshInvalidObject(), which is then updated in buildAttributesIntoObject.
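The in-place update described above can be reduced to a toy sketch (hypothetical names and structures; not the actual ObjectBuilder code): the builder looks the object up through the parent session's cache key and writes the freshly read row into that same instance, so any caller still holding the reference sees the update.

```java
// Toy analogue of looking up an object via its parent CacheKey and mutating it in place.
import java.util.HashMap;
import java.util.Map;

public class BuildIntoCachedDemo {
    /** Simplified stand-in for a CacheKey holding the cached object. */
    static class CacheKey { Object object; }
    static class Row { long optLock; }

    static final Map<Long, CacheKey> parentCache = new HashMap<>();

    /** Analogue of buildAttributesIntoObject: mutate the instance behind the key. */
    static void buildAttributesInto(long id, long dbOptLock) {
        CacheKey key = parentCache.get(id);   // analogue of retrieveCacheKey(...)
        ((Row) key.object).optLock = dbOptLock;
    }

    public static void main(String[] args) {
        CacheKey key = new CacheKey();
        Row original = new Row();
        original.optLock = 1;
        key.object = original;
        parentCache.put(1L, key);

        buildAttributesInto(1L, 2);           // refresh from the "database"
        System.out.println(original.optLock); // 2: visible through the old reference
    }
}
```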

Andreas Schmidt wrote on Thu, 06 August 2015 12:22

Environment: EclipseLink 2.6.0, Oracle-DB, Two Payara nodes with a shared cache synchronized by ActiveMQ.
Additional remarks: The table in question uses SINGLE_TABLE inheritance, and the child has a column OPT_LOCK annotated with @Version.

Two important details were missing here:


Andreas Schmidt wrote on Thu, 06 August 2015 12:22

My questions would be:

  1. If checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) is called, where is the actual refresh happening if isConsideredInvalid(..) returns true (which is the case for the scenario being described)?
  2. Should the first parameter of checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) have been updated already after the call to checkAndRefreshInvalidObject()?
  3. If not, how are the refreshed attributes merged into the variable workingClone in org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject(..), given that after checkAndRefreshInvalidObject(..) the method keeps working with the possibly stale "original", or rather creates a "workingClone" of it?
  4. I checked the members "object" on unitOfWorkCacheKey & parentCacheKey in cloneAndRegisterObject(..) and they are both null, should the refresh of checkAndRefreshInvalidObject(..) be visible in these members instead?
  5. The overall logic of cloneAndRegisterObject(..) puzzles me a bit: Even if the results of checkAndRefreshInvalidObject(..) end up in the UOW session, the method keeps working with the old value "original" and does not retrieve it somehow out of the session afterwards. Therefore I suppose that in either UnitOfWorkImpl.populateAndRegisterObject(..) or among the interactions between ObjectLevelReadQuery and ObjectBuilder there should be some registration in the UOW session happening. Still, even if this were the case, how are these changes incorporated into workingCopy at the end of cloneAndRegisterObject(..)?



  1. In org.eclipse.persistence.internal.descriptors.ObjectBuilder (see above)
  2. Yes
  3. Answered by the previous point
  4. I think it should be visible on parentCacheKey but not on unitOfWorkCacheKey, which may be part of the root cause of the issue.
  5. Answered by the previous points.


While we might be able to circumvent the issue by skipping the intermediate flush, I still think this could be a bug, possibly a design issue with the call hierarchy and the parameters of checkAndRefreshInvalidObject(original, parentCacheKey, descriptor), although I am rather uncertain about the latter, as my experience with EclipseLink is still very limited and I might not see some interdependencies yet :)

My train of thought is as follows:
If a flush occurs within a transaction, the transaction is marked as dirty and the shared persistence unit cache is ignored; according to https://www.eclipse.org/eclipselink/documentation/2.6/concepts/cache003.htm the objects are built directly in the first level cache, i.e. in the unit of work context.
By my understanding this means, with respect to checkAndRefreshInvalidObject(original, parentCacheKey, descriptor), that if the CacheKey has been invalidated, the data should be retrieved from the database, but because the transaction is dirty it should not be written to the second level cache and should instead be created in the first level cache directly.
Within the depths of ObjectBuilder, I think the object original from checkAndRefreshInvalidObject is retrieved for update by buildAttributesIntoObject via its CacheKey, in particular the parentCacheKey, for example by
ObjectBuilder:965: cacheKey = session.retrieveCacheKey(primaryKey, concreteDescriptor, joinManager, query);
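This routing, registering freshly read objects in the per-transaction (L1) cache only once the transaction is dirty, can be sketched as follows (a toy model of the documented behaviour; all names are hypothetical, not EclipseLink code):

```java
// Toy routing sketch: after an intermediate flush, reads bypass the shared (L2) cache.
import java.util.HashMap;
import java.util.Map;

public class DirtyTransactionCacheDemo {
    static final Map<Long, Object> sharedL2 = new HashMap<>();
    static final Map<Long, Object> unitOfWorkL1 = new HashMap<>();
    static boolean transactionDirty = false;

    static void registerFromDatabase(long id, Object freshObject) {
        unitOfWorkL1.put(id, freshObject);    // L1 always receives the result
        if (!transactionDirty) {
            sharedL2.put(id, freshObject);    // L2 only while the tx is still clean
        }
    }

    public static void main(String[] args) {
        registerFromDatabase(1L, "clean read");
        transactionDirty = true;              // an intermediate flush happened
        registerFromDatabase(2L, "dirty read");
        System.out.println(sharedL2.containsKey(2L));     // false: L2 bypassed
        System.out.println(unitOfWorkL1.containsKey(2L)); // true
    }
}
```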

My new question is: with the current implementation, is it actually possible to materialize the fresh data from the database directly in the persistence context cache, effectively bypassing the second level cache?
As far as I understand, the CacheKey passed into checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) is the second level cache representation of the object, holding a reference to it in its member object. If the parentCacheKey has to be used in ObjectBuilder to access and update the object original from checkAndRefreshInvalidObject, will these changes not automatically be visible in the second level cache as well?

The reason why the update of the variable original was not visible after checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) was, I think by now, that the code flow went through ObjectBuilder:2207: workingClone = buildNewInstance();, originating from buildWorkingCopyCloneFromRow due to unitOfWork.wasTransactionBegunPrematurely() being true in buildObjectInUnitOfWork. With workingClone = buildNewInstance(), the updates made by ObjectBuilder:2250: buildAttributesIntoWorkingCopyClone will not affect the object original from checkAndRefreshInvalidObject.

One way to make the changes visible on the object original would be to replace
ObjectBuilder:2207: workingClone = buildNewInstance()
with
workingClone = unitOfWork.getParentIdentityMapSession(query).retrieveCacheKey(primaryKey, descriptor, joinManager, query).getObject()
but there I definitely lack the experience with the code base to have any clue regarding possible side effects.
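The trade-off in that replacement can be illustrated with a toy sketch (hypothetical names, not the actual ObjectBuilder code): building a brand-new instance leaves 'original' untouched, while reusing the instance behind the parent cache key makes the refresh visible through it, at the cost of mutating an object the shared cache may also reference.

```java
// Toy contrast: build-new-instance vs. reuse-cached-instance for the working clone.
public class WorkingCloneChoiceDemo {
    static class Row { long optLock; }

    /** Stand-in for the instance behind parentCacheKey. */
    static Row cachedOriginal = new Row();

    static Row buildNewInstance() { return new Row(); }          // current behaviour
    static Row reuseCachedInstance() { return cachedOriginal; }  // proposed change

    /** Stand-in for buildAttributesIntoWorkingCopyClone. */
    static void buildAttributesInto(Row clone, long dbOptLock) {
        clone.optLock = dbOptLock;
    }

    public static void main(String[] args) {
        cachedOriginal.optLock = 1;

        buildAttributesInto(buildNewInstance(), 2);
        System.out.println(cachedOriginal.optLock); // 1: original untouched

        buildAttributesInto(reuseCachedInstance(), 2);
        System.out.println(cachedOriginal.optLock); // 2: refresh now visible
    }
}
```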

[Updated on: Thu, 13 August 2015 14:42]

