Order of mapping resolution [message #380506] Tue, 26 August 2008 14:32
Tom Denley
I am observing some interesting side-effects when using back-references
where part of the foreign key is qualified by a grand-parent in the
relationship. This is fairly in-depth, but I will do my best to explain it
succinctly.

I have a three-tier one-to-many hierarchy, described by the following
tables. Each table is keyed on a unique sequence-generated ID, and a
string code which splits the schema vertically.

GRANDPARENT (GRANDPARENT_ID, CODE)
PARENT (PARENT_ID, CODE)
CHILD (CHILD_ID, CODE)

My object model for this domain gives each object its own identifier, but
stores the code on the grandparent only.

GrandParent
+ int identifier
+ String code
+ Collection<Parent> children

Parent
+ int identifier
+ GrandParent backRef
+ Collection<Child> children

Child
+ int identifier
+ Parent backRef

I map this through EclipseLink (we use the POJO ORM API, as we came from
TopLink) using direct-to-field mappings for the identifier fields; batch-read
one-to-many mappings for the forward-going relationships; and
one-to-one mappings for the back-references.
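
For illustration, the Parent descriptor is built roughly like this (a
simplified sketch: the foreign-key column names are placeholders, and the
composite id/code join criteria are omitted):

import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.descriptors.RelationalDescriptor;
import org.eclipse.persistence.mappings.DirectToFieldMapping;
import org.eclipse.persistence.mappings.OneToManyMapping;
import org.eclipse.persistence.mappings.OneToOneMapping;

// Simplified sketch of the Parent descriptor; table and column names
// are placeholders and the composite (id, code) join is omitted.
public ClassDescriptor buildParentDescriptor() {
    RelationalDescriptor descriptor = new RelationalDescriptor();
    descriptor.setJavaClass(Parent.class);
    descriptor.addTableName("PARENT");
    descriptor.addPrimaryKeyFieldName("PARENT.PARENT_ID");

    // Direct-to-field mapping for the sequence-generated identifier.
    DirectToFieldMapping idMapping = new DirectToFieldMapping();
    idMapping.setAttributeName("identifier");
    idMapping.setFieldName("PARENT.PARENT_ID");
    descriptor.addMapping(idMapping);

    // Batch-read one-to-many mapping for the forward-going relationship.
    OneToManyMapping childrenMapping = new OneToManyMapping();
    childrenMapping.setAttributeName("children");
    childrenMapping.setReferenceClass(Child.class);
    childrenMapping.addTargetForeignKeyFieldName("CHILD.PARENT_ID", "PARENT.PARENT_ID");
    childrenMapping.useBatchReading();
    descriptor.addMapping(childrenMapping);

    // One-to-one mapping for the back-reference to the grandparent.
    OneToOneMapping backRefMapping = new OneToOneMapping();
    backRefMapping.setAttributeName("backRef");
    backRefMapping.setReferenceClass(GrandParent.class);
    backRefMapping.addForeignKeyFieldName("PARENT.GRANDPARENT_ID", "GRANDPARENT.GRANDPARENT_ID");
    descriptor.addMapping(backRefMapping);

    return descriptor;
}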

This appears to work fine, but I noticed that in some scenarios fetches
were retrieving Parent objects whose Child objects were missing.

Debugging through the EclipseLink code, I noticed that for some
one-to-many relationships, EclipseLink is trying to pull child records
(from the pool of previously batch-read child candidates) using a correct
parent sequence number key, but a code of null. Looking further into this,
I discovered that EclipseLink is deriving the code by walking up the
back-reference to the GrandParent, but that this back-reference hadn't
been populated yet and was null. It appears that, for the mapping to work
correctly, the backRef property of each domain object must be the first
one to be populated, before the children property is addressed.

Looking in more detail at the way in which EclipseLink populates its
relational mappings reveals the reason for the erratic behaviour we were
observing. The order in which EclipseLink processes relational mappings is
deemed to be unimportant, and is casually achieved by iterating over the
result of a call to descriptor.getMappings() in the
buildAttributesIntoObject method in
org.eclipse.persistence.internal.descriptors.ObjectBuilder. This simply
returns the key-set of the Map of relationships (keyed by their
descriptors), and hence the order is arbitrarily determined by the hash
values of the descriptors and the hashing function of the JVM. Small
changes to our mappings were causing these hash values to change, and
subsequently the order of relational mapping population to change.

Arguably, this is both a bug in EclipseLink and an oversight in our
domain. Clearly, nobody in EclipseLink has ever considered the order of
population of relational descriptors to be important. There are no efforts
in the code to intelligently calculate the order, and there is no evidence
of any exposure of that order to the user through the EclipseLink IDE. We
should therefore consider that, although entirely achievable, it is
incorrect to build a domain in such a way that the order of relational
mapping evaluation becomes important.

The resolution for us, therefore, has been to re-architect our domain
objects so that the code is propagated throughout, and to adjust the
EclipseLink mappings to make use of these de-normalised codes so as to
prevent the need for back-references to be populated first. However, to me
this represents a pollution of our domain, and for this reason I am
posting here to see if anybody else has experienced similar problems, or
if anyone connected with EclipseLink would like to make a comment.

Regards,
Tom
Re: Order of mapping resolution [message #380510 is a reply to message #380506] Tue, 26 August 2008 18:05
Gordon Yorke
Actually, EclipseLink does provide a means to order the mappings. By
default, however, all the relational mappings are grouped together at the
same 'weight', and among the relational mappings the order is not defined
until set by the developer. Using a DescriptorCustomizer, the mappings can
be iterated over and a weight set. A weight is just an integer between 1
and Integer.MAX_VALUE. All non-relational mappings (i.e. @Basic) are
weighted at 1, so picking a number larger than 1 but less than
Integer.MAX_VALUE will ensure that the back mapping is processed before
the 'forward' mapping is processed.
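
For example, a customizer along these lines (just a sketch; 'backRef' is
assumed to be the attribute name of the back-reference mapping):

import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.mappings.DatabaseMapping;

// Sketch: raise the weight of the back-reference mapping above the
// non-relational mappings (weight 1) but below the other relational
// mappings, so it is populated before the forward one-to-many mapping.
public class BackRefWeightCustomizer implements DescriptorCustomizer {
    public void customize(ClassDescriptor descriptor) throws Exception {
        for (Object m : descriptor.getMappings()) {
            DatabaseMapping mapping = (DatabaseMapping) m;
            if ("backRef".equals(mapping.getAttributeName())) {
                mapping.setWeight(Integer.valueOf(2));
            }
        }
    }
}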
--Gordon
Re: Order of mapping resolution [message #380526 is a reply to message #380506] Wed, 27 August 2008 13:24
James
Also ensure you are using indirection (lazy) on all your relationships.
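
With the native POJO API that typically means a ValueHolderInterface on the
relationship attribute plus useBasicIndirection() on the mapping, roughly
like this (a sketch with assumed names):

import org.eclipse.persistence.indirection.ValueHolder;
import org.eclipse.persistence.indirection.ValueHolderInterface;

// Sketch: back-reference held through a value holder so it can be
// resolved lazily; pair this with backRefMapping.useBasicIndirection()
// in the descriptor configuration.
public class Child {
    private int identifier;
    private ValueHolderInterface backRef = new ValueHolder();

    public Parent getBackRef() {
        return (Parent) backRef.getValue();
    }

    public void setBackRef(Parent parent) {
        backRef.setValue(parent);
    }
}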

-- James
Re: Order of mapping resolution [message #380530 is a reply to message #380510] Wed, 27 August 2008 16:01
Tom Denley
Thanks Gordon, I've applied a weighting of 2 to my back-reference mappings
and the missing rows have reappeared.

I'm now having a similar problem with writes though. I'm getting an
exception calculating changes when the commit takes place, with the
message "Primary Key can not be set to null". I've had a look, and a
grandchild is being processed through the
DeferredChangeDetectionPolicy.calculateChanges method. Here the
ObjectChangeSet.getPrimaryKeys() call is returning null, and examination
reveals the back reference on the grandchild to be null. I've checked the
objects I'm persisting, and all of the back references appear to be set at
the point I merge.

Any ideas where I should be looking to resolve this?

Thanks again,
-- Tom
Re: Order of mapping resolution [message #380535 is a reply to message #380530] Thu, 28 August 2008 13:17
Tom Denley
Looking further into this problem with persistence, it appears that the
back-reference is null for one of the child object clones held within the
unit of work cache.

My test case is retrieving an existing grandparent-parents-children
dataset from the database, and adding a new parent (with children) to the
retrieved grandparent. When the subsequent commit fails, it is because one
of the retrieved child objects has a null back-reference, but only for the
cached clone, as the equivalent child object as retrieved by the earlier
fetch has its back-reference set correctly.

I'm currently hitting a breakpoint in the buildCloneFromRow method in
ForeignReferenceMapping. Here the call to valueFromRow is returning null.

I'm going to continue working through this, but any pointers or insights
would be greatly appreciated.

Cheers,
Tom
Re: Order of mapping resolution [message #380542 is a reply to message #380535] Thu, 28 August 2008 16:59
Gordon Yorke
Is it possible that your application is setting the back pointer to null?
Perhaps through CascadeType.MERGE? There would be no reason for
EclipseLink to update the backpointer. If it was populated when read into
the Persistence Context then it will remain set until set to null by the
application or refreshed.
--Gordon
Re: Order of mapping resolution [message #380558 is a reply to message #380542] Fri, 29 August 2008 10:40
Tom Denley
I modified my backpointer mappings to use method access, and added a
conditional breakpoint to the setter method that triggers when it is called
with null. During the database fetch, I am hitting this breakpoint on a
Child object with the following stack trace:

Thread [main] (Suspended (breakpoint at line 100 in Child))
Child.setBackRef(Parent) line: 100
GeneratedMethodAccessor145.invoke(Object, Object[]) line: not available
DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
Method.invoke(Object, Object...) line: 585
PrivilegedAccessController.invokeMethod(Method, Object, Object[]) line: 501
MethodAttributeAccessor.setAttributeValueInObject(Object, Object) line: 126
OneToOneMapping(DatabaseMapping).setAttributeValueInObject(Object, Object) line: 1108
OneToOneMapping(ForeignReferenceMapping).buildCloneFromRow(DatabaseRow, Object, ObjectLevelReadQuery, UnitOfWork, Session) line: 224
ObjectBuilder.buildAttributesIntoWorkingCopyClone(Object, ObjectLevelReadQuery, DatabaseRow, UnitOfWork, boolean) line: 1107
ObjectBuilder.buildWorkingCopyCloneFromRow(ObjectLevelReadQuery, DatabaseRow, UnitOfWork, Vector) line: 1219
ObjectBuilder.buildObjectInUnitOfWork(ObjectLevelReadQuery, DatabaseRow, UnitOfWork, Vector, Descriptor) line: 410
ObjectBuilder.buildObject(ObjectLevelReadQuery, DatabaseRow) line: 376
ReadAllQuery(ObjectLevelReadQuery).buildObject(DatabaseRow) line: 451
ReadAllQuery(ObjectLevelReadQuery).registerIndividualResult(Object, UnitOfWork, boolean) line: 1701
ReadAllQuery(ObjectLevelReadQuery).conformIndividualResult(Object, UnitOfWork, DatabaseRow, Expression, IdentityHashtable, boolean) line: 615
ReadAllQuery.conformResult(Object, UnitOfWork, DatabaseRow, boolean) line: 339
ReadAllQuery.registerResultInUnitOfWork(Object, UnitOfWork, DatabaseRow, boolean) line: 669
ReadAllQuery.executeObjectLevelReadQuery() line: 466
ReadAllQuery(ObjectLevelReadQuery).executeDatabaseQuery() line: 800
ReadAllQuery(DatabaseQuery).execute(Session, DatabaseRow) line: 603
ReadAllQuery(ObjectLevelReadQuery).execute(Session, DatabaseRow) line: 768
ReadAllQuery.execute(Session, DatabaseRow) line: 436
ReadAllQuery(ObjectLevelReadQuery).executeInUnitOfWork(UnitOfWork, DatabaseRow) line: 825
RepeatableWriteUnitOfWork(UnitOfWork).internalExecuteQuery(DatabaseQuery, DatabaseRow) line: 2532
RepeatableWriteUnitOfWork(Session).executeQuery(DatabaseQuery, DatabaseRow) line: 981
...
NoIndirectionPolicy.valueFromQuery(ReadQuery, DatabaseRow, Session) line: 262
OneToManyMapping(ForeignReferenceMapping).valueFromRow(DatabaseRow, ObjectLevelReadQuery, Session) line: 1105
OneToManyMapping.valueFromRow(DatabaseRow, ObjectLevelReadQuery, Session) line: 833
OneToManyMapping(ForeignReferenceMapping).buildCloneFromRow(DatabaseRow, Object, ObjectLevelReadQuery, UnitOfWork, Session) line: 221
ObjectBuilder.buildAttributesIntoWorkingCopyClone(Object, ObjectLevelReadQuery, DatabaseRow, UnitOfWork, boolean) line: 1107
ObjectBuilder.buildWorkingCopyCloneFromRow(ObjectLevelReadQuery, DatabaseRow, UnitOfWork, Vector) line: 1219
ObjectBuilder.buildObjectInUnitOfWork(ObjectLevelReadQuery, DatabaseRow, UnitOfWork, Vector, Descriptor) line: 410
ObjectBuilder.buildObject(ObjectLevelReadQuery, DatabaseRow) line: 376
ReadObjectQuery(ObjectLevelReadQuery).buildObject(DatabaseRow) line: 451
ReadObjectQuery(ObjectLevelReadQuery).registerIndividualResult(Object, UnitOfWork, boolean) line: 1701
ReadObjectQuery(ObjectLevelReadQuery).conformIndividualResult(Object, UnitOfWork, DatabaseRow, Expression, IdentityHashtable, boolean) line: 615
ReadObjectQuery.conformResult(Object, UnitOfWork, DatabaseRow, boolean) line: 321
ReadObjectQuery.registerResultInUnitOfWork(Object, UnitOfWork, DatabaseRow, boolean) line: 586
ReadObjectQuery.executeObjectLevelReadQuery() line: 403
ReadObjectQuery(ObjectLevelReadQuery).executeDatabaseQuery() line: 800
ReadObjectQuery(DatabaseQuery).execute(Session, DatabaseRow) line: 603
ReadObjectQuery(ObjectLevelReadQuery).execute(Session, DatabaseRow) line: 768
ReadObjectQuery.execute(Session, DatabaseRow) line: 370
ReadObjectQuery(ObjectLevelReadQuery).executeInUnitOfWork(UnitOfWork, DatabaseRow) line: 825
RepeatableWriteUnitOfWork(UnitOfWork).internalExecuteQuery(DatabaseQuery, DatabaseRow) line: 2532
RepeatableWriteUnitOfWork(Session).executeQuery(DatabaseQuery, DatabaseRow) line: 981
RepeatableWriteUnitOfWork(Session).executeQuery(DatabaseQuery) line: 938
...
MyRepository.fetch(String, String) line: 220

This suggests that the batch fetch for Child objects is not working
properly. Indeed, looking at the query result, I see that seven Child
instances have been fetched, five of which have their backRef property
correctly populated, while two have it set to null.