Eclipse Community Forums
ConcurrentModificationException prior to updating entity. [message #506072] Tue, 05 January 2010 20:22
beamso
Messages: 7
Registered: January 2010
Junior Member

I'm attempting to update an existing J2EE application from using Toplink on glassfish v2.1.1 to using EclipseLink on glassfish v3. The application is using bean-managed persistence with explicit transactions.

When creating or updating entities, JPA queries may be executed within the transaction before EntityManager.persist() or EntityManager.merge() is called on the object being created or updated. This code flow worked perfectly under Toplink, but under EclipseLink I'm getting ConcurrentModificationExceptions.

I've done some googling on the exception, and the first suggestion was to make sure that the EntityManager isn't accessed from multiple threads. The EntityManager is stored in a ThreadLocal and is only accessed on the one thread.
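Roughly, the holder looks like the classic ThreadLocal pattern. Below is a self-contained sketch; the SessionLike class is only a stand-in for EntityManager so the example compiles without a JPA provider, and the holder structure is an assumption about how the code is organised:

```java
// Sketch of thread-confining a per-thread session object via ThreadLocal.
// SessionLike is a hypothetical stand-in for javax.persistence.EntityManager.
public class EntityManagerHolder {
    static class SessionLike {
        final long ownerThreadId = Thread.currentThread().getId();
    }

    // Each thread lazily gets its own instance; no cross-thread sharing occurs.
    private static final ThreadLocal<SessionLike> CURRENT =
            ThreadLocal.withInitial(SessionLike::new);

    public static SessionLike get() {
        return CURRENT.get();
    }

    public static void main(String[] args) {
        SessionLike s = get();
        // Repeated calls on the same thread return the same instance,
        // and that instance belongs to the calling thread.
        System.out.println(s == get());
        System.out.println(s.ownerThreadId == Thread.currentThread().getId());
    }
}
```

With this pattern a ConcurrentModificationException cannot come from two threads sharing one EntityManager, which is why the search continued elsewhere.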

Any ideas on what's wrong here would be great.

Following is the stacktrace:

Caused by: java.util.ConcurrentModificationException
	at java.util.IdentityHashMap$IdentityHashMapIterator.nextIndex(
	at java.util.IdentityHashMap$
	at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.calculateChanges(
	at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.writeChanges(
	at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(
	at org.eclipse.persistence.internal.jpa.EJBQueryImpl.performPreQueryFlush(
	at org.eclipse.persistence.internal.jpa.EJBQueryImpl.executeReadQuery(
	at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getResultList(
	at net.beamso.project.persistence.controllers.JpaDao.getFirstResultFromQuery(
Re: ConcurrentModificationException prior to updating entity. [message #506546 is a reply to message #506072] Thu, 07 January 2010 22:17
beamso
Messages: 7
Registered: January 2010
Junior Member
I don't mean to reply to myself, but I've partially worked it out.

My JPA entities are annotated with javax.validation annotations, and I had code to perform validation, both on the entity as a standalone object and to compare it with data already stored in the database.

JPA 2 picks up the annotations and was running the validation at the pre-persist stage. It seems the pre-persist stage happens during the transaction but after the SQL has been issued (the entity gets an id via a sequence, and by that point the entity already had its id set).

By turning off automatic validation in persistence.xml, my code works as expected.
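For reference, the switch I used is the standard JPA 2 validation-mode element (the javax.persistence.validation.mode property is equivalent). A minimal persistence.xml sketch, with a hypothetical persistence unit name:

```xml
<persistence-unit name="myUnit" transaction-type="JTA">
  <!-- Skip automatic Bean Validation at pre-persist/pre-update/pre-remove -->
  <validation-mode>NONE</validation-mode>
</persistence-unit>
```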

Is there some limit as to what code you can run during the automatic validation stages?
Re: ConcurrentModificationException prior to updating entity. [message #511541 is a reply to message #506072] Mon, 01 February 2010 23:08
Frank Sauer
Messages: 14
Registered: July 2009
Junior Member
Keep in mind that this exception is not related to persistence at all. What it means is that while you are iterating over a collection, that collection is also being modified. My guess is that something is being removed from it without using the iterator's remove() method.
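That failure mode is easy to reproduce with plain JDK collections, no persistence involved. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class CmeDemo {
    // Removing through the collection while a for-each loop iterates it
    // trips the iterator's fail-fast check on the next call to next().
    public static boolean removeDuringForEach(List<String> items, String victim) {
        try {
            for (String s : items) {
                if (s.equals(victim)) {
                    items.remove(s); // structural modification behind the iterator's back
                }
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    // The safe variant: let the iterator itself perform the removal.
    public static void removeViaIterator(List<String> items, String victim) {
        for (Iterator<String> it = items.iterator(); it.hasNext(); ) {
            if ( {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        // "a" is not the second-to-last element, so the following next() throws.
        System.out.println(removeDuringForEach(new ArrayList<>(List.of("a", "b", "c")), "a")); // true

        List<String> ok = new ArrayList<>(List.of("a", "b", "c"));
        removeViaIterator(ok, "a");
        System.out.println(ok); // [b, c]
    }
}
```

Note the fail-fast check is best-effort: removing the second-to-last element silently skips the exception, which is one reason such bugs can look non-deterministic.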
Re: ConcurrentModificationException prior to updating entity. [message #511557 is a reply to message #511541] Tue, 02 February 2010 00:42
beamso
Messages: 7
Registered: January 2010
Junior Member
I should have come back and replied to this thread earlier.

It turns out you can set the FlushMode of the transaction at runtime. The transaction now runs successfully when the FlushMode is set to FlushModeType.COMMIT. I think the queries I was running during validation confused EclipseLink.

I can assure you that I wasn't removing something from a collection inside a for loop; otherwise the error would have popped up when executing the same code under Toplink.
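Concretely, the runtime switch is the standard JPA call em.setFlushMode(FlushModeType.COMMIT), or per query via Query.setFlushMode(FlushModeType.COMMIT). The same default can also be set declaratively; a minimal persistence.xml sketch, assuming the EclipseLink persistence-context property applies to the version in use:

```xml
<persistence-unit name="myUnit" transaction-type="JTA">
  <properties>
    <!-- Flush only at commit, not before every query -->
    <property name="eclipselink.persistence-context.flush-mode" value="commit"/>
  </properties>
</persistence-unit>
```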
Re: ConcurrentModificationException prior to updating entity. [message #511697 is a reply to message #511557] Tue, 02 February 2010 15:01
Chris Delahunt
Messages: 1380
Registered: July 2009
Senior Member

From the sounds of it, the problem was that an event was getting triggered during a flush or commit. That event fired a query which, because of the flushMode settings, caused the transaction to flush again. This caused the exception, since a flush was already in progress and EclipseLink internals started operating on a collection being used by the initial flush/commit.

While this is not a good situation to be in, it should have been fixed by a later bug fix, depending on the version being used.

Best regards,
Re: ConcurrentModificationException prior to updating entity. [message #1695495 is a reply to message #511697] Fri, 15 May 2015 10:07
Nuno Godinho de Matos
Messages: 34
Registered: September 2012
Hi there,

I had a very similar situation under EclipseLink on GlassFish (version 3.2.2):

	at java.util.IdentityHashMap$IdentityHashMapIterator.nextIndex(
	at java.util.IdentityHashMap$
	at org.eclipse.persistence.internal.queries.ContainerPolicy.mergeChanges(
	at org.eclipse.persistence.mappings.CollectionMapping.mergeChangesIntoObject(
	at org.eclipse.persistence.internal.descriptors.ObjectBuilder.mergeChangesIntoObject(
	at org.eclipse.persistence.internal.sessions.MergeManager.mergeChangesOfWorkingCopyIntoOriginal(
	at org.eclipse.persistence.internal.sessions.MergeManager.mergeChangesOfWorkingCopyIntoOriginal(
	at org.eclipse.persistence.internal.sessions.MergeManager.mergeChanges(
	at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.mergeChangesIntoParent(
	at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.mergeChangesIntoParent(
	at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.mergeClonesAfterCompletion(
	at org.eclipse.persistence.transaction.AbstractSynchronizationListener.afterCompletion(
	at org.eclipse.persistence.transaction.JTASynchronizationListener.afterCompletion(
	at com.sun.jts.jta.SynchronizationImpl.after_completion(
	at com.sun.jts.CosTransactions.RegisteredSyncs.distributeAfter(
	at com.sun.jts.CosTransactions.TopCoordinator.afterCompletion(
	at com.sun.jts.CosTransactions.CoordinatorTerm.commit(
	at com.sun.jts.CosTransactions.TerminatorImpl.commit(
	at com.sun.jts.CosTransactions.CurrentImpl.commit(
	at com.sun.jts.jta.TransactionManagerImpl.commit(
	at com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate.commitDistributedTransaction(
	at com.sun.enterprise.transaction.JavaEETransactionManagerSimplified.commit(
	at com.sun.ejb.containers.BaseContainer.completeNewTx(
	at com.sun.ejb.containers.BaseContainer.postInvokeTx(
	at com.sun.ejb.containers.BaseContainer.postInvoke(
	at com.sun.ejb.containers.BaseContainer.postInvoke(
	at com.sun.ejb.containers.EJBObjectInvocationHandler.invoke(
	at com.sun.ejb.containers.EJBObjectInvocationHandlerDelegate.invoke(
	at com.sun.proxy.$Proxy405.processTelegram(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(

The exception was not deterministic, which was very surprising, since I was not able to find any concurrent modifications taking place at the same time. The test that triggered this merge-process explosion was essentially running single-threaded. The test was doing something of this sort:

MyRemoteEjb ejb = GlasfisshClicentUtil.giveMeRemoteEJB();


And on the server side you would have a dummy entity, such as a Closet entity, where you would modify a one-to-many relationship, doing something like:


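A plain-Java sketch of the kind of modification I mean (the JPA annotations @Entity, @OneToMany and @Version are omitted so it is self-contained; Jacket is the element type the debugging steps further down refer to):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reconstruction of the server-side mutation: an owning
// Closet entity whose one-to-many collection of Jackets is modified
// inside the container-managed transaction.
public class ClosetSketch {
    static class Jacket {
        final String name;
        Jacket(String name) { = name; }
    }

    static class Closet {
        // In the real entity this would be a lazily loaded @OneToMany.
        final List<Jacket> jackets = new ArrayList<>();

        void addJacket(Jacket j) {
            jackets.add(j); // this add ends up in the transaction's change set
        }
    }

    public static void main(String[] args) {
        Closet closet = new Closet();
        closet.addJacket(new Jacket("raincoat"));
        // On commit, EclipseLink merges this added Jacket from the working
        // copy (clone) into the shared server session cache.
        System.out.println(closet.jackets.size()); // 1
    }
}
```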
The commit of the transaction was blowing up during the merge process of this Closet one-to-many relationship.

While debugging the code, whenever the exception did not happen, I was able to check the following.

The EclipseLink change set had, for this one-to-many attribute, a change record with added entities, and the added entity in the change set was apparently a "clone".

If I used the EclipseLink APIs to search for that added entity in the EclipseLink server session cache, I could clearly see that the entity in the change set was a different object (different object id) from the entity in the server session cache. The one in the change set had a @Version higher than the one in the server session cache, whose @Version was equal to the one I see in the database.

However, when the concurrent modification exception happened, EclipseLink was doing the following:

(1) Create an iterator over the entities in the added-objects change set.
(2) iterator.hasNext() = true
(3) jacket = <--- fetches the clone
(4) run the merge process on jacket
(5) iterator.hasNext() <----- It was not supposed to have a next, but sometimes it would say yes, there were more changed entities
(6) <----- blow up

And in this case, if I had a breakpoint before that call to, I could see that the entity in the change set appeared to no longer be a "clone": it was the exact same entity that existed in the server session cache. It looked like it was the original.

The most surprising part is that the code that seemed to lead to this concurrent modification exception was an EclipseLink event listener.

The persistence.xml of the breaking project had this in it:

<!--property name="eclipselink.descriptor.customizer.Closet" value="dispatch.listener.StagingClosetStatusChangeListener" /-->

By commenting out this EclipseLink property, the merge process stopped breaking. But the fact is, the code in this change listener, as far as I could see while debugging it, did absolutely nothing. It looked at the ChangeSet instance to see if the things that had changed were interesting, but it modified nothing. It simply looked and went away. The code looked perfectly harmless. Of course, some getters in EclipseLink, such as those on lazily loaded lists of entities, are totally booby-trapped, so it is hard to say convincingly: it looks harmless, but are you sure about side effects?

Well, in this case, it looked like peeking at the change set would, in a non-deterministic manner, break the merge process of the one-to-many relationship, and all of this seemed to take place on a single thread running on the server side (which should make the explosion deterministic, but it did not).

I never succeeded in putting a breakpoint on the identity map (the one behind the broken iterator) actually being modified, so I was never able to discover exactly which EclipseLink code triggered the modification of the change set being manipulated in the merge process.

I was not at all satisfied with the solution of having to take away the change listener.
I also know that I could have just hacked the problem away by discarding the iterator and modifying the EclipseLink source code to loop over an independent new ArrayList of entities, instead of depending on a collection that is being hammered somehow.
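That hack, iterating over an independent snapshot instead of the live collection, looks like this with plain JDK collections:

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotIteration {
    public static void main(String[] args) {
        List<String> live = new ArrayList<>(List.of("a", "b", "c"));

        // Iterate an independent copy; mutations to the live collection
        // can no longer trip the copy's iterator.
        for (String s : new ArrayList<>(live)) {
            live.remove(s); // iterating `live` directly here would throw CME
        }

        System.out.println(live.isEmpty()); // true
    }
}
```

The trade-off is that the iteration sees a stale snapshot, which hides the underlying bug rather than fixing whoever mutates the collection mid-merge.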

So, I have no conclusion on what happened here, but it looks like listening to change set events is not the best idea.

This is the method that was exploding:

    /**
     * INTERNAL:
     * Merge changes from the source to the target object. Because this is a
     * collection mapping, values are added to or removed from the collection
     * based on the change set.
     */
    public void mergeChanges(CollectionChangeRecord changeRecord, Object valueOfTarget, boolean shouldMergeCascadeParts, MergeManager mergeManager, AbstractSession targetSession) {
        ObjectChangeSet objectChanges;
        // Step 1 - iterate over the removed changes and remove them from the container.
        Iterator removeObjects = changeRecord.getRemoveObjectList().keySet().iterator();
        // Ensure the collection is synchronized while changes are being made,
        // clone also synchronizes on collection (does not have cache key read-lock for indirection).
        // Must synchronize of the real collection as the clone does so.
        Object synchronizedValueOfTarget = valueOfTarget;
        if (valueOfTarget instanceof IndirectCollection) {
            synchronizedValueOfTarget = ((IndirectCollection)valueOfTarget).getDelegateObject();
        }
        synchronized (synchronizedValueOfTarget) {
            while (removeObjects.hasNext()) {
                objectChanges = (ObjectChangeSet);
                removeFrom(objectChanges.getOldKey(), objectChanges.getTargetVersionOfSourceObject(mergeManager, targetSession), valueOfTarget, targetSession);
                if (!mergeManager.shouldMergeChangesIntoDistributedCache()) {
                    mergeManager.registerRemovedNewObjectIfRequired(objectChanges.getUnitOfWorkClone());
                }
            }
            // Step 2 - iterate over the added changes and add them to the container.
            Iterator addObjects = changeRecord.getAddObjectList().keySet().iterator();
            while (addObjects.hasNext()) {
                objectChanges = (ObjectChangeSet);  // <--- Right here. But it was simply not possible to figure out who modified changeRecord.getAddObjectList() or why
                Object object = null;
                if (shouldMergeCascadeParts) {
                    object = mergeCascadeParts(objectChanges, mergeManager, targetSession);
                }
                if (object == null) {
                    // Retrieve the object to be added to the collection.
                    object = objectChanges.getTargetVersionOfSourceObject(mergeManager, targetSession, false);
                }
                // I am assuming that at this point the above merge will have created a new object if required
                if (mergeManager.shouldMergeChangesIntoDistributedCache()) {
                    // bug#4458089 and 4454532 - check if collection contains new item before adding during merge into distributed cache
                    if (!contains(object, valueOfTarget, mergeManager.getSession())) {
                        addInto(objectChanges.getNewKey(), object, valueOfTarget, mergeManager.getSession());
                    }
                } else {
                    addInto(objectChanges.getNewKey(), object, valueOfTarget, mergeManager.getSession());
                }
            }
        }
    }
By the way, one final remark.

The line comments that you guys put on the code are great! They do help a lot.
However (and please note, I am well aware I am saying this about an old version of the EclipseLink code, and for newer versions the comment might be unfair), seriously:
Please, do improve the javadoc of the private methods.

The logic executed by EclipseLink is for sure non-trivial most of the time, and the only helpful documentation in the code is the sporadic line comments. Going into an arbitrary EclipseLink method and understanding the code is like going to the middle of the desert hoping to find a drop of water.

I would say that, at the very least, the parameters of these methods should be properly documented with @param javadoc.

Such as: @param changeRecord
Instance encapsulating the changes done, within the context of a transaction, on an attribute of a JPA entity, such as, for example, changes done on a one-to-many attribute.

Then OK! I go into the method and I know what you guys are talking about: what information is in the instance variable, what granularity of information is in there, and so on.

It would also be quite helpful, when dealing for example with methods that do merging, for the variables to have much clearer names that specify, for example:
"this is a clone and is supposed to get its changes copied over to an original in the server session cache", or
"this thing is an original: it reflects the current state of the database, it comes from the global managed EclipseLink session, and it is about to get the updates of the clone, to ensure that the server session cache does not become stale relative to the database state", etc.
It simply has to be less painful to go into a method and know what data is being manipulated.

When you call a variable something like "object", quite frankly, that is just a useless variable name. It does not take a lot of effort to come up with something a bit more meaningful.

While I can understand being somewhat lazy about comments on trivial code, you guys are writing non-trivial code. So please improve the self-documentation of the code, so that we can do a better job of helping ourselves when we have issues with the framework.

Kindest regards.

[Updated on: Fri, 15 May 2015 10:34]
