EMF Compare - Scalability problems (testing scalability, loading fragments, OOM Java heap space)
EMF Compare - Scalability problems [message #1432050]
Fri, 26 September 2014 14:35
Márton Tunner
Registered: September 2014
First of all, I am a bit confused about which version of EMF Compare I am using, so I took two screenshots (maybe you need them), which I cannot share while still being a newbie here... The version should be 2.1.3 (201402040808) or 3.0.2 (201402040808). It was integrated in an earlier version of Eclipse Modeling (Modeling Tools).
I was testing the scalability of EMF Compare with some models. There is no real hierarchy in these models; the root element contains the other elements directly.
I modified my reference models by deleting 20 random elements from them, and an "Eclipse Product" performed the comparison.
- ~6000 elements --> Execution time: ~540s, Memory usage: ~3GB
- ~6000 elements --> Execution time: ~9s, Memory usage: ~160MB
- ~23000 elements --> Execution time: ~100s, Memory usage: ~1GB
- ~43000 elements --> java.lang.OutOfMemoryError: Java heap space
I have changed the vmargs (-Xmx6g, and even -XX:MaxPermSize=1024m, which should not be necessary), but it did not help. According to Windows Task Manager, this Eclipse product is only using 200-500 MB of RAM. I do not know why I always get the OOM error. Any ideas?
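For reference, heap settings only take effect when they appear after the -vmargs marker in eclipse.ini (everything after -vmargs is passed to the JVM). The relevant section might look like the sketch below; note that -XX:MaxPermSize only matters on Java 7 and earlier, and PermGen exhaustion reports "PermGen space" rather than "Java heap space", so it is indeed unrelated to this error:

```ini
-vmargs
-Xms512m
-Xmx6g
-XX:MaxPermSize=1024m
```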
Anyway, I will have to work with larger models (without UUIDs, too), so I need the kind of scalability mentioned on the website of EMF Compare:
"EMF Compare has been designed with scalability in mind in order to support comparisons of large fragmented models.
It only loads the fragments susceptible to have changed and then compares only these parts. Only the necessary fragments (minimized scope) of the compared models are loaded, bringing fast differences computing along with an optimal memory footprint.
This way, EMFCompare is able to compare models with millions of elements in a number of steps proportional to the number of differences."
How is this supposed to work? Do I have to create the fragments manually (splitting a model into several fragments) - which I think is generally not an option - or can EMF Compare do this itself if it is given the root of the model? If it can, why did it not work here? Does it need a deeper containment tree?
I have another problem with loading resources. It is not an EMF Compare specific problem, but maybe you can help me anyway. I wanted to modify a model (~160000 elements, ~40 MB), but my program was stuck at the following line and did not move on even after 1 hour:
Resource resource = resourceSet.getResource(URI.createURI(modelPath), true);
I do not know how much longer I should have waited...
The situation was very similar to the Java heap space problem above: 200-500 MB of RAM and low CPU usage (based on Windows Task Manager), but I stopped it, so I did not get any exception. I increased -Xmx in this case too. What do you think I can do?
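In case it helps, plain XMI loading can often be sped up with the standard XMLResource load options. This is only a sketch (modelPath as in the line quoted above), and these options mostly improve loading speed rather than peak memory, so they will not avoid the heap problem by themselves:

```java
import java.util.HashMap;
import java.util.Map;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.XMLResource;
import org.eclipse.emf.ecore.xmi.impl.XMLParserPoolImpl;

ResourceSet resourceSet = new ResourceSetImpl();
Map<Object, Object> options = new HashMap<>();
// Resolve intra-document references in a single pass at the end of the load.
options.put(XMLResource.OPTION_DEFER_IDREF_RESOLUTION, Boolean.TRUE);
// Reuse SAX parsers across successive loads.
options.put(XMLResource.OPTION_USE_PARSER_POOL, new XMLParserPoolImpl());

// createResource + load(options) instead of getResource(..., true),
// so that the options are actually applied to this load.
Resource resource = resourceSet.createResource(URI.createURI(modelPath));
resource.load(options);
```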
Thank you in advance,
Re: EMF Compare - Scalability problems [message #1433802 is a reply to message #1432050]
Mon, 29 September 2014 08:55
Laurent Goubet
Registered: July 2009
The version of EMF Compare can be inferred from the full qualifier of your plugins, but the proper version number is the one you see on the feature. Look into Help > About > Installation Details, select the "Features" tab, and tell us the version number of org.eclipse.emf.compare (though, as I said, "201402040808" is enough for us to know you are using 2.1.3). You might want to give 3.0.1 (released for Eclipse Luna SR1) a try: it is compatible with the same range of Eclipse versions, yet includes a number of fixes and improvements (some of which concern performance).
1) What do your numbers represent, and how did you get the "memory usage" figures? Specifically, I do not understand how you can reach "1 GB" of memory usage with your "23000 elements" model, yet fail with OOM errors in a product that is using 200-500 MB.
You might also be hitting issues with memory allocation. You mention a heap of "3 GB" here... so I assume you are using a 64-bit Eclipse on a 64-bit Java? If you are not, please consider switching: 32-bit VMs have trouble allocating large heaps (> 1.5-1.6 GB).
I would also like to know what kind of model you are loading, since some are more memory-hungry than others.
2) EMF Compare is designed for scalability, but "scalability" in the case of EMF means that you need to have fragmented models. EMF just cannot load very large models in memory... and _we_ need to load 2 or 3 versions of these models in memory at once, without even considering our own comparison model (the result), which also adds to the burden.
So here, the answer is "yes": you need to create the fragments manually. Fragmenting a model basically means dividing it into more "usable" groups of elements. How did you create your models in the first place? How did you delete elements from them? Were you using EMF or some other modeler? Whatever you were using, was it really a good experience to try and find your way around models containing 40k/100k elements? We use fragmentation to chop a model down into individual groups of information that a user can better grasp. This also makes memory management (and speed of use) easier, since only the fragments and their dependencies need to be loaded to read, change, or compare the model(s).
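A minimal sketch of what manual fragmentation can look like with plain EMF, assuming a metamodel whose containment references allow cross-resource containment (resolveProxies set to true, i.e. "containment proxies"); the file names and the choice of subtree are only illustrative:

```java
import java.util.Collections;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;

ResourceSet resourceSet = new ResourceSetImpl();
Resource main = resourceSet.getResource(URI.createFileURI("model.xmi"), true);
EObject root = main.getContents().get(0);

// Create a resource for the fragment and make one subtree a root of it.
// With containment proxies enabled, the element stays contained in
// "root" but is serialized in its own file.
Resource fragment = resourceSet.createResource(URI.createFileURI("model_part1.xmi"));
fragment.getContents().add(root.eContents().get(0));

main.save(Collections.emptyMap());
fragment.save(Collections.emptyMap());
```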
3) You basically hit the nail on the head here: when EMF does not manage to load your models in memory, neither does EMF Compare. The problem is that the model is just too large to be loaded as a single chunk. The line you pointed to is simply the "loading" line for EMF: it tries to read the XML and load it into a Resource. The extremely long run time instead of an OOM is just another symptom of the memory issue: the garbage collector was working full-time to reclaim memory, and the program itself was filling that memory up just as fast... before the garbage collector reclaimed it anew.
The size of models that can be loaded through EMF will vary depending on the kind of models (I suppose that an Ecore model would fit into a smaller memory chunk than a like-sized (on disk) UML model for example).
This goes back to: how did you create the models in the first place? If they are "exported" versions of models created with another modeler (such as Enterprise Architect), they may contain information that can be stripped away to limit the number of elements. Your best bet would still be to fragment the model into smaller units.
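For completeness, a programmatic comparison with the EMF Compare 2.x/3.x API looks roughly like this (file paths are placeholders; with fragmented models, you would load the root resource of each side into its ResourceSet):

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.EMFCompare;
import org.eclipse.emf.compare.scope.DefaultComparisonScope;
import org.eclipse.emf.compare.scope.IComparisonScope;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;

ResourceSet left = new ResourceSetImpl();
ResourceSet right = new ResourceSetImpl();
left.getResource(URI.createFileURI("left/model.xmi"), true);
right.getResource(URI.createFileURI("right/model.xmi"), true);

// Two-way comparison, no common ancestor: hence the null origin.
IComparisonScope scope = new DefaultComparisonScope(left, right, null);
Comparison comparison = EMFCompare.builder().build().compare(scope);
comparison.getDifferences().forEach(System.out::println);
```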