[QVTO] advice needed for large models [message #854804]
Tue, 24 April 2012 04:42
Eclipse User
Hi all,
I'm using QVTO programmatically and have found that with larger input
data sets I easily run into a hard "GC overhead limit exceeded" error. I
realize this is also closely related to the transformation script itself,
but are there any general guidelines for handling large data sets? Are
there any options or tweaks when configuring the ExecutionContext so that
it creates less runtime data? And what happens if the model itself is too
large to fit in memory all at once? Is there any on-demand
loading/unloading?
Any input or pointers on the matter will be greatly appreciated!
Thanks
Marius
Re: [QVTO] advice needed for large models [message #855092 is a reply to message #854804]
Tue, 24 April 2012 10:10
Eclipse User
Hi
Assuming that you have done the obvious things like increasing the Java
heap and disabling tracing, and that you do not use allInstances() or
unnavigable opposites...
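For reference, this is only a minimal sketch of the sort of lean
programmatic setup I mean; the heap size, file paths and resource names
are placeholders to adapt rather than recommendations:

// VM arguments for whatever launches the transformation, e.g.
//   -Xmx2g
// (as a last resort -XX:-UseGCOverheadLimit trades the hard error for a
// very slow run, which rarely helps)

import java.util.Collections;
import org.eclipse.emf.common.util.Diagnostic;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.m2m.qvt.oml.BasicModelExtent;
import org.eclipse.m2m.qvt.oml.ExecutionContextImpl;
import org.eclipse.m2m.qvt.oml.ExecutionDiagnostic;
import org.eclipse.m2m.qvt.oml.ModelExtent;
import org.eclipse.m2m.qvt.oml.TransformationExecutor;

TransformationExecutor executor = new TransformationExecutor(
        URI.createURI("platform:/resource/my.project/MyTransform.qvto"));

// keep the context lean: no log writer, no configuration properties
// beyond those the transformation actually declares
ExecutionContextImpl context = new ExecutionContextImpl();

// placeholder input/output resources; standalone use would also need the
// metamodel package and a resource factory registered
ResourceSet resourceSet = new ResourceSetImpl();
Resource inputResource = resourceSet.getResource(
        URI.createFileURI("input/big.xmi"), true);
Resource outputResource = resourceSet.createResource(
        URI.createFileURI("output/result.xmi"));

ModelExtent input = new BasicModelExtent(inputResource.getContents());
ModelExtent output = new BasicModelExtent();

ExecutionDiagnostic result = executor.execute(context, input, output);
if (result.getSeverity() == Diagnostic.OK) {
    outputResource.getContents().addAll(output.getContents());
    outputResource.save(Collections.emptyMap());
}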
A complex transformation may need access to all the data, so I don't
think there is a general solution for QVTo.
In practice your transformation may be localized so that it is amenable
to streaming the model through in fragments. Assuming that QVTo does not
support this directly, you could have a stream reader that passes model
fragments for local transformation, and then have a stream writer that
combines the result fragments.
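QVTo will not do that for you, so the following is only a sketch of the
driver loop I have in mind, assuming the input has already been split
into per-fragment resources; transformationUri, fragmentUris and
writeFragmentResult are made-up names standing in for your stream reader
and stream writer (imports as in the earlier snippet):

// Transform one fragment at a time and unload it afterwards, so that
// only a small slice of the model is strongly reachable at any moment.
ResourceSet resourceSet = new ResourceSetImpl();
TransformationExecutor executor = new TransformationExecutor(transformationUri);

for (URI fragmentUri : fragmentUris) {          // assumed list of fragment URIs
    Resource fragment = resourceSet.getResource(fragmentUri, true);

    ModelExtent input = new BasicModelExtent(fragment.getContents());
    ModelExtent output = new BasicModelExtent();

    ExecutionDiagnostic diagnostic =
            executor.execute(new ExecutionContextImpl(), input, output);
    if (diagnostic.getSeverity() != Diagnostic.OK) {
        throw new RuntimeException(diagnostic.toString());
    }

    writeFragmentResult(output.getContents());  // assumed stream writer that
                                                // appends the result fragment
    fragment.unload();                          // make the fragment collectable
    resourceSet.getResources().remove(fragment);
}

Clearly this only pays off if each mapping confines itself to the
fragment it is given; any resolve across fragment boundaries would pull
the rest of the model back into memory.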
Alternatively you might contrive to keep the model in a repository such
as CDO so that you only need a small portion in memory at any time.
One day, a declarative transformation language, such as QVTr, could have
streaming operation as one of its compilation strategies.
Regards
Ed Willink
On 24/04/2012 09:42, Marius Gröger wrote:
> Hi all,
>
> I'm using QVTO programmatically and have found that with larger input
> data sets I easily run into a hard "GC overhead limit exceeded" error. I
> realize this is also closely related to the transformation script itself,
> but are there any general guidelines for handling large data sets? Are
> there any options or tweaks when configuring the ExecutionContext so that
> it creates less runtime data? And what happens if the model itself is too
> large to fit in memory all at once? Is there any on-demand
> loading/unloading?
>
> Any input or pointers on the matter will be greatly appreciated!
>
> Thanks
> Marius
Re: [QVTO] advice needed for large models [message #855945 is a reply to message #855849]
Wed, 25 April 2012 04:16
Eclipse User
Hi
The more appropriate question is why garbage collection should be able to
recover anything anyway.
If you're loading a model, transforming and then saving, it's not until
the save is complete and the ResourceSet is emptied that garbage
collection can start.
EMF models have many references with additional references buried in
adapters. It is a non-trivial activity to isolate some part of an EMF
Resource so that it can be collected.
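So, once the result has been saved, you have to actively let go of
everything the ResourceSet still holds before the collector has a chance;
a rough sketch, where outputResource and resourceSet are placeholders for
your own objects:

// After saving, explicitly release what the ResourceSet is still holding,
// otherwise every transformed object stays strongly reachable.
outputResource.save(Collections.emptyMap());

for (Resource resource : resourceSet.getResources()) {
    resource.eAdapters().clear();  // adapters often anchor extra references
    resource.unload();             // contents become proxies and can be collected
}
resourceSet.getResources().clear();
resourceSet.eAdapters().clear();

Note that unload() turns the contained objects into proxies, so only do
this once nothing else still needs to navigate them.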
I suggest creating a small test model and using VisualVM to debug why
garbage collection fails at the point where you think it could succeed.
Regards
Ed Willink
On 25/04/2012 07:36, Marius Gröger wrote:
> We have a very hierarchical model. I still wonder to what extent EMF
> and QVTo at least try to let go of objects which are not needed
> anymore and allow them to be garbage collected?