Eclipse Community Forums
Modeling » TMF (Xtext) » Memory consumption (Idioms to reduce memory consumption)
Memory consumption [message #635107] Mon, 25 October 2010 16:10
Mark Christiaens
Messages: 63
Registered: October 2010
Member
I'm trying to build a basic VHDL editor on top of Xtext. I've got the grammar into reasonable shape and am now experimenting with loading some actual files. Most VHDL files are small (maybe 100 KiB), but a few are very large (library files). Loading those files is slow (half a minute), but what worries me even more is the memory consumption: I have a VHDL file of 2.3 MB, and the resulting Java heap after opening the file is around 700 MB.

Are there any idioms that I should avoid or apply in the grammar (or elsewhere) to keep the memory consumption under control?

Mark
Re: Memory consumption [message #635191 is a reply to message #635107] Mon, 25 October 2010 22:16
Sebastian Zarnekow
Messages: 3118
Registered: July 2009
Senior Member
Hi Mark,

unfortunately there is not much you can do besides splitting your
files into smaller ones. We are aware of the high memory consumption
when large files are involved, and we will analyze the reasons carefully
for Xtext 2.0 to reduce the overall footprint. We have some ideas on how
to tackle this problem, and we'd be more than happy to have some real-world
examples that help us measure the effects of potential changes
(which, I guess, will unfortunately not appear in the very near future).

Regards,
Sebastian
--
Need professional support for Eclipse Modeling?
Go visit: http://xtext.itemis.com

Re: Memory consumption [message #635246 is a reply to message #635107] Tue, 26 October 2010 07:46
Pierre-Alain BOURDIL
Messages: 25
Registered: September 2010
Junior Member
Hi,

I'm not sure it will help, but the official documentation gives some advice on the grammar for improving performance during the parse tree construction phase.

In the section "Parse Tree Constructor":

"For reasons of performance, it is critical that the parse tree constructor takes the most promising branch first and detects wrong branches early. One way to achieve this is to avoid having many rules which return the same type and which are called from within the same alternative in the grammar."


++
Re: Memory consumption [message #635263 is a reply to message #635191] Tue, 26 October 2010 08:33
Mark Christiaens
Messages: 63
Registered: October 2010
Member
Sebastian,

Physically splitting the file is not really an option. The file I'm talking about describes the hardware components that are available to a VHDL designer. That file is provided by other tools and not under our control.

One thing that would be feasible (from a language point of view) is to split this file "conceptually" into subfiles. By this I mean that there are hundreds of independent parts in this file that could be considered separate files. Is there a way for me to make Xtext treat this one file as multiple smaller files?

Mark
Re: Memory consumption [message #635282 is a reply to message #635263] Tue, 26 October 2010 09:20
Mark Christiaens
Messages: 63
Registered: October 2010
Member
The way I express the grammar does seem to have quite some influence on memory consumption. For example, I have a "declarativepart" rule that describes a list of declarations (at the beginning of functions, procedures, processes, ...). I've "inlined" that rule so that the declarations are now immediately part of the function, procedure, process, ... objects instead of being embedded in a sub-object. I've done the same for a couple of other rules.

The final memory consumption seems to have dropped by 100-150 MB for my big file.
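
For illustration, a hypothetical before/after grammar fragment in the spirit of that change (the rule names are invented, and the Declaration and Statement rules are assumed to exist elsewhere in the grammar):

// Before: every process carries an extra DeclarativePart object, plus the
// node-model subtree created for the additional rule call.
ProcessStatement:
    'process' declarativePart=DeclarativePart 'begin'
    statements+=Statement*
    'end' 'process' ';';

DeclarativePart:
    {DeclarativePart} declarations+=Declaration*;

// After: the declarations are assigned directly in the containing rule, so
// one EObject (and its share of the node model) disappears per process.
ProcessStatement:
    'process' declarations+=Declaration* 'begin'
    statements+=Statement*
    'end' 'process' ';';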
Re: Memory consumption [message #635289 is a reply to message #635282] Tue, 26 October 2010 09:25
Sebastian Zarnekow
Messages: 3118
Registered: July 2009
Senior Member
Hi Mark,

I guess it's a combination of the structure of the node model and the
structure of your semantic model. If you use more indirections, more
nodes will be created and thereby more memory is consumed. However, I'm
pretty sure there is more potential for efficient memory usage in the
framework.

Regards,
Sebastian
--
Need professional support for Eclipse Modeling?
Go visit: http://xtext.itemis.com

Re: Memory consumption [message #635505 is a reply to message #635107] Wed, 27 October 2010 07:13
Roland S.
Messages: 36
Registered: September 2009
Member
Hi,

I think the reason for this problem is the parse tree. When we load all of our files, the ASTs take about 10 MB while the parse tree information allocates about 600 MB of memory.

Regards,
Roland
Re: Memory consumption [message #635615 is a reply to message #635289] Wed, 27 October 2010 14:15
Mark Christiaens
Messages: 63
Registered: October 2010
Member
Sebastian,

Is there a way to parse our big file only once and store the Xtext-internal data structures somewhere, so that the next time we re-open the big file we can load that stored state?
Re: Memory consumption [message #635709 is a reply to message #635615] Wed, 27 October 2010 19:14
Sebastian Zarnekow
Messages: 3118
Registered: July 2009
Senior Member
Hi Mark,

you may try to export the Xtext model as XMI, ship it as a library, and
refer to that one (which contains the very same information).

Regards,
Sebastian
--
Need professional support for Eclipse Modeling?
Go visit: http://xtext.itemis.com
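
A minimal standalone sketch of that idea (the class and file names are invented; VhdlStandaloneSetup stands for whatever StandaloneSetup class Xtext generated for the language, and whether the XMI actually loads leaner and faster than re-parsing the source would have to be measured):

import java.util.Collections;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;
import org.eclipse.xtext.resource.XtextResourceSet;

import com.google.inject.Injector;

public class ExportLibraryAsXmi {
    public static void main(String[] args) throws Exception {
        // Generated standalone setup of the (hypothetical) VHDL language.
        Injector injector = new VhdlStandaloneSetup().createInjectorAndDoEMFRegistration();
        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);

        // Allow *.xmi resources to be created in a standalone context.
        Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap()
                .put("xmi", new XMIResourceFactoryImpl());

        // Parse the big library file once ...
        Resource source = resourceSet.getResource(
                URI.createFileURI("lib/components.vhd"), true);

        // ... and persist the resulting EMF model as XMI.
        Resource target = resourceSet.createResource(
                URI.createFileURI("lib/components.xmi"));
        target.getContents().addAll(source.getContents());
        target.save(Collections.emptyMap());
    }
}

Loading the .xmi later skips the parser and the node model for that library, at the cost of having to regenerate the file whenever the library changes.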

Re: Memory consumption [message #636809 is a reply to message #635709] Tue, 02 November 2010 21:29
Alex K
Messages: 2
Registered: November 2010
Junior Member
I'm having a similar problem with many SMALL files inside a project. Once the Xtext nature is enabled using the plugin for the DSL, the heap goes through the roof, with all resources in this DSL loaded into memory at the same time, although the files are unrelated and validation could surely be done sequentially. Even a 2.5 GB heap is not big enough to complete validation.

Looking for any ideas,
Alex
Re: Memory consumption [message #636901 is a reply to message #636809] Wed, 03 November 2010 09:46
Jérôme Fouletier
Messages: 39
Registered: September 2010
Location: France
Member
Alex,

I think Sven's reply in this other thread may be of use, although I have not tried it out yet. It seems the default builder is bound in the SharedModule. However, I am not sure whether simply re-binding IBuilderState in your language's XXXRuntimeModule would be enough; there seem to be more interfaces bound by the BuilderIntegrationFragment.

[Updated on: Wed, 03 November 2010 09:50]


Re: Memory consumption [message #637202 is a reply to message #636901] Thu, 04 November 2010 14:54
Alex K
Messages: 2
Registered: November 2010
Junior Member
Thanks for that. It seems to be heading in the right direction, then. So far I have mitigated the problem by reducing the number of files in the project and reducing the number of syntax errors present at the same time. Still, any progress on this is very important, so please keep us updated.

Thanks,
Alex