Eclipse Community Forums
Home » Modeling » TMF (Xtext) » handling huge index
handling huge index [message #972364] Mon, 05 November 2012 15:16
Vlad Dumitrescu
Messages: 322
Registered: July 2009
Location: Gothenburg
Senior Member
Hi all,

I am returning with a problem related to other questions I asked here.

Some of the users of my Xtext plugins have huge projects, with thousands of files. Indexing these projects takes forever and uses a lot of RAM (eventually hitting the limit for 32-bit programs and crashing), so I am investigating if there is some way to improve the situation.

The language has a single global namespace for the top-level constructs. Would it help with more fine-grained containers? For example, is the index (or can it be) "chunked" along container borders and can only parts of it be loaded in memory, with some smart proxies to bridge the gap and a higher-level index to keep track of which chunks contain what?

Is this problem only Xtext's, or is it caused by the way EMF works? (I confess I know too few of the details there, apologies if it's a sillier-than-the-rest question)

I have a glimmer of hope (that I won't have to fight this all by myself) in that as Xtend is gaining traction, it will eventually face the same problem. At the moment most references are to Java libraries and Xtend relies on the JDT for many of the references, but if it gets successful then there will be more and more Xtend libraries around.

best regards,
Vlad

Re: handling huge index [message #972689 is a reply to message #972364] Mon, 05 November 2012 20:44
Sebastian Zarnekow
Messages: 2936
Registered: July 2009
Senior Member
Hi Vlad,

did you try to reduce the number of exported objects? Reducing the
number of exported cross references may help, too. Actually, only one
pointer from a given resource X to a referenced instance I is necessary
for most of the computations that use the references. So if X has
several objects that refer to I, you could save some memory there.

There is no straightforward way to chunk the index in a reasonable way.
You'd have to implement the BuilderState manually in order to achieve that.

Regards,
Sebastian
--
Looking for professional support for Xtext, Xtend or Eclipse Modeling?
Go visit: http://xtext.itemis.com

Re: handling huge index [message #972722 is a reply to message #972689] Mon, 05 November 2012 21:19
Vlad Dumitrescu
Thank you, Sebastian.

> did you try to reduce the number of exported objects?

Well, a file contains the elements it does, and the language defines what is visible from the outside. I don't see much room to wiggle here.

> There is no straight forward way to chunk the index in a reasonable way.
> You'd have to implement the BuilderState manually in order to achieve that.

I can consider doing that. Let's suppose it is done and working. What I'd like to know is how this would affect querying the index when searching -- is it using any kind of proxies that can be used to load chunks on demand, or would even this part need to be designed and implemented?

I think a good analogy for my situation is implementing Xtend without any JDT backend, so one would have to index the Java libraries, the Eclipse APIs, and all libraries that a project uses. If you had had this problem, how would you have approached it?

best regards,
Vlad

Re: handling huge index [message #972732 is a reply to message #972722] Mon, 05 November 2012 21:26
Sebastian Zarnekow
Hi Vlad,

I'd exploit the fact that Java classes follow a naming convention, so
they can be located in the file system quite efficiently. And I'd index
the simple names of the classes. However...

If you say 'a file contains elements and the language defines what's
visible', do you actually mean everything that's potentially visible, or
only the things that are queried from the index? In Java, methods and
fields are obviously visible from the outside. Nevertheless, they are
usually fetched from a type which was queried from the index. So only
the type has to be indexed but not all its members. Does that make
sense? Do you already exploit that optimization?
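As a rough sketch of that optimization (TopLevelConstruct here is a hypothetical EClass standing in for your language's top-level type; the Xtext types and hooks are the standard ones), a resource description strategy that indexes only the top-level objects could look like this:

```java
import org.eclipse.emf.ecore.EObject;
import org.eclipse.xtext.naming.QualifiedName;
import org.eclipse.xtext.resource.EObjectDescription;
import org.eclipse.xtext.resource.IEObjectDescription;
import org.eclipse.xtext.resource.impl.DefaultResourceDescriptionStrategy;
import org.eclipse.xtext.util.IAcceptor;

public class TopLevelOnlyDescriptionStrategy extends DefaultResourceDescriptionStrategy {

    @Override
    public boolean createEObjectDescriptions(EObject obj, IAcceptor<IEObjectDescription> acceptor) {
        if (obj instanceof TopLevelConstruct) { // hypothetical top-level type
            QualifiedName name = getQualifiedNameProvider().getFullyQualifiedName(obj);
            if (name != null) {
                acceptor.accept(EObjectDescription.create(name, obj));
            }
            return false; // don't descend: members stay out of the index
        }
        return true; // keep traversing until the top-level constructs are reached
    }
}
```

The strategy would then be bound in the language's runtime module in place of the default.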

In order to speed up indexing, you may want to use a lightweight parser
that skips most of the parts of your files and only returns the coarse
grained structure.

The index is just the interface IResourceDescriptions. There are no
assumptions about proxies, real instances or such things. Usually it
returns decoupled proxies that are wrapped in IEObjectDescriptions. The
proxies have to be loaded in the context of a resource set in order
to become meaningful.
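For illustration, a client of the index typically does something along these lines (MyDslPackage is a placeholder for the generated EMF package of the language):

```java
// Query the index for a named Class and resolve the lightweight proxy.
EObject findClass(IResourceDescriptions index, ResourceSet resourceSet, String name) {
    Iterable<IEObjectDescription> candidates = index.getExportedObjects(
        MyDslPackage.Literals.CLASS, QualifiedName.create(name), false);
    for (IEObjectDescription description : candidates) {
        // The description carries a decoupled proxy; resolving it loads
        // the owning resource into the given resource set.
        return EcoreUtil.resolve(description.getEObjectOrProxy(), resourceSet);
    }
    return null;
}
```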

Regards,
Sebastian

Re: handling huge index [message #972818 is a reply to message #972732] Mon, 05 November 2012 22:56
Vlad Dumitrescu
Hi Sebastian,

Let me see if I get it right this time, but let's blame it on being so late in the evening/night :)

If the index only has the classes, how does the rest of Xtext find all the references to public methods and fields? Must I implement my own IResourceDescription that implements computeExportedObjects and getReferenceDescriptions? Or do I have to fiddle with the ILinkingService to fill in the blanks? Is this enough to cover all uses (completion, quick fixes, find references) or do these need to be adjusted too?

Thanks a lot again for the help!

regards,
Vlad

Re: handling huge index [message #973404 is a reply to message #972818] Tue, 06 November 2012 09:51
Sebastian Zarnekow
Hi Vlad,

assuming the index only contains the classes, let's use a simple grammar
snippet like:

StaticFieldAccess:
    type = [Class|FQN] '.' field = [Field];

If you want to resolve the field, you'd use code like this in your scope
implementation:

StaticFieldAccess myFieldAccess = /* the context element */;
Class c = myFieldAccess.getType();
return Scopes.scopeFor(c.getFields());

#getType will again trigger the scoping but use the default
implementation which will query the index for a Class. The resource that
contains the class is afterwards loaded into the current resource set so
you get a real instance of the class. Now it's safe to navigate that
code (this is what we call 'local scoping' - everything that's related
to model traversal). So there is no need to index all the fields because
the index will never be queried for them.
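In a declarative scope provider, the snippet above would live in a method like this (a sketch; Class and StaticFieldAccess stand for the types generated from the grammar, and the method name follows the scope_Type_reference convention):

```java
import org.eclipse.emf.ecore.EReference;
import org.eclipse.xtext.scoping.IScope;
import org.eclipse.xtext.scoping.Scopes;
import org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider;

public class MyDslScopeProvider extends AbstractDeclarativeScopeProvider {

    // Invoked by convention when the 'field' reference of a
    // StaticFieldAccess is linked or completed.
    IScope scope_StaticFieldAccess_field(StaticFieldAccess ctx, EReference ref) {
        // getType() resolves the [Class|FQN] cross-reference via the index;
        // the owning resource is loaded into the current resource set, so
        // navigating to its fields is plain model traversal (local scoping).
        Class c = ctx.getType();
        return Scopes.scopeFor(c.getFields());
    }
}
```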

Does that make sense to you?

Best regards,
Sebastian

Re: handling huge index [message #973506 is a reply to message #972732] Tue, 06 November 2012 11:45
Vlad Dumitrescu
Hi Sebastian,

Thank you very much for your patience!

If only the top-level constructs need to be in the index, then why have the others there in the first place? Is it for performance (when they fit in memory), for ease of implementation (so that for simple cases no special scope implementation is needed), or for some other reason?

Put another way, what is the trade-off? My guess is that we lose the possibility to suggest for completion "deep" elements, it will only work by navigating the scopes incrementally. Also, will the search for references/definitions work at all, i.e. does it query the scopes? What else might be affected?

Regarding the suggestion to use a lighter parser to extract only the top-level information: I still need the detailed structure of the code. Is there a way to use a lighter parser for indexing and a heavier one when more info is needed?

best regards,
Vlad

Re: handling huge index [message #973520 is a reply to message #973506] Tue, 06 November 2012 11:58
Sebastian Zarnekow
Vlad,

the thing you lose if you don't index all the instances is basically
Cmd+Shift+F3 to find an arbitrary model element. Indexing all named
things is just a convenient default which is suitable for smaller
languages but does not really fit bigger problems. Therefore it's a
default and easy to adapt. You'll have to look out for scoping semantics
that rely on these defaults, though.

Using a lightweight parser is something that I have not yet tried out; it
was rather an idea. Yes, it would be necessary to have a real production
parser, too. The benefit here is more about the reduced memory footprint
in the first indexing stage than about parsing speed, I guess.

Cheers,
Sebastian

Re: handling huge index [message #973536 is a reply to message #973520] Tue, 06 November 2012 12:17
Vlad Dumitrescu
Hi Sebastian,

If it's only Ctrl-Shift-F3 that is limited, it's perfectly acceptable. I was worried about the search for references.

Regarding the speed, isn't the linking step going to be somewhat slower with this approach?

I have now left only the top-level constructs in the index, and it still takes a while (320 files: 1 minute to create descriptions and 4 minutes to update them and validate) and it still wants a great deal of memory (topped at 1.2G). Does it matter that the scope implementations are not in place yet?

Thanks again -- I owe you a lot of beer when I get to Kiel :)

regards,
Vlad

Re: handling huge index [message #973562 is a reply to message #973536] Tue, 06 November 2012 12:38
Sebastian Zarnekow
Hi Vlad,

1.2 G sounds quite big to me. You did not reduce the number of reference
descriptions in the first stage, did you?

Local scoping is generally speaking quite efficient (as long as you
don't traverse the complete resource set or similar). As always with
performance: without a profiler it's hard to guess where the time is lost.

Regards,
Sebastian

Re: handling huge index [message #973567 is a reply to message #973562] Tue, 06 November 2012 12:45
Vlad Dumitrescu
Hi,

I have made the QualifiedNameProvider return non-null values just for the top-level constructs. Is that not enough?

regards,
Vlad

Re: handling huge index [message #973582 is a reply to message #973567] Tue, 06 November 2012 12:59
Sebastian Zarnekow

No, that is not enough if you want to reduce the number of reference
descriptions. By default all the x-refs that are external to the
resource are indexed. Please refer to one of my previous answers and
customize the resource description strategy for that purpose. You may
want to prototype with no reference descriptions at all to see how big
the improvements could be at best.
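Such a prototype could be as small as this sketch (an assumption of mine, not tested, but the overridden method is the standard hook on the default strategy):

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.xtext.resource.IReferenceDescription;
import org.eclipse.xtext.resource.impl.DefaultResourceDescriptionStrategy;
import org.eclipse.xtext.util.IAcceptor;

public class NoReferencesDescriptionStrategy extends DefaultResourceDescriptionStrategy {

    @Override
    public boolean createReferenceDescriptions(EObject from, URI exportedContainerURI,
            IAcceptor<IReferenceDescription> acceptor) {
        // Accept nothing: no cross-references end up in the index.
        // Note that this degrades 'find references' and the precision
        // of incremental builds; it's only meant to measure the ceiling.
        return false;
    }
}
```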

Regards,
Sebastian

Re: handling huge index [message #974925 is a reply to message #973582] Wed, 07 November 2012 12:54
Vlad Dumitrescu
Hi!


Do you mean that I should remove all references from the grammar?

This indeed gives much better memory usage. Creating all resource descriptions still takes a long time (30 minutes for a full build of 3500 files), but I suppose there is very little to do about that [see below].

However, removing the references effectively turns most IDE functionality into no-ops... Adding full manual handling for completion and search kind of defeats the purpose of using Xtext in the first place, doesn't it?

The problem that the user sees is that because the build process locks the workspace, no editing changes can be saved until it is over... If all the work were really done in the background, while the user accepts that until it's done there may be information missing from the IDE support, then it could maybe work. That is, if Xtext can work while the index is being updated.

regards,
Vlad

Re: handling huge index [message #974975 is a reply to message #974925] Wed, 07 November 2012 13:37
Sebastian Zarnekow
Hi Vlad,

no, I'm not talking about the grammar but about the index. Obviously the
references are necessary in the grammar to provide IDE support.

Please dive deeper into the IResourceDescriptionStrategy and friends.

Regards,
Sebastian

Re: handling huge index [message #987119 is a reply to message #974925] Fri, 23 November 2012 15:43
Knut Wannheden
Messages: 296
Registered: July 2009
Senior Member
Hi Vlad,

From what I understand you're facing scalability problems both in terms
of memory footprint and performance (a running build will block the user).

Regarding the exported objects, there is one more thing you lose besides
Ctrl-Shift-F3 by exporting fewer objects: the find references
operation will be "less precise". If you for instance only export the
top-level object of a resource, then that is what will be reported in the
results of the find references operation. But note that when you click on
one of the search results, it should still take you to the exact source
location where the reference is made.

From that it should also be clear that by reducing the number of
exported reference descriptions (e.g. only one per referenced target
object) the find references operation will only report those (unless you
tweak that). Also advanced operations like rename refactorings may need
some tweaking to work correctly.

Regards,

--knut

Re: handling huge index [message #987380 is a reply to message #987119] Mon, 26 November 2012 11:21
Vlad Dumitrescu
Thanks a lot for the clarification, Knut!

Unfortunately, reducing the number of exported objects seems to be the only way to at least have a chance of a useful application: with all of them enabled, we can't use a 32-bit JVM.

I hope that this isn't a systemic problem, but an implementation artifact that can be fixed. What needs to be in memory is only the index database (which should not require hundreds of megabytes) and the parse tree of one file, the one currently handled.

best regards,
Vlad

Re: handling huge index [message #987394 is a reply to message #987380] Mon, 26 November 2012 12:57
Knut Wannheden
Hi Vlad,

Reducing the number of exported objects is indeed quite important. Yet
there is one more problem with that approach which I should point out:
It is the exported objects which control when the builder should
invalidate and rebuild dependent sources in an incremental build. As
long as a resource exports an identical set of exported objects, no
dependent resources will be rebuilt.

So if you for a resource only export a single top-level object but not
any nested "declarations", you may find that an incremental rebuild
doesn't properly rebuild all dependent resources. The standard solution
to that problem is to encode some kind of signature into the exported
object's user data (have a look at XbaseResourceDescriptionStrategy and
JvmDeclaredTypeSignatureHashProvider for an example).
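The signature idea boils down to hashing the externally visible declarations of a resource; here is a plain-JDK sketch (the SignatureHash class and its input are made up for illustration; in Xtext the resulting string would be stored in the exported object's user-data map, along the lines of what XbaseResourceDescriptionStrategy does):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

// Hypothetical helper: hash the externally visible signature of a resource's
// top-level object, so dependent resources are rebuilt only when it changes.
public class SignatureHash {

    public static String hash(List<String> visibleDeclarations) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            for (String decl : visibleDeclarations) {
                md.update(decl.getBytes(StandardCharsets.UTF_8));
                md.update((byte) '\n'); // separator so ["ab","c"] != ["a","bc"]
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b)); // hex-encode the digest
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is always available
        }
    }
}
```

As long as the hash stays identical, the builder sees an unchanged exported object and skips dependent resources.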

If you feel really adventurous you could take a look at my Xtext fork
where I've implemented an Xtext index which is persisted in a H2
relational database: https://github.com/knutwannheden/xtext. Using that
approach I've been able to work with large indexes (30K+ resources with
500K+ exported objects and 2.5M+ references) even on 32-bit JVMs. I hope
to clean that up and contribute it to the Xtext framework.

Regards,

--knut

Re: handling huge index [message #987396 is a reply to message #987394] Mon, 26 November 2012 13:12
Vlad Dumitrescu
Good points, Knut, thanks!

It would be cool if your H2 work got accepted into core Xtext! I will give it a try on my projects to see how much it helps.

The most important takeaway for me is that making all this work properly will take a lot more time than I estimated at the beginning, and there's no way I can keep the original plan. We'll see what the managers say about that...

regards,
Vlad

Re: handling huge index [message #987597 is a reply to message #987396] Tue, 27 November 2012 09:56
Knut Wannheden
Hi Vlad,

It would be interesting to hear how the H2 index works for you in case
you get a chance to test it.

Regards,

--knut

Re: handling huge index [message #988319 is a reply to message #987597] Thu, 29 November 2012 11:49
Vlad Dumitrescu
Hi Knut,

I gave your implementation a try, and there's good news and there's normal news :)

In the first round of tests, where I had a large project and the max memory was on the lower end of the scale (1GB), there is a significant difference: the validation/update phase took almost half the time compared to base Xtext (6:15 minutes against 11:00).

I then tried with more memory available, and the results were on par.

So I would conclude that the H2 implementation uses memory better and there is less GC when memory is scarce.

regards,
Vlad

Re: handling huge index [message #988876 is a reply to message #988319] Mon, 03 December 2012 14:13
Knut Wannheden
Hi Vlad,

Thank you very much for your feedback! Your observations pretty much
match what I've observed:

1. Even though the data is externalized into a database, there is nearly
no observed performance degradation.
2. When memory is very low both implementations will become slower.
3. That being said, the memory requirements for the H2 backed index are
generally lower, as the reference descriptions and imported names are
not kept in memory. (For small projects the database backed index will
actually require more memory, as there is a certain overhead introduced
by H2's own caches.)
4. The database backed index can work with considerably larger
workspaces before becoming very slow or even crashing with an OOME.
5. One other nice side effect is that if Eclipse crashes the last built
index is still there after restart.

If I get the implementation polished up in time, I think it should at
least be possible to contribute the Xtext API extensions for the 2.4
release.

Regards,

--knut
