How to implement efficient type inferrer/provider? [message #1410719]
Thu, 21 August 2014 13:03
Zoltan Ujhelyi
Hi all,
The type inference implementation of EMF-IncQuery was very slow during editing, so we made the type-checking validators execute only on save; still, I am looking for ways to speed it up. The problem is that for type inference to work in our language, it is not enough to check a single reference: a larger part of the model needs to be traversed. Sadly, this requires multiple traversals of the entire model during editing (e.g. during JvmModelInference and validation).
One obvious way to speed up the process would be to cache the calculated types for later use; however, I have no idea how to make sure that everything is disposed once the model changes.
I have also tried to understand how the BatchScopeProvider and ReentrantTypeResolver solve this issue, but I got lost among the different service implementations.
I would be very grateful for pointers or ideas on where to look. I am interested either in a service that is notified when something changes, so that a cache can be cleared, or in a pointer explaining how the BatchScopeProvider implementations handle this issue.
Thanks for the help,
Zoltán
Re: How to implement efficient type inferrer/provider? [message #1411475 is a reply to message #1410719]
Sat, 23 August 2014 12:49
On 21/08/2014 15:03, Zoltan Ujhelyi wrote:
> [original message quoted in full]
Hi,
Xtext provides OnChangeEvictingCache, which automatically clears the cache when the model (more precisely, the resource) "semantically" changes. But be careful: the JvmModelInferrer usually invalidates the cache, since the inference itself adds something to the resource (namely, the inferred Java types).
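The idea behind OnChangeEvictingCache (in Xtext it lives in org.eclipse.xtext.util and is keyed per Resource) can be sketched without any Xtext dependency. This is a minimal, illustrative model, not the real API: computed values are kept per model state and thrown away wholesale as soon as the model reports a change, which here is simulated by a version number.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of a change-evicting cache: all entries are discarded as soon as
// the model's version differs from the one the cache was filled for.
// Class and method names are illustrative, not the real Xtext API.
class ChangeEvictingCache {
    private final Map<Object, Object> values = new HashMap<>();
    private long seenVersion = -1;

    <T> T get(Object key, long resourceVersion, Supplier<T> computer) {
        if (resourceVersion != seenVersion) { // model changed: evict everything
            values.clear();
            seenVersion = resourceVersion;
        }
        @SuppressWarnings("unchecked")
        T cached = (T) values.get(key);
        if (cached == null) {
            cached = computer.get();          // recompute once per model state
            values.put(key, cached);
        }
        return cached;
    }
}
```

In Xtext itself the analogous call is `cache.get(key, resource, provider)`, where eviction is triggered by EMF change notifications on the resource rather than by an explicit version argument.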
As an alternative, you might try to store the computed information in the model itself (you'll have to switch to a manually managed Ecore model), in a transient, unsettable field. During editing the model will be invalidated, so such fields will not contain stale data... I'm using this approach in https://github.com/LorenzoBettini/xsemantics
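The transient-field approach can be sketched in plain Java (all names here are illustrative, not taken from xsemantics or EMF-generated code): the model element carries a transient cache field that is never serialized and is reset whenever the element itself changes, so a reloaded or edited model always starts from an unset state.

```java
// Sketch of caching an inferred type inside the model element itself.
// The field is transient (never serialized) and null means "not yet
// computed for the current state of the element".
class PatternElement {
    private String body = "";
    private transient String cachedType;

    void setBody(String newBody) {
        this.body = newBody;
        this.cachedType = null;  // any semantic change discards the stale type
    }

    String getType() {
        if (cachedType == null) {
            cachedType = expensiveTypeInference(); // runs at most once per state
        }
        return cachedType;
    }

    // Stand-in for the real (expensive) type inference.
    private String expensiveTypeInference() {
        return body.isEmpty() ? "Object" : "TypeOf(" + body + ")";
    }
}
```

In a real EMF model the same effect comes from declaring the feature `transient="true"` and `unsettable="true"` in the .ecore file and resetting it from the element's change notifications.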
hope this helps
Lorenzo
--
Lorenzo Bettini, PhD in Computer Science, DI, Univ. Torino
HOME: http://www.lorenzobettini.it
TDD Book: https://leanpub.com/tdd-buildautomation-ci
Xtext Book: https://www.packtpub.com/application-development/implementing-domain-specific-languages-xtext-and-xtend-second-edition