Eclipse Community Forums
Fine-grained control of caching in Xtext [message #1810787] Mon, 19 August 2019 16:36
Steffen Zschaler
Hi,

This question seems related to https://www.eclipse.org/forums/index.php/mv/msg/639125/1233703/ and https://www.eclipse.org/forums/index.php/m/1238706/?srch=cache+eviction . Since neither seems to have led to a resolution, and I couldn't find one anywhere else, I'm asking again in the context of my specific problem:

Are there good ways, in Xtext, to install a cache that gives more fine-grained control over when the cache is evicted? Specifically, I am looking for ways of implementing a cache that is evicted only if a particular sub-tree of a resource's contents changes. My use case is that my DSL has some fairly expensive scoping rules that require state to be derived from a sub-tree in the model so that I can resolve cross-references in subsequent parts of the model (note there is no cyclic linking; it's all declare first, use later). This is beginning to become noticeable during editing.

I realise that I can build my own cache implementation and attach it to the relevant EObjects, similar to what OnChangeEvictingCache does. However, the problem is that the cache still gets evicted during editing for any change, even outside the sub-tree, as Xtext will remove and re-add all cross-references on any edit. From inside my cache implementation there doesn't seem to be any way to differentiate situations where I actually have to evict the cache -- a cross-reference in my sub-tree has actually changed -- from situations where the cache does not need to be evicted -- something else in the resource has changed and Xtext is just running a full re-link. Equally, if I move the responsibility for the calculation into a derived state computer, I do not have enough information about whether a recalculation is needed for a specific sub-tree or not. If anything, I have less information, because Xtext will simply call discardDerivedState at certain times, including on every edit, even when I have just added a bit of whitespace.

For the resource-level cache, Xtext seems to try to solve this issue via execWithoutCacheClear. However, my experiments seem to indicate that this isn't used when the reconciler removes diagnostics; because these are kept in a list, the check for semantic changes doesn't recognise that this isn't a semantic change, and the cache gets evicted anyway...

For the derived state resource, there doesn't seem to be any mechanism in place that allows more control over whether an eviction is actually needed.

For clarity: all these caching mechanisms work beautifully when the source object and the object referencing it are in two different resources. However, when they are in the same resource, every edit anywhere in the resource evicts all caches.

The reason is, of course, that Xtext has no good way of knowing that the linking done inside the source object wasn't affected by my current edit. Hence, it has to be conservative, assume everything may have changed, and recompute all the linking information, thus "changing" the source object and evicting the cache in the process. In my language, however, scoping is strictly backward-looking, so changes further down in the file cannot affect anything written further up.

I believe the Xbase implementation would have to address similar issues, but I cannot figure out where. My language isn't based on Xbase, and I don't see how I could move to an Xbase-based implementation, as my language isn't really like a programming language at all and there is no meaningful direct translation to Java.

Is there a way to set up Xtext caching so that it can make use of this knowledge and avoid evicting a cache if the relevant sub-graph hasn't actually changed? How have others solved this problem in their languages?

Many thanks,

Steffen

[Updated on: Mon, 19 August 2019 16:39]


Re: Fine-grained control of caching in Xtext [message #1810802 is a reply to message #1810787] Mon, 19 August 2019 19:32
Christian Dietrich
Well, there is no such thing as a partial change, so you would have to build your own cache using e.g. node model hashes as keys.
But depending on what you want to do, this won't help either,
since you will cache outdated stuff.
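[Editor's sketch] Christian's suggestion could look roughly like this, in plain Java. In Xtext, the key would come from the node model (e.g. the text returned by NodeModelUtils.getNode(root).getText()); the class and method names here are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/**
 * Hypothetical cache keyed by the source text of the sub-tree a value was
 * computed from: as long as the sub-tree's text is unchanged, edits anywhere
 * else in the resource do not invalidate the entry.
 */
public class SubTreeTextCache<V> {
    private final Map<String, V> entries = new HashMap<>();

    /** Recomputes only when the sub-tree's text has actually changed. */
    public V get(String subTreeText, Supplier<V> computer) {
        return entries.computeIfAbsent(subTreeText, k -> computer.get());
    }

    /** Christian's caveat: entries for old text versions linger here. */
    public int size() {
        return entries.size();
    }
}
```

Note the caveat Christian raises: entries computed for outdated versions of the text are never evicted, so without explicit pruning the cache accumulates stale values.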

But what I don't get is:
what is so expensive that even calculating it only once per resource is too expensive?
Maybe you have an algorithmic problem there.


Twitter : @chrdietrich
Blog : https://www.dietrich-it.de
Re: Fine-grained control of caching in Xtext [message #1810815 is a reply to message #1810802] Tue, 20 August 2019 07:43
Steffen Zschaler
Hi Christian,

Many thanks for your prompt (as usual!) response. Please find some responses below:


  1. Not sure what you mean by "there is no such thing as a partial change". What I mean is something like this:

    Assume I have a simple grammar where a model contains a list of elements (each of which may have its own internal structure). As part of its internal structure, an element may contain cross-references to other elements, but these must all be provided by elements that are either in other files or declared before the referencing element.

    Now assume I have a model that has two elements -- A and B -- in that order. B might reference something in A, but A cannot reference anything in B. Clearly, if I edit the part of the document that corresponds to A, the entire abstract syntax graph needs to be relinked. But if I only make changes to B, there shouldn't be a reason to relink stuff in the A sub-tree of the containment hierarchy.

    Xtext already does incremental parsing, so at some point it is well aware that only B's sub-tree has changed. Without additional information, the linker has no way of knowing that this also means there is no need to relink A (after all, I haven't explicitly declared that references cannot point forward). What I am trying to figure out is at which point Xtext still knows that only part of the graph has changed, and how I might hook into this to create a slightly smarter linker.

  2. Complexity of computation: My language allows users to express mappings between certain types of entities and to use them to describe transformations producing new entities. For a full specification, you typically need to chain these, so that a mapping may use the results of a preceding transformation. To implement scoping, code completion, validation etc. in these cases, I need to actually run the preceding transformation, which can be costly. Running the transformation once per resource isn't an issue; running it on every keystroke is. Hence I am looking at efficient caching strategies and finding that, unfortunately, the default cache seems to get invalidated too often to help.
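[Editor's sketch] The chained-mappings cost in point 2 can be made concrete with a toy model in plain Java (all names here are invented; the real transformations are whatever the DSL defines). Each step is cached by "result so far plus mapping", so editing a later mapping re-runs only the steps after the edit.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy model of chained mappings whose results feed later elements' scopes.
 * The scope for element i needs the results of all transformations before it.
 */
public class ChainedMappings {
    static int transformationRuns = 0; // counts executions of the expensive step

    /** Stand-in for actually executing one mapping (the costly part). */
    static String runTransformation(String input, String mapping) {
        transformationRuns++;
        return input + "->" + mapping;
    }

    private final Map<String, String> resultCache = new HashMap<>();

    /** Chains all mappings before element i, reusing cached intermediate results. */
    public String scopeFor(List<String> mappings, int i) {
        String result = "input";
        for (int j = 0; j < i; j++) {
            String m = mappings.get(j);
            String key = result + "|" + m; // result so far + this mapping
            final String in = result;
            result = resultCache.computeIfAbsent(key, k -> runTransformation(in, m));
        }
        return result;
    }
}
```

With per-step caching, an edit to the last mapping in a chain of three costs one transformation run instead of three, which is exactly the difference between "once per resource" and "once per keystroke" that the post describes.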


Thanks,

Steffen
Re: Fine-grained control of caching in Xtext [message #1810821 is a reply to message #1810815] Tue, 20 August 2019 08:38
Christian Dietrich
Well, the question is whether the partial parsing works as expected.
In principle, you could use the objects themselves as cache keys and listen for changes to them using a normal (non-content) adapter in EMF.


Re: Fine-grained control of caching in Xtext [message #1810828 is a reply to message #1810821] Tue, 20 August 2019 09:42
Steffen Zschaler
Well, I've tried this.

I want to make sure the cache is evicted whenever there is a semantic change to an object -- that is, to a sub-tree in the containment hierarchy. So I built a cache that uses the NonRecursiveEContentAdapter, similar to what OnChangeEvictingCache does, but attaches the adapter only to the root of the sub-tree (and, thereby, to every object in that sub-tree). That didn't work too well, for two reasons:


  1. The cache still got evicted every time I made any change anywhere in the document. The reason appeared to be that Xtext was discarding all cross-references from inside the sub-tree and then repopulating them. In other words, the linker did a complete, non-incremental relink every time a change occurred. Some investigation into the Xtext source code suggested that at least some parts of the linker use execWithoutCacheClear to keep the standard OnChangeEvictingCache from being evicted, so I looked into hooking into that infrastructure. Unfortunately, I haven't been particularly successful there either; see the next point.
  2. I reorganised my cache so that the cache adapter was derived from OnChangeEvictingCache.CacheAdapter and could be hooked to the resource, but without hooking itself in everywhere inside the resource. This way, it could be found by execWithoutCacheClear, but would hopefully still only be evicted on changes in the particular sub-tree it is interested in. This didn't work either: it turns out that one of the first things the XtextResource does after a change in the editor is to clear all warnings and errors from the resource, which clears the lists of Diagnostics held by the resource. The check for semantic changes in OnChangeEvictingCache.CacheAdapter.isSemanticStateChange tests whether objects of type Diagnostic are being modified, but not whether a list referencing such objects is changed. Consequently, clearing the warnings and errors is interpreted as a semantic change on the resource and evicts the whole cache before we even get to the linking stage.


I later realised that my second cache implementation wouldn't have worked anyway, as OnChangeEvictingCache.CacheAdapter expects at most one cache adapter to be installed on the resource, and I had no control over whether that would be mine or the original one. So, as a next step, I would have to replace the IResourceScopeCache binding in the runtime module and change my implementation so that it could support cached items that only need clearing when a change has actually occurred in a particular sub-tree of the containment hierarchy. At that point, this was beginning to feel sufficiently complicated to make me wonder whether I was simply missing something blatantly obvious...
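[Editor's sketch] The replacement cache described in the last paragraph could be modelled like this in plain Java (all names invented; a real version would implement Xtext's IResourceScopeCache, be bound in the runtime module, and receive EMF Notifications instead of the string `feature` used here). Entries are grouped by the sub-tree root they depend on; only that group is cleared on a real change, and diagnostic-list changes are filtered out.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/**
 * Sketch of a cache whose entries are scoped to a sub-tree root: a change
 * inside one sub-tree evicts only that sub-tree's entries, and clearing the
 * resource's diagnostics evicts nothing at all.
 */
public class SubTreeScopedCache {
    private final Map<Object, Map<String, Object>> byRoot = new HashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T get(Object subTreeRoot, String key, Supplier<T> computer) {
        return (T) byRoot.computeIfAbsent(subTreeRoot, r -> new HashMap<>())
                         .computeIfAbsent(key, k -> computer.get());
    }

    /** Called by a change listener; `feature` names what changed. */
    public void notifyChanged(Object subTreeRoot, String feature) {
        // Clearing warnings/errors is bookkeeping, not a semantic change.
        if (feature.equals("errors") || feature.equals("warnings")) return;
        byRoot.remove(subTreeRoot); // evict only this sub-tree's entries
    }
}
```

This is only a model of the desired eviction policy, not of the hard part the thread identifies: deciding, from the notifications Xtext actually produces during a full relink, which sub-tree (if any) has semantically changed.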

From the discussion so far, though, it's beginning to feel like there isn't any existing support in Xtext that would help me do what I am looking for. Do I interpret this correctly?

Thanks,

Steffen
Re: Fine-grained control of caching in Xtext [message #1810832 is a reply to message #1810828] Tue, 20 August 2019 10:31
Christian Dietrich
Well, as I said: you need to write your cache in a way that it only considers the changes you want it to consider.
=> There is nothing existing for that that I would know of.


Re: Fine-grained control of caching in Xtext [message #1810852 is a reply to message #1810832] Tue, 20 August 2019 15:39
Steffen Zschaler
Thanks, that's helpful.