Lean serializer [message #1806027]
Tue, 30 April 2019 19:16
Adriano Carvalho (Messages: 54, Registered: November 2018), Member
Hello,
I have a model with around 70000 nodes/attributes. If I update the model using the model editor (generated from the metamodel inferred by Xtext) and then save, it takes around 2 minutes.
As expected, Xtext's generated code tries to preserve the original format of the file. The actual formatting, comments, and whitespace are not important to me. The non-hidden tokens, on the other hand, are important, since they feed an external application.
Given this, I would like to know how I can configure the serializer to be as lean as possible.
I have read that the problem is related to the SemanticSequencer that is configured/generated. In the past there seems to have been a GenericSemanticSequencer that could solve this, but it cannot be found anymore.
I have also read that this may be caused by ambiguities in the grammar. Is it possible that, even with a lean serializer, I will still have the same performance issues?
Best regards.
Re: Lean serializer [message #1806077 is a reply to message #1806027]
Thu, 02 May 2019 11:10
Quote:
I have also read that this may be caused by ambiguities in the grammar.
I hope you don't have ambiguities, and especially don't have backtracking enabled.
Quote:
Is it possible that even with a lean serializer I will still have the same performance issues?
Possibly yes. You should first use a profiler to see why the process takes 2 minutes; that will show which part of the serialization takes so much time. Two minutes is quite unusual, even for a node model of your size.
Re: Lean serializer [message #1806111 is a reply to message #1806100]
Thu, 02 May 2019 19:03
You will have enabled backtracking in the generator workflow for your language (the .mwe2 file). Unfortunately, you have opened Pandora's box with that. Backtracking is usually enabled when there are ambiguities in the grammar and one does not know how to define the language without them.
I can only recommend getting rid of it. This will be a major refactoring. First, cover your language extensively with parser/linker unit tests (let me guess: there are none, or not many). Then recreate your grammar without backtracking, feature by feature, and cover everything with tests.
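For reference, the backtracking switch lives in that generator workflow. A sketch of what it typically looks like in a recent Xtext .mwe2 file (the language name and file extension are placeholders, and the exact fragment layout varies between Xtext versions), so you know which flag to remove:

```mwe2
language = StandardLanguage {
    name = "org.example.MyDsl"   // placeholder language name
    fileExtensions = "mydsl"     // placeholder file extension
    parserGenerator = {
        options = {
            backtrack = true     // this is the flag to get rid of
        }
    }
}
```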
Re: Lean serializer [message #1806137 is a reply to message #1806132]
Fri, 03 May 2019 10:48
BacktrackingSemanticSequencer is the default fallback:

public void configureGenericSemanticSequencer(com.google.inject.Binder binder) {
    binder.bind(ISemanticSequencer.class)
        .annotatedWith(GenericSequencer.class)
        .to(BacktrackingSemanticSequencer.class);
}

so I doubt that is the problem.
As I said: create a unit test that reproduces the problem and profile there.
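If a full profiler is not at hand yet, a crude first step is to time the suspect call in isolation inside such a test. A minimal stdlib-only sketch (the workload below is a stand-in; in a real Xtext test you would invoke your language's serializer on the loaded resource instead):

```java
// Minimal timing helper: wraps any Runnable and reports elapsed wall-clock time.
public class TimingSketch {

    // Returns elapsed milliseconds for one run of the given workload.
    static long timeMillis(Runnable workload) {
        long start = System.nanoTime();
        workload.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            // Placeholder workload standing in for something like
            // serializer.serialize(model) in an Xtext unit test.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100_000; i++) {
                sb.append(i);
            }
        });
        System.out.println("workload took " + elapsed + " ms");
    }
}
```

This only narrows the problem down to one call; a sampling profiler is still the right tool once you know which phase to look at.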
By the way, which tool do you use for profiling?
Twitter : @chrdietrich
Blog : https://www.dietrich-it.de
Re: Lean serializer [message #1806140 is a reply to message #1806137]
Fri, 03 May 2019 11:48
Adriano Carvalho (Messages: 54, Registered: November 2018), Member
I've found a solution to my problem: when changing a model with the EMF model editor, keep the Xtext editor closed.
You are correct. Based on the very same page you mentioned, I used ANTLRWorks 1.5.2 to find and fix all ambiguities, and the problem still persisted. Then, using the debugger, I realized that the serializer was actually running in under 10 seconds. The long running time is caused by what happens after the serializer runs, and it only happens when the Xtext-based editor is also open.
From my limited knowledge of the Eclipse platform, I deduce the root cause is in the reconciler. Is this a known problem, or is it just me?
Keeping the Xtext-based editor closed works for me, but if you think this is an issue I can provide more information. Just let me know what you need.
Thank you very much.
[Updated on: Fri, 03 May 2019 11:51]
Re: Lean serializer [message #1806141 is a reply to message #1806140]
Fri, 03 May 2019 12:06
This might be caused by the dirty editor state combined with bad performance in scoping.
You might also want a custom CrossReferenceSerializer that does not use the scope provider.
And whatever runs after serialization should be profiled too.
The reconciliation problems are basically there in all tools, even Java editors with many tens of thousands of lines.
I don't know what specifically causes this in your case; it should happen on manual edits too.