Lean serializer [message #1806027]
Tue, 30 April 2019 15:16
Eclipse User |
Hello,
I have a model with around 70,000 nodes/attributes. If I update the model in the model editor (generated from the metamodel inferred by Xtext) and then save it, saving takes around 2 minutes.
As expected, Xtext's generated code tries to preserve the original format of the file. The actual formatting, comments, and whitespace are not important to me. The non-hidden tokens, on the other hand, are important, since they feed an external application.
Given this, how can I configure the serializer to be as lean as possible?
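For illustration, something along these lines is what I mean by a "lean" save: re-format from scratch instead of reconciling every change against the original node model. This is only a sketch, assuming an already loaded XtextResource named "resource" that holds the model; I do not know whether this is the right lever.

import java.util.Map;

import org.eclipse.xtext.resource.SaveOptions;
import org.eclipse.xtext.resource.XtextResource;

public class LeanSave {
    public static void save(XtextResource resource) throws java.io.IOException {
        Map<Object, Object> options = SaveOptions.newBuilder()
                .format()        // emit freshly formatted text, drop the original layout
                .noValidation()  // skip concrete-syntax validation during save
                .getOptions()
                .toOptionsMap();
        resource.save(options);
    }
}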
I have read that the problem is related to the SemanticSequencer that is configured/generated. In the past there seems to have been a GenericSemanticSequencer that could have solved the issue, but it can no longer be found.
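If I understand correctly, the place to plug in a different sequencer would be the generated runtime module. A sketch of the kind of binding I am looking for, assuming a grammar called MyDsl; LeanSemanticSequencer is hypothetical, not a class that exists:

import org.eclipse.xtext.serializer.sequencer.ISemanticSequencer;

public class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {
    // Replace the generated semantic sequencer with a leaner one, if such a
    // thing exists or can be written.
    public Class<? extends ISemanticSequencer> bindISemanticSequencer() {
        return LeanSemanticSequencer.class;
    }
}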
I have also read that this may be caused by ambiguities in the grammar. Is it possible that I would still have the same performance issues even with a lean serializer?
Best regards.