Xtext Serialization: how to benchmark? [message #1790319] |
Fri, 08 June 2018 10:59 |
Hi all,
what is the best/fairest way to benchmark the serialization performance of Xtext for a given grammar? Since the results of the benchmark will influence the decision whether we will use Xtext, I want to make sure it gets a fair chance. Is org.eclipse.xtext.serializer.impl.Serializer#serialize() the right entry point for a JMH benchmark? Does it always perform validation during serialization, or can that be omitted if the model is known to be valid?
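For reference, here is a rough sketch of what such a JMH benchmark might look like. This is an assumption-laden sketch, not a verified setup: "MyDsl" and "example.mydsl" are placeholders for your own generated language and input file, and the Xtext and JMH dependencies must be on the classpath. It deliberately parses once in @Setup so the measured loop covers only the unparse step.

```java
// Hedged sketch of a JMH benchmark around Xtext's serializer.
// Assumptions: a generated Xtext language named "MyDsl" (hence the
// MyDslStandaloneSetup class) and a valid model file "example.mydsl".
import java.util.concurrent.TimeUnit;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.resource.XtextResourceSet;
import org.eclipse.xtext.serializer.ISerializer;
import org.openjdk.jmh.annotations.*;

import com.google.inject.Injector;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class SerializerBenchmark {

    private ISerializer serializer;
    private EObject model;

    @Setup
    public void setUp() {
        // Standalone setup as generated for a hypothetical language "MyDsl"
        Injector injector =
            new MyDslStandaloneSetup().createInjectorAndDoEMFRegistrations();
        XtextResourceSet rs = injector.getInstance(XtextResourceSet.class);
        // Parse once, outside the measured loop
        Resource r = rs.getResource(URI.createFileURI("example.mydsl"), true);
        model = r.getContents().get(0);
        serializer = injector.getInstance(ISerializer.class);
    }

    @Benchmark
    public String serialize() {
        // Measures only the unparse step; returning the result
        // prevents JMH from treating it as dead code
        return serializer.serialize(model);
    }
}
```

Whether validation inside serialize() can be switched off for a known-valid model is exactly the open question of this thread; the sketch uses the default behavior.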
Thanks,
Michael
Re: Xtext Serialization: how to benchmark? [message #1790326 is a reply to message #1790321] |
Fri, 08 June 2018 11:40 |
I'm wondering why this would be such an important point in a decision for or against Xtext. And if you benchmark at that level, against what would you benchmark other frameworks? Of course you could use that entry point to benchmark something, but the performance is heavily influenced by the factors Christian mentioned. I doubt you would get useful results for your decision that way, but go ahead if you think you would.
In our experience, quite different aspects drive the decision on Xtext. Performance is one of them, but parsing, validation etc. usually matter more than serializing. And even more important: the type of language you are developing, integration with Eclipse/VS Code etc., code-generation support, rich editor support, flexibility, robustness, and a long life, past and future.
We would like to understand your intention and use case better so we can give sound advice on whether Xtext could be the right choice for you.
Kind regards,
~Karsten
Re: Xtext Serialization: how to benchmark? [message #1790334 is a reply to message #1790321] |
Fri, 08 June 2018 11:57 |
Hi Christian,
I am not very familiar with Xtext but I'll try to be as specific as possible:
- standalone
- default (probably no formatter?)
- no references between files
- just one full parse/unparse cycle (no changes)
- default (probably no change serializer?)
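The single parse/unparse cycle above could also be timed by hand along the following lines; JMH automates the warm-up, forking and statistics, but the basic shape is the same. The workload here is a stand-in string-building operation (not an Xtext call), purely to illustrate the pattern of warming up before measuring and keeping the result live.

```java
// Self-contained sketch of the measurement discipline JMH automates:
// warm up the JIT first, then time many iterations of the operation.
// The real benchmark would put the serialize() call in the Supplier.
import java.util.function.Supplier;

public class MicroBench {

    static double measureMicros(Supplier<Object> op, int warmup, int iters) {
        for (int i = 0; i < warmup; i++) {
            op.get();                       // warm-up: JIT compilation, caches
        }
        long start = System.nanoTime();
        Object sink = null;
        for (int i = 0; i < iters; i++) {
            sink = op.get();                // measured loop
        }
        long elapsed = System.nanoTime() - start;
        if (sink == null) {
            throw new IllegalStateException(); // keep the result observable
        }
        return elapsed / 1000.0 / iters;    // average microseconds per op
    }

    public static void main(String[] args) {
        // Stand-in workload; replace with the parse/unparse cycle under test
        double micros = measureMicros(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1000; i++) {
                sb.append(i);
            }
            return sb.toString();
        }, 10_000, 100_000);
        System.out.println("avg micros/op = " + micros);
    }
}
```

Hand-rolled loops like this are prone to dead-code elimination and warm-up artifacts, which is a good reason to prefer JMH for the real numbers.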
Regards
Michael
Re: Xtext Serialization: how to benchmark? [message #1790337 is a reply to message #1790334] |
Fri, 08 June 2018 12:13 |
Hi Karsten,
> And if you would benchmark on that level, against what would you benchmark other frameworks?
Yes.
We don't make decisions based on performance alone; that would be very unprofessional and short-sighted. For the other features/aspects you mentioned we can find resources and documentation, e.g. on LSP support, which seems very promising. On performance, however, I couldn't find much, which is why I asked specifically about that.
Thanks for your help
Michael
Re: Xtext Serialization: how to benchmark? [message #1790357 is a reply to message #1790341] |
Fri, 08 June 2018 14:16 |
|
Please keep in mind that we are talking about a benchmark ;) To measure serialization performance, we need a model instance to serialize. Acquiring this instance from a textual representation makes sense, since we are comparing different modeling frameworks. I really don't see the problem.
Re: Xtext Serialization: how to benchmark? [message #1790362 is a reply to message #1790359] |
Fri, 08 June 2018 15:15 |
Hi Christian,
the car analogy doesn't get us anywhere.
I already explained that our evaluation is not limited to this benchmark, but for some of our use cases it is relevant. In those performance-critical use cases we will have no references and no special formatter, so why should they matter here?
Getting back to my original question: is Serializer#serialize() the right entry point for a JMH benchmark?
As I said, I couldn't find much credible information related to performance. If there's documentation that I didn't find, let me know.
Regards,
Michael