Re: TTCN-3 Language Server [message #1838341 is a reply to message #1838299]
Mon, 22 February 2021 16:18
Matthias Simon (Junior Member, Messages: 6, Registered: February 2021)
Quote:
How much of an overhead it would give to the semantic checking
A language server is just a standalone binary that gets started by the editor as a subprocess. They communicate via a pipe and exchange only simple messages, so the overhead for semantic checking should be rather small. It depends on the implementation, though.
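To give a minimal sketch of that exchange (assuming an `ntt langserver` subcommand, or any other LSP server that speaks over stdin/stdout; this is not an excerpt from any particular editor): the editor spawns the server as a subprocess and frames each JSON-RPC message with a Content-Length header.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Spawn the language server the way an editor would.
	// The "ntt langserver" command is assumed here; any LSP server binary
	// that talks over stdin/stdout works the same way.
	cmd := exec.Command("ntt", "langserver")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		panic(err)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// LSP messages are plain JSON-RPC payloads framed by a Content-Length header.
	body := `{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"processId":null,"rootUri":null,"capabilities":{}}}`
	fmt.Fprintf(stdin, "Content-Length: %d\r\n\r\n%s", len(body), body)

	// Read the server's reply (the InitializeResult) from the pipe.
	buf := make([]byte, 4096)
	n, _ := stdout.Read(buf)
	fmt.Println(string(buf[:n]))
}

In practice the editor keeps this subprocess alive for the whole session and multiplexes all requests (completion, jump to definition, diagnostics, ...) over the same pipe.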
Writing a TTCN-3 language server is a relatively simple task: just "out-source" all requests to external tools like ctags or the Titan TTCN-3 compiler and voilà, a language server.
Writing a performant TTCN-3 language server, however, is a totally different story. Compared to a classic compiler, a language server has very different requirements, because it interacts with the user:
- it must expect incomplete and incorrect input.
- it must be responsive (<100ms; low latency is more important than high throughput).
- it should do its work incrementally, on demand/lazily.
One design goal of ntt is to fulfill those requirements. But we are just at the beginning of our journey.
Quote:
To what target binaries can you compile and how do they perform?
ntt itself is available for various platforms and architectures, but when you speak about target binaries you are probably referring to the test executable (TE).
A language server is indeed very much like a compiler, but usually without a code generation phase. Also, ntt is still very young and only the parsing phase is implemented so far.
Quote:
what language version is supported?
ntt parses a superset of TTCN-3 4.10.1 (import redirections are missing) and understands most parts of the standardized extensions for documentation comments, performance and real-time testing, advanced parameterization, and behaviour types, as well as most non-standard extensions from Titan.
I plan to update the parser to 4.12.1 as soon as I find some spare time.
Quote:
What is the performance like?
The parser is quite fast. On my laptop, two million lines of TTCN-3 code take about 18 seconds to parse with our old Flex/Bison-based C++ parser; ntt needed only about 700 ms when I measured it two years ago.
The jump-to-definition feature in the language server takes about 40 μs to 200 ms.
I am also in love with the idea of connecting this parser to an LLVM generator plugin to skip Titan's C++ generation completely. But this is a different story and I don't have any numbers yet.
Re: TTCN-3 Language Server [message #1838375 is a reply to message #1838341]
Tue, 23 February 2021 16:01
Kristof Szabados (Member, Messages: 82, Registered: July 2009)
Hi Matthias,
Yes, I believe we also have this functionality (although only for Eclipse).
Let me just be a bit proud of our achievements for a moment here ;)
- it not only handles incomplete and incorrect input ... but also offers help to correct it via code completion.
- responsiveness is usually on the 10^-4 second scale, but if you're not listening to music through YouTube running in the background it can reach the 10^-9 second scale (but usually developers do that).
Keeping the data in the application's memory helps a lot here.
Also, calling Java functions to reach Java data structures inside the same application keeps communication/administrative costs quite low.
- incrementality: we have incremental parsing and incremental semantic analysis (incremental parsing actually works on two levels; in the best-case scenario we can get away with re-parsing single characters or words, with processing time not noticeable to the user; see the sketch below for the coarsest, module-level form of this idea).
+ it also scales well with the core/logical processor count of the machine (which is also kind of expected when developers work on 8-64 core machines).
+ jump to definition, outline, code completion, etc...
+ code quality checking, architecture visualization, and refactoring plugins are available to extend the capabilities.
Parsing + semantic checking for 1-1.5 million lines of TTCN-3 and ASN.1 code usually takes around 1-2 seconds on a modern laptop (using ANTLR for parsing ... and all of the above-mentioned features).
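As a rough, hypothetical sketch of the module-level caching idea mentioned above (this is not the actual Eclipse plug-in code, which is written in Java and also re-parses at word/character granularity): cache each module's parse result and re-parse only the modules whose source has changed.

package main

import "fmt"

// AST is a stand-in for a real syntax tree.
type AST struct{ Module string }

// cache maps a module's file name to its last parse result.
var cache = map[string]AST{}

func parse(name, src string) AST {
	fmt.Println("parsing", name) // the expensive work happens only here
	return AST{Module: name}
}

// analyze re-parses only the modules marked as changed and reuses the
// cached result for everything else.
func analyze(sources map[string]string, changed map[string]bool) map[string]AST {
	for name, src := range sources {
		if _, ok := cache[name]; !ok || changed[name] {
			cache[name] = parse(name, src)
		}
	}
	return cache
}

func main() {
	srcs := map[string]string{"a.ttcn3": "module a {}", "b.ttcn3": "module b {}"}
	analyze(srcs, map[string]bool{})                // first run: both modules parsed
	analyze(srcs, map[string]bool{"b.ttcn3": true}) // later run: only b.ttcn3 re-parsed
}

A real implementation also has to track imports between modules so that a change can invalidate the semantic information of dependent modules; this sketch leaves that out.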
Sadly, the compilation via Java is not yet complete enough to build those large projects fully ... but we usually measure around 50x better resource usage compared to compiling on the C side (it might even beat compiling to LLVM internals directly, but that has to be measured ;) ).
Well ... as far as I understand (based on your description), the only limitation on our side might be that we support only Eclipse.