Home » Modeling » TMF (Xtext) » Problem with whitespace and terminals
Problem with whitespace and terminals [message #882792]
Thu, 07 June 2012 08:03
Axel Guckelsberger
Registered: July 2009
I am working on a grammar that has just been imported from an existing one.
When generating the language infrastructure I receive dozens of warnings like these:
gen/mydsl/parser/antlr/internal/InternalMyDsl.g:16194:1: Multiple token rules can match input such as "'D'": T__94, T__123, T__127, T__187, T__189, T__194, T__206, T__221, T__229, T__230, T__265, T__283, T__284, RULE_ID
As a result, token(s) were disabled for that input
So as soon as multiple keywords start with the same letter, they seem to
become competitors. Probably a lexer problem?
When I removed the Terminals grammar and just used the few rules of it that
I needed, like ID and STRING, suddenly everything worked fine, except that
I no longer had comment functionality. If I copy the whitespace declarations
(hidden()) as well, the problems occur again.
Any idea how to prevent the whitespace rules from breaking my grammar?
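For context, here is a minimal sketch of the kind of setup involved (hypothetical keywords and rule names, not my actual grammar) — keywords sharing a first letter with each other and with RULE_ID, plus the hidden() terminals copied from org.eclipse.xtext.common.Terminals:

```xtext
grammar my.dsl.MyDsl hidden(WS, ML_COMMENT, SL_COMMENT)

import "http://www.eclipse.org/emf/2002/Ecore" as ecore

Model: elements+=Element*;

// 'DATE' and 'DAY' share the prefix 'D' with each other and overlap
// with RULE_ID, which is the situation the ANTLR warning describes.
Element: 'DATE' name=ID | 'DAY' name=ID;

terminal ID: ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*;
terminal WS: (' '|'\t'|'\r'|'\n')+;
terminal ML_COMMENT: '/*' -> '*/';
terminal SL_COMMENT: '//' !('\n'|'\r')* ('\r'? '\n')?;
```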
Re: Problem with whitespace and terminals [message #884674 is a reply to message #884663]
Mon, 11 June 2012 15:33
Henrik Lindberg
Registered: July 2009
On 2012-11-06 17:20, Axel Guckelsberger wrote:
> Thanks for the pointer. Indeed the warning messages do sometimes appear and
> sometimes not; it seems like a random choice.
> Any idea how to examine the grammar for the ambiguous parts? I tried
> opening and checking the .g file with antlrworks, but the grammar validated
> fine there.
> Karsten Thoms wrote:
>> It usually is a sign that your grammar has become ambiguous. This can
>> partially be solved by semantic predicates, or by enabling backtracking.
>> Backtracking should be avoided when possible, since it can lead to
>> unpredictable results.
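(In Xtext, the semantic predicates Karsten mentions correspond to syntactic predicates written with '=>'; a minimal, hypothetical sketch of the classic dangling-else case:)

```xtext
// Without the predicate the parser cannot decide which 'if' an
// 'else' belongs to; '=>' tells it to bind the 'else' to the
// innermost 'if' instead of reporting an ambiguity.
IfStatement:
    'if' condition=Expression then=Statement
    (=>'else' else=Statement)?;
```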
Do you have backtracking turned on? That can mask lots of grammar problems.
I had similar issues with a complex (backtracking) grammar, and I had to
do two things: make sure the grammar generator had plenty of memory,
and that it did not time out (a parameter can be set in the workflow for
that). I also had to make sure nothing heavy was running on my machine
at the same time.
Later I reduced the complexity (moving things to an external lexer),
which had several positive effects, one being that far fewer resources
were required due to a big reduction in tokens/rules.
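(The workflow parameters mentioned above live in the ANTLR generator fragment of the MWE2 workflow; roughly like the following — option and parameter names as in Xtext 2.x, adjust for your generator version:)

```mwe2
fragment = parser.antlr.XtextAntlrGeneratorFragment {
    options = {
        backtrack = false    // keep off if possible: it hides ambiguities
        classSplitting = true
    }
    // Give ANTLR's grammar analysis more time before it gives up:
    antlrParam = "-Xconversiontimeout"
    antlrParam = "10000"
}
```

(Memory for the generator itself is set via the JVM arguments of the workflow launch, e.g. -Xmx, not inside the .mwe2 file.)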