Infinite looping in partial parser (Analyzing Bug 360892 and others) [message #1176278] Fri, 08 November 2013 08:27
Jürgen Mutschall
Messages: 6
Registered: October 2013
Junior Member
Xtext Version 2.4.3 on Kepler SR1 Build id: 20130919-0819

Problem description:

With a grammar that uses rich strings, unrestricted identifiers, or complex quoted elements, the ANTLR-based partial parser, which runs as a background job in parallel to editing, sometimes loops forever or deadlocks the editor.

This problem has been described several times over the last three years, and several bug reports are open (e.g. Bug 360892).

Analysis:

During partial parsing of a rule, the Reconciler calls AbstractInternalAntlrParser.parse(entryRuleName), which uses the generated Internal{Lang}Parser.java in combination with {Lang}Parser.java.

If the ANTLR-based parser runs into a NoViableAltException, the exception is caught and the recover method is called in the hope that retrying the same rule will succeed.

Here is the code responsible for the recovery:

/** Recover from an error found on the input stream.  This is
 *  for NoViableAlt and mismatched symbol exceptions.  If you enable
 *  single token insertion and deletion, this will usually not
 *  handle mismatched symbol exceptions but there could be a mismatched
 *  token that the match() routine could not recover from.
 */
public void recover(IntStream input, RecognitionException re) {
	if ( state.lastErrorIndex==input.index() ) {
		// uh oh, another error at same token index; must be a case
		// where LT(1) is in the recovery token set so nothing is
		// consumed; consume a single token so at least to prevent
		// an infinite loop; this is a failsafe.
		input.consume();
	}
	state.lastErrorIndex = input.index();
	BitSet followSet = computeErrorRecoverySet();
	beginResync();
	consumeUntil(input, followSet);
	endResync();
}


The code works as long as there are tokens left in the input stream. But if input.consume() consumes nothing (for example, at EOF), the recover method is called again and again from the same parsing rule.
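To see why this spins forever, here is a minimal, self-contained simulation in plain Java. The EofStream class is a hypothetical stand-in for an exhausted token stream, not the real ANTLR/Xtext types; it only models the one property that matters: consume() at EOF does not advance the index, so the lastErrorIndex check never makes progress.

```java
// Toy model of the runaway recovery loop (illustrative names,
// not the real ANTLR/Xtext classes).
public class RecoverLoopDemo {
    // Stand-in for an exhausted token stream: consume() at EOF is a no-op.
    static class EofStream {
        int index = 5;            // stream is already at EOF
        final int eofIndex = 5;
        void consume() { if (index < eofIndex) index++; }
    }

    public static void main(String[] args) {
        EofStream input = new EofStream();
        int lastErrorIndex = -1;
        // Mimic the generated rule retrying after each failed match:
        // recover() consumes one token as a failsafe, but at EOF the
        // index never moves, so without a cap this would never terminate.
        for (int call = 1; call <= 5; call++) {  // cap the demo at 5 calls
            if (lastErrorIndex == input.index) {
                input.consume();                 // failsafe: no effect at EOF
            }
            lastErrorIndex = input.index;
            System.out.println("recover() call " + call + ", index=" + input.index);
        }
    }
}
```

The demo caps itself at five iterations; the real generated parser has no such cap, which is exactly the hang observed in the editor.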

One idea is to change/override the code to something like the following:

@Override
public void recover(IntStream input, RecognitionException re) {
	super.recover(input, re);
	// copied from default recover impl
	if ( state.lastErrorIndex==input.index() ) {
		// uh oh, another error at same token index; must be a case
		// where LT(1) is in the recovery token set so nothing is
		// consumed; consume a single token so at least to prevent
		// an infinite loop; this is a failsafe.
		input.consume();
		// if nothing has changed
		if ( state.lastErrorIndex==input.index() ) {
			state.failed = true;
		}
	}
}


This correctly breaks the parsing loop, but the calling methods cannot handle the resulting NoViableAltException-at-EOF situation.
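In the same toy-model style as above (hypothetical stand-ins, not the real ANTLR RecognizerSharedState), the extra check translates to: if the index has still not moved after the failsafe consume(), set a failed flag and stop retrying.

```java
// Sketch of how the extra "nothing changed" check terminates the loop
// (illustrative names, not the real ANTLR/Xtext classes).
public class RecoverBreakDemo {
    // Stand-in for an exhausted token stream: consume() at EOF is a no-op.
    static class EofStream {
        int index = 5;            // stream is already at EOF
        final int eofIndex = 5;
        void consume() { if (index < eofIndex) index++; }
    }

    // Returns how many times "recover" ran before the failed flag stopped it.
    static int recoverUntilFailed(EofStream input) {
        int lastErrorIndex = -1;
        boolean failed = false;
        int calls = 0;
        while (!failed) {
            calls++;
            if (lastErrorIndex == input.index) {
                input.consume();                 // failsafe consume
                if (lastErrorIndex == input.index) {
                    failed = true;               // nothing changed: give up
                }
            }
            lastErrorIndex = input.index;
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println("loop ended after "
                + recoverUntilFailed(new EofStream()) + " calls");
    }
}
```

Here the loop ends on the second call instead of running forever; the remaining problem, as noted, is that the callers of recover() do not expect the failed state at this point.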

A working bug fix is urgently needed.

Ideas: use a more sophisticated method to repair the input, or return an unparsed but usable result to the reconciler.

[Workaround] Infinite looping in partial parser [message #1183016 is a reply to message #1176278] Tue, 12 November 2013 15:30
Jürgen Mutschall
The problem can be solved by breaking the loop with state.failed = true, as described above.

To catch the unparsed result of the XYZInternalParser, you need to intercept parse. Since parse is final, you have to subclass XYZParser and call a modified copy of parse instead.
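For completeness, the subclass can be wired in via the language's Guice runtime module. This is only a sketch of the dependency-injection wiring: the names MyDsl and MyDslCustomParser are hypothetical placeholders for your own language and parser subclass; bindIParser is the binding that the generated abstract runtime module already declares.

```java
// MyDslRuntimeModule.java -- DI wiring only. "MyDsl" and
// "MyDslCustomParser" are hypothetical names; substitute your own
// language name and your subclass of the generated parser.
public class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {
    @Override
    public Class<? extends org.eclipse.xtext.parser.IParser> bindIParser() {
        // Use the subclass of the generated MyDslParser that adds the
        // recover()/extParse() workaround described in this thread.
        return MyDslCustomParser.class;
    }
}
```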

Before returning the parse result, check it for a failure and, if it failed, construct an artificial semantic node tree with an error message.


@Override
public void recover(IntStream input, RecognitionException re) {
	super.recover(input, re);
	if ( state.lastErrorIndex==input.index() ) {
		// uh oh, another error at same token index; must be a case
		// where LT(1) is in the recovery token set so nothing is
		// consumed; consume a single token so at least to prevent
		// an infinite loop; this is a failsafe.
		input.consume();
	}
	// still no improvement
	if ( state.lastErrorIndex==input.index() ) {
		state.failed = true;
	}
}





public final IParseResult extParse(String entryRuleName) throws RecognitionException {
	IParseResult result = super.parse(entryRuleName);

	if (this.state.failed) {
		// we have to catch a parsing failure and build a dummy semantic tree with error
		XtextTokenStream input = (XtextTokenStream) getInput();
		int lookAhead = input.getCurrentLookAhead();
		String completeContent = input.toString();
		if (completeContent == null)
			completeContent = "";
		appendAllTokens();
		ICompositeNode rootNode = getNodeModelBuilder().newRootNode(completeContent);
		// workaround: we know the top level rule of our grammar
		ICompositeNode currentNode = getNodeModelBuilder().newCompositeNode(
				getGrammarAccess().getUnitDefinitionRule(), lookAhead, rootNode);
		getNodeModelBuilder().setSyntaxError(currentNode, new SyntaxErrorMessage(
				"parsing error: compilation unit not correctly closed", null, null));
		ICompositeNode compressedRoot = getNodeModelBuilder().compressAndReturnParent(currentNode);
		result = new ParseResult(currentNode.getSemanticElement(), compressedRoot, true /* hadErrors */);
	}
	return result;
}