Re: Parsing Xtext-File with Xtext [message #1697511 is a reply to message #1697499] |
Thu, 04 June 2015 13:45 |
Tamara Machowa Messages: 22 Registered: June 2015 |
Junior Member |
I would like a generator that generates an SSLR plugin. It consists of 4 Maven projects, and in one of them I have to define the grammar that will be used by SSLR and later by Sonar. Therefore I need the AST of the grammar file itself, or something similar, so that I can iterate over all rules and keywords.
What does a fragment look like? Could you give me an example?
Edit: I looked at the ValidatorFragment and I think I understand what you mean. But how should I provide my fragment? Should I create an Xtend project and add it to the dependencies of the Xtext project from which I would generate an SSLR project? How does that work?
[Updated on: Thu, 04 June 2015 13:57]
Re: Parsing Xtext-File with Xtext [message #1697627 is a reply to message #1697598] |
Fri, 05 June 2015 13:14 |
Tamara Machowa Messages: 22 Registered: June 2015 |
Junior Member |
For those who have a similar use case, here is my current solution:
1) Create a plugin project that will contain the new fragment.
2) Create the fragment class:
package myCompany.fragments.sslr

import com.google.inject.Inject
import org.eclipse.xtend.lib.annotations.Accessors
import org.eclipse.xtext.Grammar
import org.eclipse.xtext.generator.Generator
import org.eclipse.xtext.generator.Xtend2ExecutionContext
import org.eclipse.xtext.generator.Xtend2GeneratorFragment

class SSLRGeneratorFragment extends Xtend2GeneratorFragment implements IStubLexerless {

    SSLRGrammarGenerator grammarGenerator

    @Accessors boolean generateLexerless

    @Inject
    def void init(SSLRGrammarGenerator.Service stubGeneratorService, Grammar grammar) {
        this.grammarGenerator = stubGeneratorService.createGenerator(grammar)
    }

    def String cls(Class<?> clazz) {
        clazz.name + ".class"
    }

    override generate(Xtend2ExecutionContext ctx) {
        ctx.writeFile(Generator.SRC_GEN, grammarGenerator.stubFileName,
            grammarGenerator.generateStubFileContents(generateLexerless))
    }
}
Generator.SRC_GEN makes the fragment write the desired file into the src-gen folder of the Xtext project. Again my question: how could I configure this to write into the workspace instead? So far I haven't gotten very far with debugging the code.
3) Create the stub generator class:
package myCompany.fragments.sslr

...
import org.sonar.sslr.grammar.GrammarRuleKey
import org.sonar.sslr.grammar.LexerfulGrammarBuilder
import org.sonar.sslr.grammar.LexerlessGrammarBuilder

import static extension org.eclipse.xtext.GrammarUtil.*

@FinalFieldsConstructor class SSLRGrammarGenerator {

    @Accessors(PUBLIC_GETTER) static class Service {
        @Inject Naming naming

        def SSLRGrammarGenerator createGenerator(Grammar grammar) {
            new SSLRGrammarGenerator(this, grammar)
        }
    }

    val SSLRGrammarGenerator.Service service
    val Grammar grammar
    val Set<AbstractRule> calledRules = Sets.newHashSet()
    val HashMap<String, String> punctuators = Maps.newHashMap()

    def String getStubSimpleName() {
        '''«service.naming.toSimpleName(grammar.name)»Grammar'''
    }

    def String getStubPackageName() {
        '''«service.naming.toPackageName(grammar.name)».grammar'''
    }

    def String getStubQualifiedName() {
        '''«stubPackageName».«stubSimpleName»'''
    }

    def String getStubFileName() {
        '''«service.naming.asPath(getStubQualifiedName)».java'''
    }

    def String getStubSuperClassName() {
        return GrammarRuleKey.name
    }

    def setFileHeader(String fileHeader) {
        service.naming.fileHeader = fileHeader
    }

    // now we can write some templates to generate the desired output
    def String generateStubFileContents(boolean generateLexerless) {
        val extension file = new JavaEMFFile(grammar.eResource.resourceSet, stubPackageName, service.naming.fileHeader)
        file.imported(GrammarRuleKey)
        if (generateLexerless)
            file.imported(LexerlessGrammarBuilder)
        else
            file.imported(LexerfulGrammarBuilder)
        val grammarBuilder = '''«IF generateLexerless»LexerlessGrammarBuilder«ELSE»LexerfulGrammarBuilder«ENDIF»'''
        val abstractRulesWithoutTerminals = grammar.allRules.filter[!(it instanceof TerminalRule)]
        val usedRulesFinder = new UsedRulesFinder(calledRules)
        usedRulesFinder.compute(grammar)
        val terminalRules = grammar.allTerminalRules.filter[isUsed && !#['ML_COMMENT', 'SL_COMMENT'].contains(name)]
        val keywords = grammar.allParserRules
            .map[eAllContents.filter(Keyword)]
            .map[toList].filter[!nullOrEmpty].flatten.distinct
        getPunctuators(keywords.map[value].filter[!isAlphanumeric], punctuators)
        file.body = '''
            public enum «stubSimpleName» implements «stubSuperClassName.imported» {
                «IF !terminalRules.nullOrEmpty»
                    // Terminal Rules
                    «IF terminalRules.size > 1»
                        «terminalRules
                            .effect[name = if (name.equals('WS')) 'WHITESPACE' else name]
                            .map[name.toUpperCase].join(',\n')»«IF abstractRulesWithoutTerminals.nullOrEmpty && punctuators.empty»;«ELSE»,«ENDIF»
                    «ELSEIF abstractRulesWithoutTerminals.nullOrEmpty && punctuators.empty»
                        «terminalRules.head.name.toUpperCase»;
                    «ELSE»
                        «terminalRules.head.name.toUpperCase»,
                    «ENDIF»
                «ENDIF»
                «IF !punctuators.empty»
                    «punctuators.values.join(',\n')»«IF abstractRulesWithoutTerminals.nullOrEmpty»;«ELSE»,«ENDIF»
                «ENDIF»
                «IF !abstractRulesWithoutTerminals.nullOrEmpty»
                    // NonTerminal Rules
                    «IF abstractRulesWithoutTerminals.size > 1»
                        «abstractRulesWithoutTerminals.map[name.toFirstUpper].join(',\n')»;
                    «ELSE»
                        «abstractRulesWithoutTerminals.head.name.toFirstUpper»;
                    «ENDIF»
                «ENDIF»

                public static «grammarBuilder» createGrammarBuilder() {
                    «grammarBuilder» b = «grammarBuilder».create();
                    «IF !abstractRulesWithoutTerminals.nullOrEmpty»
                        // NonTerminal Rules
                        «var i = 0»
                        «FOR r : abstractRulesWithoutTerminals»
                            «IF i != 0»
                                «r.genRule»
                            «ELSE»
                                «r.genRoot»
                            «ENDIF»
                            «{i++; null}»
                        «ENDFOR»
                    «ENDIF»
                    «IF !terminalRules.nullOrEmpty»
                        // Terminal Rules
                        b.rule(WHITESPACE).is(b.regexp("\\s*+"));
                        «FOR rule : terminalRules.filter[!name.equals('WHITESPACE')]»
                            «rule.genRule»
                        «ENDFOR»
                    «ENDIF»
                    «IF !punctuators.empty»
                        // Punctuators
                        «FOR keyword : punctuators.toPairs»
                            «keyword.genRule»
                        «ENDFOR»
                    «ENDIF»
                    b.setRootRule(«grammar.rules.head.name»);
                    return b;
                }
            }
        '''
        return file.toString
    }
...
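For orientation, this is roughly the shape of the Java file the template above emits, sketched for a hypothetical toy grammar with terminal rules WS (renamed to WHITESPACE) and ID, one keyword punctuator, and two parser rules. All names here are made up, and `GrammarRuleKey` is declared locally as a stand-in for `org.sonar.sslr.grammar.GrammarRuleKey` so the sketch compiles on its own; the real output implements the actual SSLR interface and also contains the `createGrammarBuilder()` method produced by the template.

```java
// Local stand-in for org.sonar.sslr.grammar.GrammarRuleKey, included only
// so this sketch is self-contained; the generated file imports the real one.
interface GrammarRuleKey {}

// Hypothetical result of generateStubFileContents for a toy grammar.
public enum MyDslGrammar implements GrammarRuleKey {
    // Terminal Rules
    WHITESPACE,
    ID,
    // Punctuators
    SEMICOLON,
    // NonTerminal Rules
    Model,
    Greeting;
    // ... the real file would also contain createGrammarBuilder() here
}
```

Each enum constant doubles as a rule key that the Sonar side can pass to the grammar builder.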
4) Because I wanted another property to be settable in the workflow, I also had to implement this:
package myCompany.fragments.sslr;

public interface IStubLexerless {
    boolean isGenerateLexerless();
    void setGenerateLexerless(boolean isGenerateLexerless);
}
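As far as I understand it, the workflow engine configures components reflectively: an assignment like `generateLexerless = true` in the workflow is translated into a call to a matching JavaBeans-style setter on the fragment, which is why the getter/setter pair above matters. A minimal stand-alone sketch of that mechanism, using a hypothetical Fragment class rather than the real one:

```java
import java.lang.reflect.Method;

public class WorkflowPropertyDemo {

    // Hypothetical stand-in for the real fragment class.
    public static class Fragment {
        private boolean generateLexerless;
        public void setGenerateLexerless(boolean v) { generateLexerless = v; }
        public boolean isGenerateLexerless() { return generateLexerless; }
    }

    public static void main(String[] args) throws Exception {
        Fragment f = new Fragment();
        // What the workflow effectively does for: generateLexerless = true
        Method setter = Fragment.class.getMethod("setGenerateLexerless", boolean.class);
        setter.invoke(f, true);
        System.out.println(f.isGenerateLexerless()); // prints "true"
    }
}
```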
5) Add this plugin as a dependency to the Xtext project you want to extend, and add the following lines to the workflow:
import myCompany.fragments.*

// SSLR-Fragment
fragment = sslr.SSLRGeneratorFragment auto-inject {
    generateLexerless = true
}
Works like a charm - as long as you don't want to generate a whole project.