Eclipse Community Forums
Home » Archived » IMP » Semantic check
Semantic check [message #21673] Tue, 24 June 2008 14:45 Go to next message
Eclipse User
Originally posted by: lars.ohlen.remove.tietoenator.com

Hi IMP forum,

We are about to create an IDE for a DSL. The requirements pretty much force
us to implement all the front-end parts of the compiler stack.
For the lexer and parser we are currently looking into LPG (and IMP),
but how should we handle the semantic checks? I can see
two options: the first is to add some extra validation code that gets
generated into the generated AST, and the other is to perform
the analysis on the AST in a separate pass. I would really like a hint on
which method to use, and also some sample code.

Any ideas or help in this question appreciated.

BR

Lars
Re: Semantic check [message #22033 is a reply to message #21673] Tue, 01 July 2008 03:15 Go to previous messageGo to next message
Eclipse User
Originally posted by: bo.hansson.el.ahrairah.se

It looks like nobody knows the answer to your questions! I confess I do
not know myself, but surely others do.

Perhaps the people who do know work with other parsers (like ANTLR); I'd
sure hate to think that that is the case.

Good luck.

Bo
Re: Semantic check [message #22249 is a reply to message #21673] Wed, 02 July 2008 17:48 Go to previous message
Stan Sutton
Messages: 121
Registered: July 2009
Senior Member
Hi Lars,

I don't really have much experience with this, so I'm not really the
person to provide an answer, but here is what I think. You really have
both options available to you. You can experiment with both of them and
may want some combination of the two. I don't believe that there is any
one right or wrong answer. I think the trade-off can be based on issues
such as performance, size of data structures, ease of development and
extensibility, etc.

The LPG grammar templates that are provided with IMP include an example
grammar and show examples of code that is added to some AST classes.
This can be done simply.

The code in the example does not perform semantic checking; rather, it
generates data that can be used in semantic checking (and also in
support of various IDE services). In particular, this code maintains
symbol-table information and allows each identifier AST node to refer to
its declaration AST node. That information is then directly available
in the AST when we implement an IDE service like reference resolution or
hover help. (If this code were not generated into the ASTs, we would
have to compute a symbol table every time we wanted to find the
declaration of a reference--much more programming and much more runtime
cost.)
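To make that concrete, here is a minimal sketch of the idea of binding each identifier node to its declaration node as the AST is built. The class and field names here are purely illustrative, not the actual LPG-generated API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical AST node types illustrating declaration binding.
class DeclarationNode {
    final String name;
    DeclarationNode(String name) { this.name = name; }
}

class IdentifierNode {
    final String name;
    DeclarationNode declaration; // filled in during parsing/binding
    IdentifierNode(String name) { this.name = name; }
}

// A simple symbol table maintained while the parser builds the AST.
class SymbolTable {
    private final Map<String, DeclarationNode> symbols = new HashMap<>();

    void declare(DeclarationNode decl) { symbols.put(decl.name, decl); }

    // Bind an identifier to its declaration as soon as the node is created,
    // so later IDE services (hover, reference resolution) need no extra pass.
    void bind(IdentifierNode id) { id.declaration = symbols.get(id.name); }
}

public class BindingDemo {
    public static void main(String[] args) {
        SymbolTable table = new SymbolTable();
        DeclarationNode decl = new DeclarationNode("x");
        table.declare(decl);

        IdentifierNode use = new IdentifierNode("x");
        table.bind(use);

        // The use site now points straight at its declaration.
        System.out.println(use.declaration == decl); // true
    }
}
```

Once the link is stored in the node, hover help or reference resolution is a field access rather than a symbol-table recomputation.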

This reflects a general strategy that we advocate: as much as you can,
rely on the parser to generate information in the AST nodes that you may
need later on.

Different approaches can then be taken to analyze the AST. You might
put some of that code into the AST nodes, or you might have callouts
from the AST nodes, or you might have separate analysis services. If
the analyses might be a growing or evolving set, then you may want to
keep them separate from the AST so as to avoid having to regenerate your
AST types each time you update the analysis code. If you really want
the analysis in the AST, you can probably develop it separately and then
migrate it into the AST types once it is stable.
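For the "separate analysis services" option, a visitor over the AST is the usual shape. The node and visitor names below are illustrative only (IMP's generated classes differ), but they show how a check can live entirely outside the AST types:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative AST with a visitor hook, kept separate from the checks.
interface Node { void accept(Visitor v); }

class NumberLiteral implements Node {
    final int value;
    NumberLiteral(int value) { this.value = value; }
    public void accept(Visitor v) { v.visit(this); }
}

class DivideExpr implements Node {
    final Node left, right;
    DivideExpr(Node left, Node right) { this.left = left; this.right = right; }
    public void accept(Visitor v) {
        left.accept(v);
        right.accept(v);
        v.visit(this);
    }
}

interface Visitor {
    void visit(NumberLiteral n);
    void visit(DivideExpr d);
}

// One semantic check, living outside the AST classes: it can grow and
// change without regenerating the AST types.
class DivisionByZeroCheck implements Visitor {
    final List<String> problems = new ArrayList<>();
    public void visit(NumberLiteral n) { }
    public void visit(DivideExpr d) {
        if (d.right instanceof NumberLiteral
                && ((NumberLiteral) d.right).value == 0) {
            problems.add("division by constant zero");
        }
    }
}

public class CheckDemo {
    public static void main(String[] args) {
        Node tree = new DivideExpr(new NumberLiteral(1), new NumberLiteral(0));
        DivisionByZeroCheck check = new DivisionByZeroCheck();
        tree.accept(check);
        System.out.println(check.problems); // [division by constant zero]
    }
}
```

Adding a new check means adding a new Visitor implementation, with no change to the generated node classes.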

I'm sorry I can't give you more specific guidelines, I don't have much
experience with this myself. But I think you have a range of workable
approaches and a chance to tailor what you do so as to best meet your
needs. I hope that helps!

Regards,

Stan



Powered by FUDForum. Page generated in 0.07035 seconds