Re: [cdt-core-dev] Another Parser Document
> Sorry for being late.
I'll forgive you if you forgive me for holding up 1.2 ... :-)
> * You mention that you'll have to deal with different C/C++ dialects.
> So what will the parser understand by default:
> = For C++
> - ISO C++?
> - ARM ?
> = For C
> - K&R ?
> - ANSI C89 ?
> - ANSI C99 ?
> = GNU extensions
> - GNU C dialect ?
> - GNU C++ dialect ?
>
> It is clear (at least to me) that it is probably not a good idea to
> support those dialects within CDT, so what are the mechanisms to
> add new parsers for my dialects:
> - Watcom C
> - HP aCC
> - Solaris SunPro CC
I must admit that I am working far too hard on the current ANSI
C/C++ parser support to have a complete answer for how I wish
to handle variants. I have considered using subclasses; breaking
the parser up into smaller procedural classes that can be mixed
and matched at construction time; even using AspectJ and
aspect-oriented programming to provide a different way
of mixing and matching the functionality, in a way that allows
another parser implementation to be created and well
integrated via a currently undefined Parser extension point.
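To make the mix-and-match idea concrete, here is a minimal sketch of dialect rules plugged in at parser construction time. Every type and method name below is invented for illustration; as noted, the actual extension point is still undefined and none of this exists in CDT.

```java
import java.io.Reader;
import java.io.StringReader;

// Hypothetical sketch: dialect-specific behaviour factored into a
// small strategy object that the parser receives at construction.
interface DeclarationRules {
    boolean accepts(String keyword);
}

class IsoCppRules implements DeclarationRules {
    public boolean accepts(String keyword) {
        return keyword.equals("class") || keyword.equals("namespace");
    }
}

class GnuCppRules extends IsoCppRules {
    public boolean accepts(String keyword) {
        // The GNU dialect layers extra keywords on top of ISO C++
        return keyword.equals("__attribute__") || super.accepts(keyword);
    }
}

class Parser {
    private final DeclarationRules rules;

    Parser(Reader input, DeclarationRules rules) {
        this.rules = rules; // dialect chosen by the caller, not the parser
    }

    boolean isDeclarationKeyword(String kw) {
        return rules.accepts(kw);
    }
}
```

The same shape works whether the strategies come from subclassing, composition, or an extension point: clients construct against the interface and never see the dialect.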
Regardless of the "how can we do it", there still lies the
question of "where does it fit" and how do parser clients
(like Outline View, Search, Indexer) use different parsers
transparently. My hope from the Target Model discussion was
that we would be able to figure out for a given project what
the toolchain was, and thus provide mechanisms for describing
these tools (compilers) independent of what builder was being
used. This is becoming necessary even in the 1.2 timeframe:
compilers' built-in preprocessor macros (like Visual Studio's
_MSC_VER, which encodes the compiler version) need to be
specified for our parser to work correctly, yet they really
shouldn't be visible to the user in his build properties.
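As an illustration of the built-in-macro problem, a toolchain description could seed the scanner with the compiler's predefined macros before parsing begins. The class and method names here are invented for the example; CDT's actual scanner configuration may look quite different.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical: a scanner configuration holding macros the compiler
// predefines, so that #ifdef _MSC_VER blocks resolve the way the
// real preprocessor would, without the user ever seeing them in
// build properties.
class ScannerConfig {
    private final Map<String, String> predefined = new HashMap<String, String>();

    void define(String name, String value) {
        predefined.put(name, value);
    }

    // Would be consulted by the preprocessor when it meets #ifdef
    boolean isDefined(String name) {
        return predefined.containsKey(name);
    }
}
```

A Visual Studio toolchain description would inject _MSC_VER here; a GNU one would inject __GNUC__ and friends.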
Regarding what dialects we support: for 1.2, most likely all I
will have time to support is ISO C++/C99 (I'd like to get GNU in
there as well, but our schedule is already tight). Beyond GNU, we
will have to make the call as to which toolchains we want good
integrations with: given the amount of work it takes to write,
validate, and test a parser, I would say that if we do not provide
support for particular compilers, it is doubtful that we will get
volunteers to do it for us. But if we provide support to integrate
the Sun Workshop debugger, I would hope we also provide support
for the Sun CC compiler.
> * Dealing with translation units outside the workspace.
> Does the new architecture make this easier? Before,
> we had to jump through hoops, since the parser
> only accepted ITranslationUnit, which can only be implemented
> by the CoreModel.
The parser and scanner always accepted a Reader as input, and thus
could accept any string, stream or file input.
For simplicity's sake, we have kept the Parser isolated from Eclipse
& CDT constructs as much as possible. If we are capable of coming
up with a decent extension process for parsing variant C-esque
languages, this chunk of the core could have a life outside
of a running CDT toolset.
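Since the input is just a java.io.Reader, feeding the parser a string, a stream, or a file outside the workspace is the same operation. The tiny token counter below is a stand-in for real lexing, only to show that the parsing side never needs an ITranslationUnit or any workspace resource:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// Stand-in for the scanner: it only ever sees the Reader
// abstraction, so a StringReader, an InputStreamReader, or a
// FileReader over a file outside the workspace all work identically.
class TokenCounter {
    static int countTokens(Reader in) throws IOException {
        int tokens = 0;
        boolean inWord = false;
        for (int c = in.read(); c != -1; c = in.read()) {
            boolean ws = Character.isWhitespace(c);
            if (!ws && !inWord) {
                tokens++; // transition from whitespace to a token
            }
            inWord = !ws;
        }
        return tokens;
    }
}
```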
> * No word on the memory usage?
Things that used to crash Eclipse now just take a very long time.
The small victories!
JohnC