Lightweight standalone AST parser [message #1018400]
Wed, 13 March 2013 15:17
Eclipse User
Hello, is there a nice way of exporting an Xtext project as a Java archive with a standalone parser? When I export the initial "Hello name!" project as a runnable JAR with packed references, I get a 40 MB archive. That is really unusable.
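For context, by "standalone parser" I mean using the generated language infrastructure from plain Java, outside Eclipse. A minimal sketch, assuming the default "Hello name!" wizard project (language MyDsl in package org.xtext.example.mydsl, file extension .mydsl); the names would need adjusting for a different language:

// Minimal standalone parsing sketch; MyDslStandaloneSetup and its package
// are the defaults generated by the Xtext new-project wizard.
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.resource.XtextResourceSet;
import org.xtext.example.mydsl.MyDslStandaloneSetup;
import com.google.inject.Injector;

public class StandaloneParseExample {
    public static void main(String[] args) {
        // Registers the language and the EMF packages without a running workbench.
        Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();
        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
        // Load and parse a model file; the first content element is the AST root.
        Resource resource = resourceSet.getResource(URI.createURI("file:sample.mydsl"), true);
        EObject root = resource.getContents().get(0);
        System.out.println("Parsed root: " + root);
    }
}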
David Apltauer
Re: Lightweight standalone AST parser [message #1020178 is a reply to message #1020174]
Sun, 17 March 2013 09:04
Eclipse User
Hi
If you're really after optimising speed and size, have time to spare and
are only interested in parsing, you could pursue another approach.
Use an Xtext grammar for all its good development/editor properties, but
then do a grammarModel-to-grammarText transformation so that you produce
a much smaller (perhaps 10-fold) and somewhat faster (perhaps two-fold)
LPG grammar. I started work on this in
org.eclipse.ocl.examples.xtext2lpg but haven't had time to progress it
recently.
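To give a rough idea of the model side of such a transformation (a sketch only; this is not what xtext2lpg does, and the LPG emission itself is omitted): the *.xtext grammar is itself an Xtext language, so it can be loaded as an ordinary EMF model and its rules walked.

// Sketch: load a *.xtext file as a Grammar model and list its rules.
// A real grammarModel-to-grammarText transformation would map each rule
// to LPG syntax instead of just printing names.
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.AbstractRule;
import org.eclipse.xtext.Grammar;
import org.eclipse.xtext.XtextStandaloneSetup;
import org.eclipse.xtext.resource.XtextResourceSet;
import com.google.inject.Injector;

public class GrammarModelWalker {
    public static void main(String[] args) {
        // The Xtext grammar language's own standalone setup parses *.xtext files.
        Injector injector = new XtextStandaloneSetup().createInjectorAndDoEMFRegistration();
        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
        Resource resource = resourceSet.getResource(URI.createURI("file:MyDsl.xtext"), true);
        Grammar grammar = (Grammar) resource.getContents().get(0);
        for (AbstractRule rule : grammar.getRules()) {
            System.out.println(rule.getName());
        }
    }
}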
Just an idea.
Regards
Ed Willink
On 17/03/2013 12:52, Steve Kallestad wrote:
> Funny enough, I went through this a week or so ago. I haven't gotten
> to the point where I know exactly what is required, but I did manage
> to get it down to ~8MB and ~3K files. I'm sure there is a lot more to
> whittle down.
>
> Stripping down the jar is not an easy task. It's best to start with a
> very simple project.
>
> ProGuard (free/open source) is designed for this purpose, but it
> requires a lot of information to do the job AND you have to do a great
> deal of testing to make sure that something you stripped out isn't
> actually required.
>
> The path I'm going down is this:
>
> 1) leverage big jar with obvious file stripping for standalone parsing
>
> 2) In a future release, create an external parser that's more
> lightweight.
>
> At this time, I don't want to have to deal with language changes in
> two different areas and testing/syncing semantics between them. When
> I've gotten enough feedback for the next revision it will be time to
> optimize.
>
> I have a long-standing optimize-everything mentality, but the level of
> effort for coming up with an optimized solution here isn't worth the
> effort (at least for me). Regardless of how much you can pull out,
> the overhead of EMF is going to be significantly higher than a
> targeted solution.
> I have seen at least one open-source standalone Xtext app (the Puppet
> one) that looks very mature. I'm sure there are plenty more. There
> may be an example or two out there to work from.
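For reference, a shrink-only ProGuard configuration along the lines Steve describes might start out roughly like this; the jar names and the Main class are placeholders, and the keep rules would have to grow as testing reveals classes reached via reflection:

# Sketch of a shrink-only ProGuard config for a standalone parser JAR.
# parser-all.jar, parser-min.jar and org.example.parser.Main are placeholders.
-injars  parser-all.jar
-outjars parser-min.jar
-libraryjars <java.home>/lib/rt.jar

# Only shrink: Guice and EMF rely heavily on reflection, so skip
# optimisation and obfuscation to reduce the risk of breaking them.
-dontoptimize
-dontobfuscate
-keepattributes *Annotation*,Signature,InnerClasses

# Keep the entry point; add further -keep rules for anything loaded
# reflectively that testing shows is still needed.
-keep public class org.example.parser.Main {
    public static void main(java.lang.String[]);
}

# Silence warnings about optional dependencies that are never loaded.
-dontwarn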