I am not taking it as criticism - I have not even started down this path, and I hope I have not muddied the waters with anything I have said. I am unlikely to be working on the implementation of this model, so I have no opinions yet. I hope to be able to help facilitate this work and make sure it is sustainable and not overly dependent on a single person/organisation.
(PS See the agenda for next week's meeting - GitHub is a topic for discussion)
After re-reading your vision I have the impression that we have
a different understanding of how the modeling should be applied here.
Please don't take it as criticism - the subject is too big and
complex and I may be wrong in my interpretation of your ideas - but
let me share my thoughts.
I assume we will use EMF - best in class today and native to the
Eclipse ecosystem. And EMF is most effective with a declarative
approach. It does support "methods" on objects, but that way will
quickly lead you into a swamp of "@generated NOT".
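To make that concrete, here is roughly what the swamp looks like. The class and method names below are made up, but the "@generated"/"@generated NOT" Javadoc tags are the standard EMF convention for protecting hand-written bodies from regeneration:

    import org.eclipse.emf.ecore.impl.EObjectImpl;

    // Hypothetical excerpt of an EMF-generated implementation class.
    public class ToolImpl extends EObjectImpl {

        protected String executable = "gcc";

        /**
         * @generated
         */
        public String getExecutable() {
            return executable;
        }

        /**
         * Hand-written body for a modeled operation; the tag was flipped to
         * "@generated NOT" so the next regeneration does not overwrite it.
         * Business logic buried like this is hard to review, test and evolve.
         * @generated NOT
         */
        public String commandLine(String input) {
            return getExecutable() + " -c " + input + " -o " + input + ".o";
        }
    }

Once enough behaviour hides behind "@generated NOT", the model stops being the source of truth - that is the swamp.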
A good example of the right EMF usage is Sirius: after years of
"GMF Gen"-based exercises they found a way to switch from "direct
model implementation" to a kind of "interpretation of modeled
structures".
Maybe I'm not clear enough in one sentence - this is a big topic,
more suitable for a book - but it is very important to understand the
limits of effective applicability of a particular tool, EMF in our
case.
I'll try to describe it in other words: we should not model "how",
but "what". I know that you are familiar with Maven: it asks us
"what" we have and then applies a set of "how" plugins according
to a more or less fixed algorithm. Gradle goes further and provides
much more customization, but still with a good focus on the declarative
approach. But if you remember Ant - it is completely different: it
requires you to specify each and every step, which is a hardly
scalable and hardly maintainable approach.
So, the most important part is to carefully describe and effectively
structure the "metadata" that the "router" will operate on, not to
model the "router" algorithm. Why? Because a "router" that operates on
metadata will be most effectively implemented either with OSGi
services, or by an existing tool, or by a chain of tools, but not
with EMF. For example, "make" works just perfectly and we are not going
to re-implement make, right? But we would like to avoid creating
make files manually, yes. Well, not a problem: we can generate them
if we have sufficient metadata, or generate CMake files, or generate
manifests for other build systems. It will be relatively easy if we
have good metadata that covers every aspect we may need to care
about: host capabilities, target hardware, tool options, project
structure, whatever else we need.
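A rough sketch of what I mean, with invented names (the real metadata classes would be EMF-generated from the Ecore metamodel, not hand-written like here): the model only declares "what" we have, and a small generator derives the make rule from it.

    import java.util.List;

    // Hypothetical stand-ins for the modeled metadata ("what").
    record ToolOptions(String compiler, List<String> flags) {}
    record SourceFile(String path) {}

    public class MakefileSketch {

        // Derive one make rule ("how") from purely declarative metadata.
        static String rule(ToolOptions options, SourceFile source) {
            String object = source.path() + ".o";
            return object + ": " + source.path() + "\n"
                    + "\t" + options.compiler() + " "
                    + String.join(" ", options.flags())
                    + " -c " + source.path() + " -o " + object + "\n";
        }

        public static void main(String[] args) {
            ToolOptions gcc = new ToolOptions("gcc", List.of("-O2", "-Wall"));
            System.out.print(rule(gcc, new SourceFile("input.cpp")));
        }
    }

The same metadata could just as well feed a CMakeLists.txt generator or an IDE indexer - the "how" stays outside the model.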
With EMF-based metadata available via OSGi services - we already
successfully tested this architecture with Eclipse Passage -
"routers" can be developed in parallel according to your pictures.
What is important is that the same metadata will be available during
the whole project lifecycle:
create/export/import/refactor/build/run/debug/etc.
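As a sketch only - the service and method names below are invented, while the annotations are the standard OSGi Declarative Services ones - a "router" would simply consume the metadata service and stay completely independent of how the model is implemented or persisted:

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // Hypothetical service interface: one access point to the EMF-based
    // metadata for the whole lifecycle (create/build/run/debug/...).
    interface ToolchainMetadata {
        Iterable<String> sourceFiles();
        String toolOption(String toolId, String key);
    }

    // One of the "routers", developed in parallel with the others; it only
    // depends on the service contract, not on the metamodel internals.
    @Component
    public class MakefileRouter {

        @Reference
        ToolchainMetadata metadata;

        public void generate() {
            for (String source : metadata.sourceFiles()) {
                System.out.println(source + ".o: " + source);
            }
        }
    }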
Nope, it is not planned to be a multi-year project before we can
_start to_ get it into CDT; it may be done faster. However, it
depends on resources - as you can guess, this work is not funded ATM.
Also, from an organizational perspective it is not clear how it can get
into CDT, as it will have some "volume" with EMF-generated sources
and everything:
- it may be a separate proposal - in this case it will need to go
through incubation,
- or it may be a separate repository (GitHub, please, to get more
people involved easily) inside the CDT project to be utilized
part-by-part,
- or you may have other suggestions.
Regards,
AF
30.01.2020 1:56, Jan Baeyens wrote:
On 27/01/2020 at 14:47, Alexander Fedorov wrote:
Hello,
We (with my good friends @ArSysOp) are working on a Toolchain
metamodel prototype that should cover the needs of CDT and will
be flexible enough to be applied to sibling areas.
Of course there are a number of EMF usage tricks collected
over all these years that we are going to use during this
work. We know the model-to-text generation technologies from JET
to Acceleo, so we will be able to create whatever we need, not
only the default Java classes.
Great
Currently we are polishing things that were presented to the
Postgres community a year ago during the "IDE for stored
procedures" talk and discussion. Yes, it may sound curious, but
the "toolchain model" idea, with all the "target hardware
description" and "project structure model", appears to be very
similar across a lot of ecosystems.
This is what I would expect. Basically, in toolchain land there are
"files" and "tools". There is only a very limited set of
combinations you can have with "files" and "tools".
IMHO recursion at the file level does not exist and the tool
sequence is pretty obvious at the model level. However: correct me
if I'm wrong.
I have been thinking hard about how to model a gcc toolchain,
and below is a representation of how I see things now.
I haven't yet found a good name for what I called a router. It is
absolutely crucial to understand the router in order to understand
the model. All the other stuff is pretty obvious.
In its simplest form the router will create a list of "input file;
output file" pairs (like "input.cpp" "input.cpp.o"). For each item in
the list the tool can add a command, which results in "input file;
output file; command" - in the make world this is called a "rule".
A more complicated router may have multiple input files and
multiple output files, which means that the router produces a
list of "list of input files; list of output files".
Also, the tool can produce more than one command, so the output of
a router/tool combination should be a list of "list of input
files; list of output files; ordered list of commands".
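To pin down the terms, here is a rough sketch of those shapes in Java; all names are mine and purely illustrative, not a proposal for the actual metamodel:

    import java.util.List;

    // What a router produces: which files go in and what the outputs are named.
    record Routing(List<String> inputs, List<String> outputs) {}

    // A generalized make "rule": inputs, outputs and an ordered list of commands.
    record Rule(List<String> inputs, List<String> outputs, List<String> commands) {}

    // The router only decides the file flow; it adds no commands itself.
    interface Router {
        List<Routing> route(List<String> availableFiles);
    }

    // The tool turns one routing into the ordered commands that realize it.
    interface Tool {
        List<String> commands(Routing routing);
    }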
The router can also filter input files. For instance, in the
example above the output of the "c to object" tool is sent to
both the archiver and the linker. It is up to the "collection of
routers" to make sure the files are processed in the right
place. Given a specific file, the model supports all options
(archiver; linker; archiver and linker; none) and assumes the
need for coding to support this construction.
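Reusing the hypothetical types above, such filtering could be a thin wrapper around any router, e.g. one instance feeding object files to the linker and another to the archiver:

    import java.util.List;
    import java.util.function.Predicate;

    // Wraps another router and only routes the files it accepts.
    record FilteringRouter(Predicate<String> accepts, Router delegate) implements Router {
        public List<Routing> route(List<String> availableFiles) {
            return delegate.route(availableFiles.stream().filter(accepts).toList());
        }
    }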
The builder can simply go from left to right, processing the
router/tool columns one by one: ask the routers for the "list of
input files; list of output files", check whether a command
needs to be run, and if so ask the tool to generate the
"ordered list of commands" to run. Run these and move on to the
next column.
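Again only a sketch with the made-up types from above, but the builder loop I have in mind is essentially this:

    import java.util.ArrayList;
    import java.util.List;

    // One column of the model: a router paired with its tool.
    record Column(Router router, Tool tool) {}

    public class Builder {

        // Left to right: a column is fully evaluated before the next one starts.
        static List<Rule> build(List<Column> columns, List<String> initialFiles) {
            List<Rule> rules = new ArrayList<>();
            List<String> files = new ArrayList<>(initialFiles);
            for (Column column : columns) {
                for (Routing routing : column.router().route(files)) {
                    // a real builder would first check whether the outputs are up to date
                    List<String> commands = column.tool().commands(routing);
                    rules.add(new Rule(routing.inputs(), routing.outputs(), commands));
                    files.addAll(routing.outputs()); // outputs feed the later columns
                }
            }
            return rules;
        }
    }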
Note 1: the router explicitly names the output file(s).
Note 2: when implementing this model it seems logical to have 2
generic routers ("1 on 1", e.g. compile, and "all on 1", e.g.
archive). "1 on 1" would need a "name provider" (append .o) and
"all on 1" would need an output file (archive.ar); see the sketch
after these notes. The model above needs "custom build" routers
next to the generic ones.
Note 3: looking at the linker input files (which can be object
files and archive files), one can argue that the "list of input
files" needs to be a "list of lists of input files". One can
also argue it is up to the tool to "triage" the files. I haven't
decided yet what I think is best.
Note 4: this model assumes there is no order of commands in the
column. It behaves as if all the commands from one column can be
executed at the same time.
Note 5: the model assumes that all actions in a column must
have finished before the next column can be
"evaluated/executed". This allows "non file based commands"
like "pre build actions" and "post build actions" to have their
own column, as demonstrated below.
Note 6: I understand that notes 4 and 5 are a serious constraint
for a more generic modelling tool.
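For completeness, the two generic routers from note 2 could look like this (same hypothetical types as before; the name provider and the fixed output name are just examples):

    import java.util.List;
    import java.util.function.UnaryOperator;

    // "1 on 1" (e.g. compile): one routing per input file, output name from a provider.
    record OneOnOneRouter(UnaryOperator<String> nameProvider) implements Router {
        public List<Routing> route(List<String> availableFiles) {
            return availableFiles.stream()
                    .map(f -> new Routing(List.of(f), List.of(nameProvider.apply(f))))
                    .toList();
        }
    }

    // "all on 1" (e.g. archive): every input file feeds one fixed output file.
    record AllOnOneRouter(String outputFile) implements Router {
        public List<Routing> route(List<String> availableFiles) {
            return List.of(new Routing(List.copyOf(availableFiles), List.of(outputFile)));
        }
    }

Usage would then be something like new OneOnOneRouter(f -> f + ".o") for the compiler column and new AllOnOneRouter("archive.ar") for the archiver column.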
The plan is to share it on GitHub as a set of Ecore metamodels
and then go through several iterations of minor "releases" until
we find a good solution. Then we can have it either as a
separate Eclipse project or as a part of CDT - this may be
decided later.
Do you think this will be a multi-year or multi-month project before
we get it into CDT?
Regards,
AF
23.01.2020 23:35, Jonah Graham wrote:
I have no experience with EMF modelling so I don't
know whether it is capable enough. I also have no experience
getting this into Java classes. Help will be needed if
we go this way. Learning EMF is now on my todo list :-)
Is there some doc from the discussion between Alexander
Federov and William Riley?
P.S. Well, as we started the introduction session ... actually,
my family name is Fedorov, from the Greek "Θεόδωρος"; you may know it as
the Latin "Theodor", which means "God-given" :)