
Re: [cdt-dev] Future of Managed Build


> On 31 Jan 2020, at 03:11, Jan Baeyens <jan@xxxxxxxxxx> wrote:
> 
> ... I have asked the question here on the dev list: "Do we have the requirement of readable makefiles". I think -as long as the build works- nobody really cares whether it is make, cmake or ninja doing the build lifting. If you disagree please elaborate.

Well, I really don't know; I can't even say I disagree. It is just that, as you also said before, things are going in directions where I don't feel comfortable (like very generic modelling tools), or that I personally do not like (like CMake).

I have nothing against such theoretical approaches, as long as the results have reasonably simple practical applications, I don't have to use them directly, and they do not artificially complicate my run-time.


Below are some thoughts, in random order; feel free to disregard them if they do not seem useful.

---

When designing a build system, I identified two problems: how to define the tools used in the build (compiler, linker, archiver, etc.), and how to define the dependencies between the different parts (files?) that go into the build.

Basic builders may use implicit rules to define dependencies (for example deriving the tool from the file extension, and producing executables or libraries by default). More elaborate systems may need explicit rules.

Defining toolchain capabilities is another grey area, where things can be as simple as a set of static definitions, or as complex as defining plug-ins.
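To make this concrete, here is a minimal sketch (in JavaScript, since that is the language I use; all names are hypothetical) of a static toolchain definition plus an implicit rule that derives the tool from the file extension:

    // A minimal sketch; all names are hypothetical.
    // The toolchain is a set of static definitions.
    const toolchain = {
      name: 'gcc',
      tools: {
        cCompiler: { command: 'gcc' },
        cppCompiler: { command: 'g++' },
        linker: { command: 'g++' }
      }
    }

    // Implicit rules: derive the tool from the file extension.
    const implicitRules = {
      '.c': 'cCompiler',
      '.cpp': 'cppCompiler'
    }

    function toolFor (fileName) {
      const ext = fileName.slice(fileName.lastIndexOf('.'))
      return toolchain.tools[implicitRules[ext]]
    }

A more elaborate system would replace the static table with explicit, per-file rules, and possibly with plug-ins contributed at run-time.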

---

Regardless of what methods you choose, after you fully process a project you end up with an in-memory graph of all dependencies and all the details required to invoke each tool. Then you convert this graph into a configuration file (say a build.ninja) and hand it to the builder, hoping that it will recreate the same graph in memory and perform the build.

Since I'm biased towards ninja, in my tools I decided to organise the internal graph as closely as possible to how the ninja input data is structured, so that exporting the configuration file is simple.
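For example, a minimal sketch (hypothetical names, not my actual code) of a graph mirroring the two ninja notions, rules and build statements, and of its trivial export:

    const graph = {
      rules: {
        cc: { command: 'gcc -c $in -o $out' },
        ld: { command: 'gcc $in -o $out' }
      },
      builds: [
        { outputs: ['main.o'], rule: 'cc', inputs: ['main.c'] },
        { outputs: ['app'], rule: 'ld', inputs: ['main.o'] }
      ]
    }

    // Serialising the graph is a simple traversal, since the
    // structure already matches the build.ninja format.
    function exportNinja (g) {
      const lines = []
      for (const [name, rule] of Object.entries(g.rules)) {
        lines.push(`rule ${name}`, `  command = ${rule.command}`)
      }
      for (const b of g.builds) {
        lines.push(`build ${b.outputs.join(' ')}: ${b.rule} ${b.inputs.join(' ')}`)
      }
      return lines.join('\n') + '\n'
    }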

More than that, although in the initial releases I will continue to invoke ninja (or make), in a future release I'm considering going the extra step and running the build internally, since I already have the complete image of the project in memory.

---

As for configuration formats, there are two diametrically opposed solutions: one is to use dedicated languages with their own syntax (the most notorious being the CMake language), and the other is to use structured formats (like XML, JSON, etc.).

There are multiple contradictory requirements for these configuration formats.

One is to provide some flexibility, and here a Domain Specific Language (DSL) seems a good choice.

Another is to allow a machine to edit the configuration, and in this case a structured format is a reasonable choice.

Among DSL examples, I think the CMake language deserves special attention, since it is probably the most difficult to read, which likely reflects the way it was developed incrementally. Meson tried to fix this and proposes a much cleaner syntax; Meson also makes an interesting case by arguing for a DSL that is deliberately not Turing complete.

Both were designed to be edited by humans, not machines, so editing them automatically from GUI environments is practically impossible; all a GUI can do is present an edit window on the file itself.

It is worth mentioning that being editable by machine does not necessarily mean fancy settings pages, like the ones used by Eclipse to manage toolchain options (the ones in the GNU MCU Eclipse plug-ins being probably an extreme example). For example, Visual Studio Code is a JSON-only environment, where all configuration files are JSONs. It is up to each extension to decide how far it goes with a graphical interface to manage the configuration, but the environment always provides a basic edit view.

This may look simplistic, but it is not, since behind the scenes Visual Studio Code also allows a kind of schema to be defined for the JSON content, so when editing the file, auto-completion is available and all kinds of context-sensitive tooltips help you edit it.
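For example, associating a schema with a configuration file takes only a few lines in the VS Code settings (the file names here are hypothetical):

    {
      // settings.json: tell VS Code which schema describes our
      // (hypothetical) build configuration files.
      "json.schemas": [
        {
          "fileMatch": [ "my-build.json" ],
          "url": "./schemas/my-build-schema.json"
        }
      ]
    }

With this in place, any extension (or none at all) still gets validation, completion and hover documentation for free.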


Personally I decided to use Node.js to write my tools, and this makes JSON a natural choice. This also makes embedding JavaScript code a natural choice for any scripting, so I'll probably cover complex configurations with a mix of JSON definitions and JavaScript actions.
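As a purely hypothetical sketch of how such a mix might look (none of these names are final):

    // my-build.json: the declarative part.
    {
      "targets": {
        "debug": {
          "toolchain": "gcc",
          "preBuildActions": [ "./actions/generate-version.js" ]
        }
      }
    }

    // actions/generate-version.js: the scripted part, for things
    // too dynamic to express statically.
    module.exports = async (context) => {
      context.defines.push(`APP_VERSION="${new Date().toISOString()}"`)
    }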

---

So, I have no idea what the future of managed build in Eclipse will be, or even if there will be such a thing, but if you want to investigate, I wish you good luck.

However, once you identify a solution, I suggest you try to answer some simple questions, like: 

- is it possible to create projects and run builds outside Eclipse, for example from a script in a simple CI environment, like Travis? (see the sketch after this list)
- is it possible to extend this solution to create projects and run builds in another IDE, or even outside an IDE, in a more advanced editor like Visual Studio Code?
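For the first question, I imagine something as simple as a few commands in the CI script; the exact commands below are only an assumption, based on how my xPack tools are shaping up:

    # A hypothetical CI job, no IDE involved:
    npm install --global xpm   # get the command line tools
    xpm install                # install the toolchain and dependencies
    xpm run build              # perform the actual build (ninja/make underneath)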

---

I'll personally proceed with my xPack command line tools, and once they are functional I'll decide whether it is worth integrating them into Eclipse first, or going directly to Visual Studio Code.


Regards,

Liviu


