Re: [cdt-dev] Core build-in-container question Was: Are managed projects a requirement/recommendation or the new build system is enough?
Apologies for the late reply here. See inline.
On Thursday, 10 December 2020 00:50:50 CET, Jeff Johnston wrote:
> The design of Core Build was essentially done from the ground up. IIRC,
> Doug's intention was to simplify build and remove all the fluff that MBS
> caused and that wasn't being maintained any more. So, Doug came up with
> basic functionality and asked developers to exercise the model and
> suggest additions where functionality was missing.
> A user of Core Build projects is expected to use the launch bar to select
> a configuration. A user can select to run or debug and, with the
> Container support, can choose to build/run/debug in a container. In
> addition, profiling is supported via the Profiling Tools menu. A user
> can choose to build/run/debug the same project in multiple containers,
> same as under MBS. Each configuration has a separate build directory.
> The indexer needs to know the build commands to figure out the include
> files and macros required, as these need to be copied to the host for
> the indexer to correctly index the code (e.g. the container may have a
> completely different version/set of tools/headers). Thus, knowledge of
> the toolchain used is required in addition to the build commands, so
> that, knowing how a resource is built and with what options, the command
> can be used to figure out what macros and header paths are implied.
> The build system cannot be treated as a black box for indexing to work
> properly. It would be possible to add additional properties tabs to
> support manual specification of include paths and macros, but some
> paths/macros are specific to compiling a particular file, so it would
> get complex to force a user to supply all of this info. There is support
> in CMake and Meson Core Build projects to look at compile_commands.json.
The compile_commands.json parser used in CMake-Build already knows how to
parse the command-line arguments of at least five compilers and extract the
information the indexer wants.
Given that, do we really need to force users to specify/define a thing called
a toolchain for in-container builds?
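As a rough sketch of the extraction step such a parser performs (hypothetical class and method names, not CDT's actual API), pulling include paths and macro definitions out of one command line might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: extract -I (include path) and -D (macro definition)
// arguments from one compile_commands.json command line. A real parser must
// also handle shell quoting, response files (@file), -isystem, -include,
// and compiler built-ins.
public class CommandScanner {

    public static List<String> extract(String command, String flag) {
        List<String> values = new ArrayList<>();
        String[] tokens = command.trim().split("\\s+"); // naive tokenization
        for (int i = 0; i < tokens.length; i++) {
            if (tokens[i].startsWith(flag)) {
                if (tokens[i].length() > flag.length()) {
                    values.add(tokens[i].substring(flag.length())); // -I/path
                } else if (i + 1 < tokens.length) {
                    values.add(tokens[++i]);                        // -I /path
                }
            }
        }
        return values;
    }

    public static void main(String[] args) {
        String cmd = "gcc -I/opt/xkrlmpf/include -DDEBUG=1 -O2 -c xyz.c -o xyz.o";
        System.out.println(extract(cmd, "-I")); // include paths
        System.out.println(extract(cmd, "-D")); // macro definitions
    }
}
```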
Concerning CMake-Build, we do not need a user-provided 'toolchain' to answer
the indexer's question: for source file xyz.c, what macros and header paths
are implied?
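For illustration, a typical compile_commands.json entry (all paths hypothetical) already carries everything needed to answer that question:

```json
[
  {
    "directory": "/workspace/build",
    "command": "gcc -I/opt/xkrlmpf/include -DDEBUG=1 -O2 -c ../src/xyz.c -o xyz.o",
    "file": "../src/xyz.c"
  }
]
```

The `-I` and `-D` arguments of the `command` string yield the header paths and macros for exactly this file, without any user-defined toolchain description.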
But the indexer does not just want include paths and macros; it also wants to
parse the header files themselves. As Jeff stated, these have to be copied
to/mounted into the host system from the container in order to make them
visible to the indexer.
AFAIK, the container-build support by default copies the in-container headers
in /usr/include and /usr/local/include to a location accessible for the
indexer. (Correct me if I am wrong).
This is feasible, but what about projects that add e.g. /opt/xkrlmpf/include
to the include directories? Copy anything below /opt to the host?
The compile_commands.json parser could help here; it knows exactly which
include directories below /opt are referenced. All that is missing is an
interface to pass that information to the in-container-build launcher.
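A minimal sketch of what such an interface could feed the launcher (hypothetical names, not an existing CDT API): given the include directories referenced in compile_commands.json, report only those not already covered by the default copy locations:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: determine which referenced include directories are
// NOT under the locations copied by default (/usr/include,
// /usr/local/include) and therefore need an extra copy/mount from the
// container.
public class ExtraIncludeDirs {

    private static final List<String> DEFAULT_COPIED =
            List.of("/usr/include", "/usr/local/include");

    public static List<String> needingCopy(List<String> referencedDirs) {
        return referencedDirs.stream()
                .filter(dir -> DEFAULT_COPIED.stream()
                        .noneMatch(prefix -> dir.equals(prefix)
                                || dir.startsWith(prefix + "/")))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> dirs = List.of(
                "/usr/include", "/usr/local/include/foo", "/opt/xkrlmpf/include");
        System.out.println(needingCopy(dirs)); // only the /opt directory remains
    }
}
```

This way only the directories a project actually uses would be transferred, instead of copying everything below /opt to the host.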
gcc, gcc-derived cross-compilers, and clang work best here just because they
support built-ins detection, not because the parser supports them better.
Cd wrttn wtht vwls s mch trsr.