In my opinion, shell script integration "doesn't count".
I know how to write a shell script that runs A and then runs B.
That's the easy part.
The integration I'd like, and that we may not be able to achieve, is
to only have to configure the TCK once - where is the app server,
how do I deploy to it, where is the database server, where is the
LDAP server, where will I find log file output, how do I configure
which tests to run, etc.
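As a sketch, that single point of configuration might be one small properties file covering exactly the items listed above. Every key name and value here is illustrative, not an actual TCK property:

```shell
# Write a hypothetical unified TCK config file. All property names and
# values below are made up for illustration; they are not real TCK keys.
cat > tck.properties <<'EOF'
appserver.host=localhost
appserver.port=8080
appserver.deploy.command=bin/deploy.sh
database.url=jdbc:postgresql://localhost:5432/tckdb
ldap.url=ldap://localhost:389
log.dir=/var/log/tck
tests.include=jsonb,cdi,bv
EOF
```

The point is that a vendor would fill this in once, and both the legacy JavaTest harness and the newer Maven-based suites would read from it.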
After speaking with some of the CTS experts at IBM, I would contest the claim that we currently "only have to configure the TCK once".
All of the various configs for the TCK live in one giant (over 2.5k lines) config file.
I actually see it as a drawback, rather than a benefit, that the config for the entire platform TCK is in one place. For example, if I just want to run the TCK for one spec on my company's runtime, I have to sift through the entire 2.5k-line "common" config file to find and set the parts that my spec's TCK needs.
If we had unified configuration for "old" and "new" test suites,
that would make a big difference. At some level it's "only
software" so it should be possible, but I doubt that it's worth the
effort. I think it's unavoidable that I would need to write both an
Arquillian adapter and a CTS deployment adapter to allow me to deploy
to my new app server.
Also, remember that JavaTest has a GUI that allows you to configure
and run the TCK. Would we expect to integrate the new style TCKs
with that GUI? I wouldn't think so.
Scott Stark wrote on 2/25/20 9:08 AM:
We talked about a POC that had JSONB, CDI and BV run
as a unit. I think a next-level POC would be a Jenkins script
that illustrated how to incorporate that into a kickoff of
the CTS. At that point you have a repeatable platform
collection of TCK tests. Worst case, there is a top-level ant
or shell script that calls the Maven test run and a jtrun.
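That "worst case" wrapper could be sketched as a top-level shell script that first runs the Maven-based standalone TCKs and then hands off to the legacy JavaTest run. The module paths, Maven goals, and ant target below are assumptions for illustration; the sketch only echoes the commands a real wrapper would execute, since tool locations vary by environment.

```shell
#!/bin/sh
# Dry-run sketch of a top-level TCK wrapper. It records each step
# instead of executing it; a real wrapper would invoke the tools.
set -e
: > tck-run.log

run() {
    # Log the command that a real wrapper would execute here.
    echo "would run: $*" >> tck-run.log
}

# 1. Maven-based standalone TCKs (the JSONB/CDI/BV POC unit).
#    Module paths are hypothetical.
run mvn -f jsonb-tck/pom.xml verify
run mvn -f cdi-tck/pom.xml verify
run mvn -f bv-tck/pom.xml verify

# 2. Hand off to the legacy JavaTest-based CTS run (the "jtrun" step).
#    CTS_HOME and the ant target are hypothetical.
run ant -f "${CTS_HOME:-/opt/cts}/bin/build.xml" run.all

cat tck-run.log
```

A Jenkins job would then be little more than a checkout step followed by this script, which is what makes the platform collection repeatable.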