Just to be clear, I'm only reporting the peak numbers. Much of
the time, these CI systems are unused. I would like to think
that, when GlassFish runs its broader TCK tests -- and certainly
when that becomes more the norm (as we move GlassFish away from
being the sole TCK test CI for the Jakarta EE TCK project) --
the peak usage will go up as GlassFish attempts to run
certification tests: one or both of the Platform TCK and the
respective stand-alone TCKs.
My comment about not utilizing the maximum number of CPUs holds
for both the Platform TCK and GlassFish projects (neither seems
to be able to utilize all the resources that are claimed to be
allocated).
Could you clarify your point about "only running 4 machines"? I
had the impression that the CI resource requests would simply
claim more from their available pool if the pool were bigger.
Perhaps that is not the case.
GlassFish peaked at about 8 vCPUs. The TCK CI systems have
been "optimized" to take advantage of parallel test
operations. I'm not sure that similar optimizations have
been made for GlassFish.
The build using the GF Jenkins file first does the build and a
short, small TCK test in one run, then it runs various tests
from appserver/tests in parallel, each on their own server.
We had about 12 or so such tests before. They included EJB,
CDI, Web, Nucleus, two versions of quicklook and more.
When we moved to GlassFish 6 we disabled the tests temporarily,
and then later on activated only 4 of them again.
So currently GF's resource usage is likely lower because of
that.
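Roughly, a minimal sketch of that shape as a declarative
Jenkinsfile would look like the following. This is purely
illustrative: the stage names, node labels, and shell commands
are placeholders, not our actual GF Jenkins file.

    pipeline {
        agent none
        stages {
            // Stage 1: build GlassFish and run a short TCK test in the same run
            stage('Build + short TCK test') {
                agent { label 'build-node' }      // hypothetical node label
                steps {
                    sh 'mvn clean install'        // placeholder build command
                    sh './run-short-tck.sh'       // placeholder smoke-test script
                }
            }
            // Stage 2: run the appserver/tests suites in parallel,
            // each suite on its own server (only 4 currently re-activated)
            stage('appserver/tests suites') {
                parallel {
                    stage('Web') {
                        agent { label 'test-node' }
                        steps { sh './run-suite.sh web' }        // placeholder
                    }
                    stage('CDI') {
                        agent { label 'test-node' }
                        steps { sh './run-suite.sh cdi' }        // placeholder
                    }
                    stage('EJB') {
                        agent { label 'test-node' }
                        steps { sh './run-suite.sh ejb' }        // placeholder
                    }
                    stage('QuickLook') {
                        agent { label 'test-node' }
                        steps { sh './run-suite.sh quicklook' }  // placeholder
                    }
                }
            }
        }
    }

With this shape, the peak usage is roughly bounded by the number
of parallel suite stages, which fits the lower resource usage now
that only 4 of the suites are active.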
For a long time we had the issue that, when they all ran in
parallel, one of the servers would somehow get corrupted and GF
would then fail to start.
Since we're running only 4 machines in parallel we haven't
seen this.
Kind regards,
Arjan
It isn't clear to me why these numbers don't peak at the
maximum available.
My guess is, we'd start seeing resource strain in projects
that don't have additional resource packs -- especially if
they attempt to run the Platform test jobs with any
frequency. We'll probably have to monitor this and see if we
need to make adjustments as more component projects pick up
their TCK test load.
We may want to continue using the resources in the Platform
TCK project since that project has a large pool of resource
packs -- but we can decide that if we actually encounter
problems. These loads are quite transient -- weeks will go
by with little to no use -- then, when projects head toward
the finish line, the loads go way, way up.
-- Ed
On 9/3/2021 10:49 AM, Scott Marlow wrote:
On 9/2/21 4:41 PM, Ed Bratt wrote:
If there are no assertion changes (i.e. the tests don't
change) then re-release can probably be avoided. These
updates can use the micro-version ID bumps.
I agree, with "no assertion changes" == what you stated at
the bottom: "If the Platform specification (or any other
spec that hasn't finished) imposes new or changed
requirements (adds or changes an assertion) on RESTful Web
Services after 3.1 is released -- that forces a new
release."
RESTful Web Services will need to verify itself with a
compatible implementation that passes both the newly
refactored TCK and a TCK that comes from the Platform
TCK project.
Agreed, since the RESTful Web Services TCK currently only
contains new tests and one Platform level TCK test. I
think that this will change when the RESTful Web Services
TCK can run all of the existing Platform TCK tests
(including Jakarta EE Platform + Standalone Java SE
tests).
Personally, I would like to see both Jersey and
GlassFish (with the new RESTful Web Services
implementation) passing these TCKs prior to the 3.1
ballot.
I have no preference as to which implementation is used
but would like to see as many implementations (at least
one) pass as possible.
My hope is that the RESTful Web Services project, either
on its own or through an arrangement with the Platform
TCK project, will run these TCKs against GlassFish (or,
if another CI is willing to provide this service, great)
and Jersey periodically -- and hopefully before either of
the TCK releases is updated, even if it is just a micro
update.
That way, if we find a change breaks the previously
compatible implementation, we can investigate and
correct it if necessary.
Good point -- should this be coordinated with all of the
compatible implementation teams via one of our processes?
I have thought about reaching out to (nagging) the
compatible implementations when they create compatibility
requests, asking them to join https://accounts.eclipse.org/mailing-list/jakartaee-tck-dev
so we could interact with them directly, but I am not sure
whether mailing list discussions are enough to cover this need.
If the Platform specification (or any other spec that
hasn't finished) imposes new or changed requirements
(adds or changes an assertion) on RESTful Web Services
after 3.1 is released -- that forces a new release.
I think that makes sense and does create a wall against
changing requirements for a SPEC API after that SPEC API
has completed its ballot. The wall doesn't prevent
further changes but makes it obvious that there is a cost
to changing requirements late in the process.
Is this helpful?
Yes, very much so, thank you!
-- Ed
On 9/2/2021 8:28 AM, Scott Marlow wrote:
Hi,
[1][2][3][4][5][6][7] are from the "Process for TCK
service releases that include TCK updates for running
signature tests on newer JDK versions..." discussion
thread. I am starting a new thread with the subject
that Ed Bratt suggested as I agree that we need a new
thread for this sub-discussion.
I am responding here to the last message from Ed [7],
and I will paste his text below. I hope this is
clear:
> I would have expected, when we hold a ballot, for any Spec.,
> those components are expected to be ready for all required
> compatibility configurations -- both with a stand-alone
> compatible implementation and with a Platform implementation.
> If no suitable platform is available at the time the component
> is finished it is plausible that the platform will simply have
> to conform to the component TCK. If a Platform implementation
> is required and none is available, that component won't be
> ready for ballot. In a case where the Platform is, for some
> reason or other, going to imply changes on the component --
> that seems like new requirements and a new release to me.
The concrete situation is how we will deal with the EE 10
Platform TCK tests that *could* get added on top of
jaxrs-api/pull/1002 [8]. IMO, in order to
release EE 10 in the first quarter of 2022, we need
the overall development process to be as streamlined
as possible for this brand new situation where we are
adding new features to many EE specifications and also
starting the TCK refactoring to improve how easy it is
to add new TCK tests (as well as maintain the TCK
tests for the future).
I assert that the RESTful Web Services (3.1) Spec API
ballot should only be run once for the EE 10 release
(and the same for all other Specs), as that is what is
required, and that ballot can include the Java SE TCK
tests. But what are the options for releasing the
Platform level tests from [8] in a way that does not
require any Spec Ballot to be repeated?
> OR put another way: if the tests can't be frozen, then the
> Spec. won't be ready for a ballot.
So the Platform level tests cannot be frozen until the
Platform spec is frozen; but if some (new for EE 10)
Platform level tests will live in the jaxrs-api
repository [9], those Platform level tests shouldn't be
validated until after all of the Spec Ballots have
completed. IMO, the team maintaining the Platform level
tests in the jaxrs-api repository [9] will need a way to
fix any test defects identified before the Platform spec
is released. Also IMO, the Platform TCK tests in the
jaxrs-api repository [9] need a way to be released along
with any TCKs produced by the Platform TCK project.
> We should always operate on the principle -- the Spec
> encompasses all of the Spec. text, the Spec. binary
> artifacts, and the TCK. If any of these change -- we are
> effectively changing the Spec. (I need to keep reminding
> myself, Compatibility tests are not the same as product
> tests.) Under this principle, the tests are not subject to
> change after the release ballot is started. We did a bit of
> a sleight of hand to allow for adding additional JDK support
> between 9 and 9.1 but that was expressly not supposed to
> cause any test changes. There might have been other TCK test
> changes, but those all should have been due to challenges or
> other errata type fixes.
The change now is that we need to coordinate the release
of Platform TCK tests that are maintained by the
Standalone SPEC API teams. IMO, we should identify at
ballot time whether any Platform level tests are
maintained by the relevant Spec so we can coordinate
linking that Spec API with the Platform Spec final
release process.