Re: [jdt-dev] Builds: regressions, compliance levels ...

Just to add the data from our first Java 10 based gerrit build:

 95,268 tests (+12507)
 Took 1 hr 13 min.

Faster than I expected. Unfortunately, we can't trust Jenkins'
breakdown of execution times - test.compiler.regression is said
to have taken 4 days 2 hrs ;p - but surefire results show 47 min.
spent in that test suite alone, so I believe we could significantly
cut down execution times.
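In case anyone wants to double-check: a minimal sketch of summing the
per-suite times straight from the surefire XML reports, bypassing the
Jenkins breakdown. The report directory and file naming follow the usual
surefire convention (TEST-*.xml with a `time` attribute on `<testsuite>`);
the function name is just mine.

```python
import glob
import xml.etree.ElementTree as ET

def suite_times(report_dir):
    """Return {suite name: total seconds} from surefire TEST-*.xml reports."""
    times = {}
    for path in glob.glob(f"{report_dir}/TEST-*.xml"):
        root = ET.parse(path).getroot()
        name = root.get("name", path)
        # accumulate in case a suite was run (and reported) more than once
        times[name] = times.get(name, 0.0) + float(root.get("time", 0))
    return times

# print suites slowest-first, e.g.:
# for name, secs in sorted(suite_times("target/surefire-reports").items(),
#                          key=lambda kv: -kv[1]):
#     print(f"{secs / 60:7.1f} min  {name}")
```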

Stephan

On 22.07.2018 12:39, Stephan Herrmann wrote:
Hi team,

Today I noticed a regression caused by my recent fix in bug 537089
(heads-up: by using IntersectionTypeBinding18 more frequently,
we may see some follow-up bugs in the future - see the bug).

I only noticed this particular regression by following changes in the
test results of eclipse.jdt.core-run.javac-10 [1] (which I updated from
9 to 10 only recently). Thanks to Jenkins for showing the differences
in the first place!

This once more raises a couple of questions regarding our test strategy:

The regular gerrit job didn't report a problem because it ran tests on 9,
and hence the Java 10 specific tests didn't run at all. I'm currently
testing running that same job on Java 10.
But still:
- shouldn't we go straight to Java 11 ea for BETA_JAVA11 work?
- is it good to run gerrit jobs against all of 1.3 .. 11ea,
   i.e., at 9 distinct compliance levels?
   This is becoming a problem of long test runs again.
   The currently running #1037 [2] will give an idea of _how_ long it runs.
   For gerrit tests I would propose some subset like:
   1.4, 1.7, 1.8, 10, 11ea
   The idea being: major changes came in 1.5, 1.8, and 9, so we'd like to
   test the most recent version _prior_ to each change (1.4, 1.7, 1.8),
   plus very recent versions (10, 11ea).
   WDYT?
With 2 new compliance levels per year, and with Oracle's plan to drop
support for most old versions, will we need a new routine for deciding
which compliance levels we test, one that automatically updates twice
a year?
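Such a routine could be as simple as this sketch of the selection rule I
proposed above: pick the release just before each major language change,
plus the two most recent releases. The function name and the shape of the
release list are my assumptions, not anything we have in the build today.

```python
# Releases that introduced major language changes (per the proposal above).
MAJOR_CHANGES = ["1.5", "1.8", "9"]

def gerrit_compliance_subset(all_levels):
    """all_levels: supported compliance levels, oldest to newest,
    e.g. ["1.3", "1.4", ..., "10", "11ea"]."""
    picks = []
    for change in MAJOR_CHANGES:
        idx = all_levels.index(change)
        if idx > 0:
            # most recent version _prior_ to the change
            picks.append(all_levels[idx - 1])
    # plus very recent versions
    picks.extend(all_levels[-2:])
    # de-duplicate while keeping oldest-to-newest order
    return sorted(set(picks), key=all_levels.index)
```

With today's nine levels this yields exactly 1.4, 1.7, 1.8, 10, 11ea, and
it would update itself as levels are added or dropped at either end.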


During platform builds we have a number of known failures which can
easily hide real, fresh regressions when they occur.
- Does the platform build provide a means to inspect the differences
   between test runs (like Jenkins does, see above)?
- What priority do we give to getting "clean" on platform builds,
   i.e., to being free of known, constant regressions?
   (I say 'constant' because I know we have a few intermittent
   failures, which we may not have the means to analyse & fix.)
   Can we expect releng support for issues relating to the test machine's
   local configuration (e.g., char encoding)?


Most of this is said with a focus on JDT/Core, but I guess the other
components are similarly affected, or is it just Core?

cheers,
Stephan

[1] https://ci.eclipse.org/jdt/job/eclipse.jdt.core-run.javac-10
[2] https://ci.eclipse.org/jdt/job/eclipse.jdt.core-Gerrit/1037


