It does *not* effectively eliminate that test coverage. The failures will still be there, and developers will still run into them by default.
Do you honestly think forcing a developer to repeatedly see a failure in code he's not familiar with will motivate him to learn that code and address the failure? It won't. What it will do is reduce his confidence in the tests and make him less motivated
to run them. As I said in my first email, if a developer can't get a clean baseline to work from, regression detection becomes much more difficult. I'm trying to address that.
The failures I'm seeing don't seem rooted in environmental conditions, though I will double-check that before adding the guard. For example, there are tests that failed because I didn't have MinGW in my PATH. Those are not the type of failures I'm looking
to make avoidable. Clearly the proper action there is to correct your environment to meet the tests' requirements.
John
From: cdt-dev-bounces@xxxxxxxxxxx [cdt-dev-bounces@xxxxxxxxxxx] on behalf of Sergey Prigogin [eclipse.sprigogin@xxxxxxxxx]
Sent: Thursday, August 16, 2012 9:35 PM
To: CDT General developers list.
Subject: Re: [cdt-dev] proposal for side-stepping known test failures
On Thu, Aug 16, 2012 at 6:36 PM, Cortell John-RAT042
<RAT042@xxxxxxxxxxxxx> wrote:
You make an assumption that the test failures are caused by Windows and not by some other environmental condition. You are absolutely right that the tests have to run on Windows. Since we don't have Hudson on Windows, the only Windows test coverage
we get comes from developers who use Windows as their development environment. Your proposal would effectively eliminate that test coverage.
What isn't adequate, IMO, is that the tests are only being exercised (by you and Hudson) on Linux. I'm trying them on Windows and there are failures. My changes won't remove the tests. As I said, they'll allow me and others to avoid tests that aren't stable,
on an as-needed basis.
John
-sergey
Sent: Thursday, August 16, 2012 7:27 PM
To: CDT General developers list.
Subject: Re: [cdt-dev] proposal for side-stepping known test failures
On Thu, Aug 16, 2012 at 5:18 PM, Cortell John-RAT042
<RAT042@xxxxxxxxxxxxx> wrote:
I run the tests with high memory ceilings, too. The only difference I see is OS.
Anyway, the point is there are failures. The failures are not in code I'm sufficiently familiar with, otherwise I'd troubleshoot them. So, I'm going to make them avoidable as per my proposal, unless there are objections.
This is a slippery slope. Please don't consider your testing adequate if you see failures of tests that don't happen on Hudson.
-sergey
On Thu, Aug 16, 2012 at 3:36 PM, Cortell John-RAT042
<RAT042@xxxxxxxxxxxxx> wrote:
There are failures in the following plugins when I run the tests locally on my Windows machine. I have the very latest master sources.
codan/org.eclipse.cdt.codan.core.test
core/org.eclipse.cdt.core.tests
core/org.eclipse.cdt.ui.tests
These tests work for me on Linux when I run them with -Xmx768m -XX:MaxPermSize=192M
xlc/org.eclipse.cdt.core.lrparser.xlc.tests (*)
These tests were excluded because they were not properly maintained.
build/org.eclipse.cdt.make.core.tests (*)
Don't know about these.
I’ll be happy to communicate the details to anyone who’s interested in pursuing them. Note that the last two plugins are not in the Hudson report. Do you
know why they’re not being tested?
Any test that intermittently fails should be optionally avoidable.
John
-sergey
https://hudson.eclipse.org/hudson/job/cdt-nightly/lastCompletedBuild/testReport/ shows only two failures, both of which are intermittent.
Which test failures are you referring to?
-sergey
On Thu, Aug 16, 2012 at 3:03 PM, Cortell John-RAT042 <RAT042@xxxxxxxxxxxxx> wrote:
The CDT tests appear to have known failures--more than just a handful. This makes it very difficult to detect new failures caused by new changes.
I’d like to wrap known failures as follows:
if (System.getProperty("cdt.skip.known.test.failures") == null) {
    // the failing test
}
This will allow me (and others) to get a clean test run before starting on a set of changes. I can then run the tests again after my changes and immediately find out if I've broken anything. Right now, I'd have to manually/visually filter
out dozens of known failures from the results, which is tedious, time-consuming, and error-prone. This is much better than simply commenting out broken tests and putting a "TODO", as it actually leaves the broken tests active in the codebase.
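To make the intent concrete, the guard could live in a small helper so each known-failing test checks one method instead of repeating the property lookup. This is only a sketch of the idea above; the property name is from the proposal, while the class and method names here are illustrative, not existing CDT code:

```java
// Hypothetical helper for the proposed guard. A known-failing test body
// runs only while the "cdt.skip.known.test.failures" system property is unset.
public class KnownFailureGuard {

    // Property name taken from the proposal in this thread.
    static final String SKIP_PROPERTY = "cdt.skip.known.test.failures";

    // True when known-failing tests should still be executed (the default).
    static boolean shouldRun() {
        return System.getProperty(SKIP_PROPERTY) == null;
    }

    public static void main(String[] args) {
        if (shouldRun()) {
            System.out.println("running known-failing test");
        } else {
            System.out.println("skipping known-failing test");
        }
    }
}
```

A developer wanting the clean baseline would then launch the test run with something like `-Dcdt.skip.known.test.failures=true` in the VM arguments; leaving the property unset keeps every test active, so default coverage is unchanged.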
Objections?
John
_______________________________________________
cdt-dev mailing list
cdt-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/cdt-dev