Here is what I would recommend…
Set up one test job that runs a small subset of your tests. Have it
triggered by completion of the build. That will give you a quick smoke test to
flush out any obvious issues.
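As a build step for that smoke job, something along these lines could work. This is a minimal sketch, assuming the fast subset lives under a hypothetical tests/smoke directory (the layout is my assumption, not something from your setup):

```shell
# Sketch of a smoke-test build step. "tests/smoke" is a hypothetical
# directory holding only the fast subset of the suite.
SMOKE_DIR="tests/smoke"
FAILED=0
for t in "$SMOKE_DIR"/*.sh; do
  [ -e "$t" ] || continue   # glob matched nothing: no smoke tests present
  echo "smoke: $t"
  # sh "$t" || FAILED=1     # uncomment to actually execute each test
done
echo "smoke failures: $FAILED"
```

The point is just to keep this job fast and fail it loudly; the full suite lives in the other job.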
Set up another test job that runs all of your tests. Trigger it
manually when you want to “promote” a build at whatever the
appropriate frequency is for your project. If the entire test suite is too
large, consider creating several jobs. Alternatively, a multi-configuration job
(a distinct job type, selectable at creation time only) is an
extremely powerful construct. It allows you to configure a matrix with any
number of axes that you define. You can have an axis for different platforms
you support, an axis for your components, etc. Each cell in that matrix runs
separately, but you provide only one job definition. This means your tests can
run in parallel if executors are available and some cells can fail or be
aborted without affecting the rest of the matrix. Hudson tracks completion of
the matrix and you get a nice overview at the end. If you want to run multiple
test jobs, Hudson has a merge feature that allows you to bring it all together
for the purpose of promotion, etc. In my experience, the approach of breaking
up a large test set using multiple jobs is more difficult to configure and
maintain than using one multi-configuration job.
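Inside each cell, the axis values are exposed to the build as environment variables named after the axes, so one build step can cover the whole matrix. A rough sketch, where "platform" and "component" are hypothetical axis names of my own choosing:

```shell
# Sketch of a matrix-cell build step. Each axis value arrives as an
# environment variable named after the axis; "platform" and "component"
# are hypothetical axis names, with defaults here so the sketch is runnable.
platform="${platform:-linux}"
component="${component:-core}"

echo "Running ${component} tests on ${platform}"

# One job definition dispatches to the right script per cell:
TEST_SCRIPT="tests/${component}/run-${platform}.sh"
echo "Would execute: $TEST_SCRIPT"
# sh "$TEST_SCRIPT"   # uncomment to actually run the cell's tests
```

Because every cell runs this same step, adding a platform or component is just a matter of adding an axis value rather than cloning a job.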
To configure a test job, check “this build is
parameterized” and then add a “run parameter”. Specify your build
job for the “project”. This way, when the test job is invoked, an
environment variable will be set with a URL to the build run that it should
use. You can use that URL to download build artifacts that you can use to run
tests.
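For example, a build step in the test job might look like this. It assumes the run parameter was named BUILD_RUN and that the build job archives its artifact at a particular path; both the parameter name and the artifact path are illustrative assumptions:

```shell
# Sketch: Hudson sets an environment variable (here assumed to be named
# BUILD_RUN) to the URL of the selected build run. The default URL and
# the artifact path below are hypothetical, for illustration only.
BUILD_RUN="${BUILD_RUN:-http://hudson.example.org/job/my-build/42/}"

# Artifacts of a run are served under <run URL>/artifact/<path>.
ARTIFACT_URL="${BUILD_RUN%/}/artifact/dist/my-app.zip"
echo "Fetching $ARTIFACT_URL"

# curl -fSL -o my-app.zip "$ARTIFACT_URL"   # uncomment to actually download
```

Downloading from the run URL rather than "latest" keeps the tests pinned to the exact build being promoted.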
You may be tempted to setup the long test job to run on a timer
(nightly, weekly, etc.). That almost works with existing Hudson facilities. The
gotcha is that when a timer kicks off a job parameterized with a run parameter,
it grabs the latest run parameter without regard for whether that run has
completed (it could still be running) or failed. There is an outstanding enhancement
request on Hudson to add a “Good Run Parameter” option. In the
meantime, I would recommend limiting yourself to kicking off test jobs after a build
job or triggering them manually.
Hope that is helpful.
- Konstantin
From: cross-project-issues-dev-bounces@xxxxxxxxxxx
[mailto:cross-project-issues-dev-bounces@xxxxxxxxxxx] On Behalf Of Miles
Parker
Sent: Thursday, September 30, 2010 2:55 PM
To: Cross project issues
Subject: Re: [cross-project-issues-dev] cbi-papyrus-0.7-nightly build is
taking ages
On Sep 30, 2010, at 1:35 PM, Konstantin Komissarchik wrote:
For running tests, you need very little I/O power and good
amount of CPU power. Since tests are more likely to fubar a node and thereby
cause interference to other jobs, I recommend creating a node/VM per-core as
long as memory allows and using only one Hudson executor per node/VM.
This brings up the issue of UI testing. It makes a lot of
sense to separate those out. I've got some SWTBot tests that could conceivably
take up a lot of time... in fact I haven't been putting too much thought into how
to get them running hosted just because of potential resource use. Any special
recommendations on good strategies for UI?
One other related issue. If a test fails, we probably
shouldn't be promoting, I'd think -- not even nightly. Has anyone given thought
to how this might be accomplished?