1. As you said earlier, the performance results will not be very
meaningful unless the environment is controlled. This requires running
the tests on equivalent machines that are in the same state after each
build. For this reason I think the performance tests should be separated
out. The build can then do everything it does now, set the machine up for
performance testing, and run the performance test suite. (Because
performance tests take a long time to run, there should be a build option
to skip them; see the sketch after point 2.)
2. If the performance tests are separated out into a folder such as
'performance', I think the folder should be named 'performance-tests' or
something along those lines to be clear.
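To make the build option in point 1 concrete, here is a minimal Ant
sketch; the target and property names are made up, not existing WTP
build script names:

  <!-- driver build file: the performance run is skippable with a flag -->
  <target name="performance" unless="skip.performance">
    <echo message="setting the machine up and running the performance suite"/>
    <!-- machine setup and performance suite invocation go here -->
  </target>

Invoking the build with -Dskip.performance=true skips the target; a
normal invocation runs it.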
IBM Rational Software - XML Web Services Tooling
Phone: 905 - 413 - 3814 Fax: 905 - 413 - 4920
Jeffrey Liu/Toronto/IBM@IBMCA Sent by: wtp-dev-admin@xxxxxxxxxxx
11/02/2004 10:57 AM
Re: [wtp-dev] Performance
Thanks for the response. I want to get some performance tests created for
WTP, but am wondering how these testcases should be packaged. I have a
couple of ideas, but want to hear from other people.
Approach 1. Follow what the Platform project is doing... Create a
"performance" source folder in the existing tests plugins, package the
performance testcases into their own test suite, and create a
"performance" Ant target in test.xml. Performance tests are then run
exactly like non-performance tests, except that you need to pass the
"performance" target to the build/test scripts. I think this approach is
a little messy because tests plugins that consist solely of
non-performance testcases are required to specify a dummy/empty
"performance" target. Likewise, pure performance plugins need to specify
a dummy/empty non-performance target (a missing target can result in
build breaks)...
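To illustrate, a rough sketch of the two test.xml shapes this forces;
the plugin and suite names are placeholders, and the echo lines stand in
for the calls into the Eclipse test framework that actually launch the
suites:

  <!-- test.xml of a plugin that has real performance testcases -->
  <project name="testsuite" default="run" basedir=".">
    <target name="run">
      <echo message="launching org.example.tests.AllTests"/>
    </target>
    <target name="performance">
      <echo message="launching org.example.tests.performance.PerformanceTests"/>
    </target>
  </project>

  <!-- test.xml of a plugin with only non-performance testcases: it must
       still declare a dummy "performance" target so the driver scripts
       do not break on a missing target -->
  <project name="testsuite" default="run" basedir=".">
    <target name="run">
      <echo message="launching org.example.tests.AllTests"/>
    </target>
    <target name="performance"/>
  </project>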
Approach 2. I suggest we make a better separation between performance
tests and non-performance tests by adding a "performance" folder to the
directory structure and putting the performance plugins in that folder.
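One way to picture that layout (folder and plugin names are only
illustrative):

  plugins/                            function plugins
  tests/                              non-performance tests plugins
    org.example.tests/
  performance/                        performance tests plugins only
    org.example.performance.tests/

The performance plugins would then be built and run only by the
performance part of the build.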
>I have a few questions regarding performance tests. Are performance
>tests and non-performance tests going to be run on the same build/test
>machine?
Currently, yes. However, there are no performance tests committed yet.
>Performance tests are more sensitive to the environment they run in.
>The results are only accurate if the initial environment is reasonably
>the same every time. Running performance tests right after a build or
>after non-performance tests can affect their results. Ideally, rebooting
>the system before running each test would give us more accurate numbers,
>but this is probably not possible given time limits. Is this being taken
>into account?
No, but I would be interested in hearing more. The build machines are
IBM xSeries Xeon boxes with 2 GB of RAM running Red Hat Linux. They seem
to cope well with builds (i.e. the utilization is still very low). It
sounds too radical to me to reboot a machine to get a "reset"; Linux
should do a good job managing these issues. However, the build machine is
rebooted only a few times a month, so it is likely that there are some
issues for performance testing. Unfortunately, we (eteration) will not be
able to dedicate a separate machine for perf-tests only.
>The criteria for passing or failing performance tests are different
>from those for non-performance tests. Are there any documents that spell
>out what is considered a performance failure (ex. x% or x seconds
>regression) and the next course of action if this happens (ex. send an
>email, open a bug)? Also, in some cases, a performance regression can be
>justified, for example, by a very compelling feature. Is there any
>process for justifying changes that cause performance regressions? Have
>these questions been answered or documented somewhere? If not, I can
>help out on sorting out these issues.
Currently, there are no documents beyond those of base Eclipse. But
please do this; it will be very valuable.