Re: [wtp-dev] Performance tests


Jeffrey, you raise lots of good questions, and I'm not sure anyone knows the answers to them,
but I do know our overall plan is to follow the way the base Eclipse does it. So ... I'd encourage
you to start there, and see what their framework and procedure are, and if you see opportunity for
improvement, to let them (and us) know.

Thanks.



Jeffrey Liu <jeffliu@xxxxxxxxxx>
Sent by: wtp-dev-admin@xxxxxxxxxxx

10/31/2004 02:48 PM

Please respond to
wtp-dev

To
wtp-dev@xxxxxxxxxxx
cc
Subject
[wtp-dev] Performance tests

Hi all,

I have a few questions regarding performance tests. Are performance tests and
non-performance tests going to be run on the same build/test machine?
Performance tests are more sensitive to the environment they run in. Their
results are only accurate if the initial environment is reasonably the
same every time. Running performance tests right after a build, or after
non-performance tests, can affect their results. Ideally, rebooting the
system before each test run would give us more accurate numbers; however,
this is probably not possible given time limits. Is this being taken into
account?
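(A cheaper stand-in for rebooting between tests, as a minimal sketch: warm-up iterations followed by repeated timed trials, reporting the median to damp environmental noise. The `measure` helper and its parameter values are illustrative, not part of any existing WTP or Eclipse test harness.)

```python
import statistics
import time

def measure(task, warmups=3, trials=10):
    """Run `task` several times and report the median wall-clock time.

    Warm-up runs let caches, JIT compilers, and lazy initialization
    settle, so the timed trials start from a roughly comparable state.
    The median is less sensitive to one-off interference (e.g. a build
    finishing on the same machine) than a single measurement would be.
    """
    for _ in range(warmups):
        task()
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        task()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```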

The criteria for passing or failing a performance test are different
from those for non-performance tests. Are there any documents that spell out
what is considered a performance failure (e.g. an x% or x-second regression) and
what the next course of action is if one occurs (e.g. send an email, open a bug)?
Also, in some cases a performance regression can be justified, for example
by a very compelling feature. Is there any process for justifying changes that
cause performance regressions? Have these questions been answered or
documented somewhere? If not, I can help sort out these issues.
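(One way such a pass/fail criterion could be expressed, as a minimal sketch: flag a regression only when the slowdown exceeds both a relative and an absolute threshold, so tiny absolute changes on fast tests and noise-level percentage changes on slow tests are both ignored. The function name and the 10% / 0.5 s thresholds are hypothetical examples, not documented WTP policy.)

```python
def is_regression(baseline_secs, current_secs,
                  pct_threshold=10.0, abs_threshold_secs=0.5):
    """Return True if `current_secs` is slower than `baseline_secs` by
    more than `pct_threshold` percent AND more than `abs_threshold_secs`.

    Requiring both guards avoids flagging a 20% slowdown on a 0.01 s
    test, or a 0.6 s slowdown on a ten-minute test, as regressions.
    """
    delta = current_secs - baseline_secs
    return (delta > abs_threshold_secs
            and delta / baseline_secs * 100.0 > pct_threshold)
```

A harness could then send an email or open a bug whenever `is_regression` fires, with the justification process handling the cases where the team decides to accept the slowdown.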

Thanks,

Jeff

_______________________________________________
wtp-dev mailing list
wtp-dev@xxxxxxxxxxx
http://dev.eclipse.org/mailman/listinfo/wtp-dev

