Re: [p2-dev] Coverage update
- From: Scott Lewis <slewis@xxxxxxxxxxxxx>
- Date: Tue, 16 Sep 2008 21:08:35 -0700
- Delivered-to: email@example.com
- User-agent: Thunderbird 184.108.40.206 (Windows/20080708)
Pascal Rapicault wrote:
> We can run the current tests against public servers in our automated
> build (there's obviously some work for us there but hopefully we could
> ride off the releng support that you already have...).
> Are you suggesting that currently the tests are not being run, or are
> you just annoyed by the overhead of maintaining the infrastructure
> necessary to run the tests?
The tests are currently being run manually, not automatically. We have
planned, and still plan, to run them automatically after our automated
build. With everything else we haven't had enough time to implement the
automated test infrastructure yet, so we're hoping that we can reuse
what's available.
> I can understand the annoyance of the infrastructure aspect, but the
> lack of a good common build infrastructure can't be used as an argument
> to get the tests run by somebody else (you may want to make sure that
> the common build infrastructure covers your needs).
We have been/are involved in the common build infrastructure work and
will do what we can to make sure it covers our needs.
> BUT, what would
> really be helpful is running the filetransfer tests in network
> environments we can't reproduce (e.g. on a set of servers internal to
> IBM...behind proxies, etc) and with multiple targets (i.e. a *much*
> larger set of servers...both behind firewalls/proxies, outside of
> firewalls/proxies, servers with strange/incorrect behavior,
> misconfigured servers, etc).
> So we can obviously run tests ourselves against a variety of target
> servers, if they are available on the Internet. But we can't put
> a variety of proxied environments, or point it at servers behind
> firewalls, finicky/misconfigured servers, etc. If there was some way
> for that to happen via you and/or others on the p2/equinox team (e.g.
> compeople, etc) that would be most helpful to us.
> As I said in this week's call, I'm very interested in getting "funny"
> setups tested on a regular basis. However, the difficulty is in finding
> both the required pieces of infrastructure/setup to go through (e.g.
> proxy, authentication, etc.) and the particular server setups that
> cause problems.
Yes, that's right.
> Currently in IBM I don't have access to any of those, and even if I
> did, there would be the additional burden of using them from our test
> machines; I would be unlikely to have access to all the required
> setups. That said, I will double-check with our infra team.
> For now I think that the most efficient way to test our transports is
> to have individuals, each in their specific environments, running the
> tests (much like the ECF community has been doing so far).
Yes, we'll continue to depend on both the ECF community and p2 community
for testing...as per the request today WRT httpclient testing.
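For anyone running the filetransfer tests from behind a proxy, one common approach is to rely on the standard JVM proxy system properties. This is only a sketch; the thread doesn't specify how the tests pick up proxy settings, and the host/port values below are placeholders, not real servers:

```java
// Sketch: configuring the standard JVM HTTP proxy properties that a
// URLConnection-based transport would pick up. Equivalent to passing
// -Dhttp.proxyHost=... -Dhttp.proxyPort=... on the test launch line.
public class ProxyTestSetup {
    public static void main(String[] args) {
        // Placeholder proxy host/port -- substitute your environment's values.
        System.setProperty("http.proxyHost", "proxy.example.com");
        System.setProperty("http.proxyPort", "8080");
        // Hosts that should bypass the proxy.
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1");

        // Echo the effective settings so a test run can verify them.
        System.out.println("proxy=" + System.getProperty("http.proxyHost")
                + ":" + System.getProperty("http.proxyPort"));
    }
}
```

Setting these at JVM launch (rather than in code) would let the same test binary be pointed at each contributor's local proxy setup without modification.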
> Therefore the most important thing for now is to put together a
> standalone test app that is *easily* available and can be run easily,
> or to have a test feature that serves the same role. Potentially there
> would even be two apps/features: one for p2, one for ECF.
We don't have a tests feature, as that is yet more releng work that we
probably can't do immediately. We do, however, have a tests project set
file for filetransfer:
We also have to reorganize our features to allow easier consumption of
the relevant pieces of ECF, but that's going to take a while (probably a
goal in the ECF plan for 3.5/ECF 3.0).