Re: [stellation-res] misc revision main@@4 - add test 10 (Bugzilla 21819), test 11 (Bugzilla 21826)

On Tuesday 23 July 2002 03:41 pm, Dave Shields wrote:
> On Tue, Jul 23, 2002 at 03:33:39PM -0400, Mark C. Chu-Carroll wrote:
> > Definitely do. But try to take a look at the JUnit stuff, which can give
> > us more precise testing.
> >
> > 	-Mark
>
> I took a look at the JUnit docs, and am unclear how to write a JUnit test
> that, for example, does repeated merge. Workspace tests tend to involve
> multiple runs of svc with non-trivial IO.

The trick is to create different kinds of tests. The script tests are
good at exercising whole-system function, but as the recent merge
glitch showed, they can miss cases where bugs interact in ways
that mask their effects. They also can't tell us which part of the
system is causing a problem.

For unit tests, we wouldn't run the whole system through multiple steps as
we do now. We'd build up some scaffolding that allows us to home in on
one specific piece and test it in a fairly intensive way.

For instance, for testing sync, there's really no reason to create a 
repository, or to do any checkins or checkouts. What sync really
does is take a project file and a workspace, and update the project
file to match the workspace.
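To make that concrete (every name here is invented for illustration, not the real Stellation API): if sync's whole job is "make the project file match the workspace", then its core can be modelled as a function from a workspace listing to a project listing, and that function can be exercised directly, with no repository at all. A minimal sketch, treating both as sets of relative paths:

```java
import java.util.Set;
import java.util.TreeSet;

public class SyncModel {
    // Hypothetical model: a "project file" and a "workspace" are both
    // just sets of relative paths. Sync rewrites the project listing
    // so that it matches what the workspace actually contains.
    static Set<String> sync(Set<String> projectFile, Set<String> workspace) {
        return new TreeSet<String>(workspace);
    }

    public static void main(String[] args) {
        Set<String> project = new TreeSet<String>();
        project.add("src/Main.java");       // listed in the project file
        Set<String> workspace = new TreeSet<String>();
        workspace.add("src/Main.java");
        workspace.add("src/Util.java");     // added in the workspace
        // The synced project file should pick up the new file.
        System.out.println(sync(project, workspace));
    }
}
```

A test then only has to compare the returned listing against an expected one, which is exactly the kind of check JUnit's assertions are built for.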

I suspect that the code as written might make it hard to write
unit tests. It often takes some scaffold code to set things up
for testing. Writing that extra code is a bit annoying, but the
payoff, in the long run, is enormous.

For the workspace, we probably want a test scaffold that lets us provide
a copy of a project file and a location for the workspace directory,
instead of letting the system just automatically read the project document
out of the metadata directory. 

A good set of unit tests for sync would have a set of Stellation
workspaces, and a bunch of project files, and one at a time, would
run sync, and then verify that the project file was correct.
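A sketch of what such a scaffold might look like (the class and method names are invented here; the real workspace code would need a hook to accept the project file and workspace root as parameters rather than reading the metadata directory):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class WorkspaceScaffold {
    private File workspaceDir;
    private File projectFile;

    // Build a throwaway workspace directory and project file under
    // java.io.tmpdir, instead of letting the system read the project
    // document out of the metadata directory.
    void setUp(String testName) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        workspaceDir = new File(tmp, "stellation-test-" + testName);
        workspaceDir.mkdirs();
        projectFile = new File(workspaceDir, "project.xml");
        projectFile.createNewFile();
    }

    // Drop a file with the given relative path into the workspace,
    // creating any intermediate directories.
    void addFile(String relPath, String contents) throws IOException {
        File f = new File(workspaceDir, relPath);
        f.getParentFile().mkdirs();
        FileWriter w = new FileWriter(f);
        w.write(contents);
        w.close();
    }

    File getWorkspaceDir() { return workspaceDir; }
    File getProjectFile() { return projectFile; }

    public static void main(String[] args) throws IOException {
        WorkspaceScaffold s = new WorkspaceScaffold();
        s.setUp("demo");
        s.addFile("src/Main.java", "class Main {}");
        System.out.println(new File(s.getWorkspaceDir(), "src/Main.java").exists());
    }
}
```

With something like this as a fixture, each test becomes: build a workspace, run sync against it, compare the resulting project file to the expected one.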

A good set of test cases would include:
- A case where the workspace is identical to the project file.
- Empty project file, non-empty workspace.
- Non-empty project file, empty workspace.
- Non-empty project file, workspace with additional non-directory files.
- Non-empty project file, workspace with additional empty directories.
- Non-empty project file, workspace with additional non-empty directories.
- Non-empty project file, workspace with non-overlapping contents.
- Non-empty project file, workspace with deleted non-directory files.
- Non-empty project file, workspace with deleted directories.
- Non-empty project file, workspace with both adds and deletes.
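Two of those cases, sketched as JUnit-style test methods. (The sync() here is the same hypothetical stand-in as above so the sketch is self-contained, and check() stands in for JUnit's assertEquals; none of these names are from the real code.)

```java
import java.util.Set;
import java.util.TreeSet;

public class SyncCasesSketch {
    // Hypothetical stand-in for the real sync: make the project
    // listing match the workspace listing.
    static Set<String> sync(Set<String> project, Set<String> workspace) {
        return new TreeSet<String>(workspace);
    }

    static Set<String> paths(String... names) {
        Set<String> s = new TreeSet<String>();
        for (String n : names) s.add(n);
        return s;
    }

    // Case: empty project file, non-empty workspace.
    static void testEmptyProjectNonEmptyWorkspace() {
        Set<String> result = sync(paths(), paths("a.txt", "b.txt"));
        check(paths("a.txt", "b.txt"), result);
    }

    // Case: non-empty project file, workspace with deleted
    // non-directory files.
    static void testDeletedNonDirectoryFiles() {
        Set<String> result = sync(paths("a.txt", "b.txt"), paths("a.txt"));
        check(paths("a.txt"), result);
    }

    // Stand-in for JUnit's assertEquals.
    static void check(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but got " + actual);
    }

    public static void main(String[] args) {
        testEmptyProjectNonEmptyWorkspace();
        testDeletedNonDirectoryFiles();
        System.out.println("all cases passed");
    }
}
```

The remaining cases in the list follow the same pattern: one method per workspace/project-file combination, each a few lines long.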

It's a lot of tests, but then sync is a pretty complex little piece of
code, and likely one that will be used often. Once a test scaffold
is written, these tests will be easier to both write and run
in JUnit than they would be with a shell script, and they'll go a long
way towards identifying when something breaks, and exactly what
is broken.

I'm going to try to write a set of tests like this for merge, separately
testing the different kinds of artifact merges, the repository branch merge
operator, and the workspace merge command.

	-Mark

-- 
Mark Craig Chu-Carroll,  IBM T.J. Watson Research Center  
*** The Stellation project: Advanced SCM for Collaboration
***		http://www.eclipse.org/stellation
*** Work Email: mcc@xxxxxxxxxxxxxx  ------- Personal Email: markcc@xxxxxxxxxxx



