[dsdp-dd-dev] DD project manual testing

Hi All,
Starting with DD M4, I would like us to adopt a standard process for testing the milestone drops. Why have a standard process for manual testing, and why do manual testing at all?

To answer the latter: we have a growing set of unit tests for the GDB/MI reference implementation, as well as a few (with many more planned) unit tests for the DSF framework itself, both of which we plan to add to the nightly build process soon. However, these tests do not cover a lot of code that is close to the UI APIs, and even within non-UI code we are not likely to achieve 100% coverage. Also, I do not know of any plans to add unit tests to the IP-XACT editor or the Memory Rendering component. Hence, the only way we can be confident that a given release of the Device Debugging project does in fact work as expected is by performing some manual testing on it.

As for the former: the DD project is not actually required to have any formal test plan. However, without coordination it is likely that much testing effort will be duplicated and that large areas of functionality will go untested for extended periods of time. Worst of all, without a test plan we won't know what is being tested, and again we will have no confidence in our releases.

With all that in mind, I spent a good part of last week setting up wiki pages to help us create and track our testing efforts in DD; these can be found at the top-level page. I modeled the content of the test documentation after the Target Management project, although I tried to simplify it a bit, since DD has less of a focus on different host/target platform combinations and has fewer releases to manage. Note that this test plan page primarily applies to sub-groups that actually develop code in the DD project, although if other groups, such as disassembly, would like to use this page to coordinate their test effort in the future, that's perfectly OK.
The most valuable part of this documentation is the page documenting the manual tests, and the goal for the M4 test effort is to start filling in this plan. Rather than listing detailed test procedures with code references etc., this page lists (in detail) the features that are to be tested. This is the way that the Target Management project and others document their test plans, and I hope that it is lightweight enough to encourage people to participate in testing. It answers the question of what to test, not how to test it. I'm not sure we need a standard process for documenting how to test various features, because standardizing on this could be very laborious and dependent on the feature being tested. For example, for a GDB sanity test, I created a test project, checked it into /cvsroot/dsdp/org.eclipse.dd.dsf/tests/SanityTest, and added the test instructions into the test file itself. So unless anyone objects, I would like to leave the method of documenting the test procedure up to the individual testers.

Next step: what I would love to see is that during the DD M4 testing period, which is the first week of January after the holiday break, the active DD committers, and anyone else who would like to participate, sign up to test a portion of the DD feature set, fill in the missing details in the test plan for that feature set, and actually do some testing and file bugs :-). BTW, this includes the IP-XACT and Memory Rendering components.

