Eclipse Community Forums
Home » Eclipse Projects » Eclipse Titan » Titanium: support for improving test quality (Who is testing the tests?)
Titanium: support for improving test quality. [message #1746489] Mon, 31 October 2016 08:00
Kristof Szabados
Messages: 30
Registered: July 2015
The set of Eclipse plug-ins for Titan consists of four plug-ins: the Designer, Executor, LogViewer and Titanium. This forum entry is all about the fourth plug-in, Titanium.

The TTCN-3 code executed by Titan is software that is typically used to test other software, and this raises a number of questions. We use tests to discover bugs and performance issues, or simply to better understand how a system will behave; but how do we know the test software is more credible and trustworthy than the software we test? How do we establish its degree of trustworthiness? How do we measure the quality of tests?
(By the way, there is an obvious recursion here: how do we test the software used to test the software that is used to test the software, and so on? In practice the regress is limited by common sense and practicality.)

The whole test flow is also characterized by a number of hard-to-quantify criteria:
- Tests require domain specific knowledge: to be able to test the functionality of a system, the test writer must have an understanding of the tested system.
- Tests have to be robust: when a system needs to run for days, the test system must also remain stable for that long.
- Tests have to provide load for performance testing if required.
- Tests have to be able to handle complexity; for instance, when testing a telecommunication node, the tests should be able to simulate the whole network around it.

Of course, not every single test has to fulfil all of these criteria, but the system of tests as a whole is expected to be comparable in knowledge, robustness, load handling, size, and complexity to the system under test.

Titanium is a tool that aims to provide testers with the static analysis capabilities previously available only to developers of production systems. TTCN-3 test code can be automatically checked for code quality issues that would be very hard to detect otherwise.

1) Code smells are suspicious code sequences that could, but do not necessarily, cause problems.
For example:
- Setting the reason of a fail verdict takes little effort when writing the code, but could save hours of debugging.
- Short-hand notations can be used to create a quick draft of the test system, but as the system grows they start to cause more and more trouble.
- Performance optimizations may be needed in some functions to meet hard time limits, but done incorrectly they can turn out to be overcomplicated, inconsistent, or applied prematurely.
- Having some unnecessary imports in the code might be OK during some refactoring sessions when large parts of the code are moved around between modules. But there should be none remaining once the refactoring session is finished, as these slow down the build and so the testing itself.
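As a toy illustration of the unused-import smell above (this is not Titanium's actual implementation, which works on a full semantic model; the regexes and the sample module here are made up), a crude textual scan over TTCN-3-like source could look like this:

```python
import re

def find_unused_imports(ttcn3_source: str) -> list[str]:
    """Flag modules imported via 'import from X all;' but never
    referenced elsewhere in the source. A simple textual heuristic,
    not a real TTCN-3 semantic analysis."""
    imports = re.findall(r'import\s+from\s+(\w+)\b', ttcn3_source)
    # Strip the import statements themselves before searching for uses.
    body = re.sub(r'import\s+from\s+\w+[^;]*;', '', ttcn3_source)
    return [m for m in imports if not re.search(r'\b%s\b' % re.escape(m), body)]

# Hypothetical module: Helpers is used, LegacyUtils is not.
module = """
module MyTests {
  import from Helpers all;
  import from LegacyUtils all;
  testcase tc_demo() runs on MTC {
    Helpers.f_setup();
    setverdict(pass);
  }
}
"""
print(find_unused_imports(module))  # -> ['LegacyUtils']
```

A real analyzer must of course also handle imports used only implicitly (unqualified references, templates, types), which is exactly why tool support beats manual inspection here.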

These kinds of issues are not clearly black or white; they cannot be reported as errors, since in some cases they are perfectly acceptable. For this reason, the severity of each code smell can be configured individually, or its detection turned off entirely. Titanium errs on the cautious side, detecting as many suspicious locations as possible, at the cost of false positives that need to be thought through.

As a practical example, a tester wanted to check all "out" formal parameters in his test system to find ones that could be read before being written. He estimated the number of locations to check in the tens of thousands, and the effort at several weeks. Once a code smell detector for this kind of issue was developed, only 42 locations needed manual validation (of which 37 were real issues), reducing the task to a bit more than an hour.

2) Titanium is also able to measure several metrics: lines of code in the project and in functions/altsteps/testcases, McCabe complexity, number of formal parameters, complexity of expressions, etc.

These kinds of measurements are more useful for advanced developers and system architects, as they are more abstract and less actionable than code smells.
It is important to know which functions are too large or complex, but making the code more maintainable could require smaller architectural changes. Until our refactoring research becomes available, even extracting code into a new function is complex and tedious work.
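To make the McCabe metric mentioned above concrete: cyclomatic complexity is commonly approximated as one plus the number of branch points. The sketch below (my own illustration, not Titanium's implementation; the keyword list for TTCN-3-like code is an assumption) shows the idea:

```python
import re

# Keywords assumed to open a decision branch in TTCN-3-like code.
DECISION_KEYWORDS = ('if', 'for', 'while', 'do', 'alt', 'case')

def mccabe_estimate(body: str) -> int:
    """Rough cyclomatic complexity: one plus the number of branch
    points. A real analyzer walks the AST; this keyword count is
    only an approximation for illustration."""
    count = 0
    for kw in DECISION_KEYWORDS:
        count += len(re.findall(r'\b%s\b' % kw, body))
    return count + 1

snippet = """
if (x > 0) { for (var integer i := 0; i < n; i := i + 1) { log(i); } }
else { while (retry) { retry := f_try(); } }
"""
print(mccabe_estimate(snippet))  # if + for + while, plus one -> 4
```

Functions scoring above a configured threshold are the natural first candidates for splitting up, which is where the refactoring difficulty mentioned above bites.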

3) Titanium also supports the extraction and visualization of both module and component hierarchy (including clustering).

This feature is for architects, who need a broad overview of the whole system at once. They can find parts of the structure that can be removed, or that need to be wired in differently. It can also be used to find dependency cycles and modules not related to the project at all.
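The dependency-cycle check mentioned above can be sketched as a depth-first search over the module import graph. This is only an illustration of the principle (the graph, module names, and function are hypothetical, not Titanium code):

```python
def find_import_cycles(deps: dict[str, list[str]]) -> list[list[str]]:
    """Report circular dependencies in a module import graph via DFS.
    `deps` maps each module to the modules it imports. A simple sketch:
    it finds cycles reachable on the first traversal, not every cycle."""
    cycles: list[list[str]] = []
    on_path: list[str] = []
    visited: set[str] = set()

    def dfs(node: str) -> None:
        if node in on_path:
            # Closed a loop: report the cycle portion of the path.
            cycles.append(on_path[on_path.index(node):] + [node])
            return
        if node in visited:
            return
        visited.add(node)
        on_path.append(node)
        for nxt in deps.get(node, []):
            dfs(nxt)
        on_path.pop()

    for mod in deps:
        dfs(mod)
    return cycles

# Hypothetical module graph: A imports B, B imports C, C imports A.
graph = {"A": ["B"], "B": ["C"], "C": ["A"], "D": []}
print(find_import_cycles(graph))  # -> [['A', 'B', 'C', 'A']]
```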

4) Titanium is also an active research project, with the aim of providing testers with static analysis features as good as, or even better than, those available to the developers of the tested systems.

We have already shown that:
- The usual ISO/IEC software quality standards, and some of the C/C++/Java code smells, are applicable to TTCN-3 for finding quality problems.
- These code smells are present even in standardized test systems.
- If all instances had to be removed, it would take thousands of hours of re-coding to clean up the standardized test systems.
- These test systems also harbor architectural issues (particularly circular dependencies at module and folder level).
- The network of modules in TTCN-3 test systems shows scale-free and small-world properties, just like production software systems.

There are many similarities in what quality tests and quality production systems have to know, support, and handle. So it is perhaps not surprising that in our 2015 survey, testers and software developers gave very similar responses to our process- and methods-related questions.

If you are interested in this topic from a learning or academic point of view, you are welcome to contribute research on related subjects, or to extend and support your own research with data collected by Titanium.