Best practice to test M2T transformation [message #656416]
Fri, 25 February 2011 14:11
Niels Brouwers
Registered: July 2009
We are currently incorporating M2T transformations into transformation sequences that transform models at a low level of abstraction into program code (C/C++, Python, etc.). To assure quality and ensure that the transformations result in working implementations, we would like to test our generators better than we currently do: at the moment we manually inspect the generated source code and try to compile it. If our customers don't complain, it is assumed to work fine!
I started this topic to learn about the practices the community currently uses to test its code generators, and I hope this will be the start of a fruitful discussion.
Thinking about a better way to test our M2T transformations than the one described above led me to the following alternatives:
- Create a framework which automatically executes the transformation for a set of test models and compares the output of the transformation with a set of reference texts.
- Create one big test model which covers the majority of the requirements, generate code from this model, execute the implementation, and test this implementation with a (conventional) automatic tester.
The first alternative allows us to test the generator very extensively, but will also result in a large number of test models and corresponding reference text files. Furthermore, I am not aware of a framework which simplifies this task. Last but not least, I think it requires a lot of effort to prevent false positives due to irrelevant formatting differences (e.g. whitespace and comments in the code) and due to non-deterministic code generation (e.g. the order of variable declarations might not matter to the compiler, but it will matter to the M2T tester).
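To illustrate the first alternative, here is a minimal sketch of a golden-file comparison that normalizes away some of the irrelevant differences mentioned above (C-style comments and whitespace). The names `normalize` and `GeneratorGoldenTest` are hypothetical, and the actual invocation of the M2T transformation is assumed to happen elsewhere; sorting declarations to handle non-deterministic generation order would still need generator-specific logic on top of this.

```python
import re
import unittest


def normalize(source: str) -> str:
    """Strip C-style comments and collapse whitespace so that
    irrelevant formatting differences do not fail the comparison."""
    # Remove // line comments and /* ... */ block comments.
    source = re.sub(r"//[^\n]*", "", source)
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    # Collapse every run of whitespace to a single space.
    return re.sub(r"\s+", " ", source).strip()


class GeneratorGoldenTest(unittest.TestCase):
    """Compare generated output against a stored reference text."""

    def assert_matches_reference(self, generated: str, reference: str):
        self.assertEqual(normalize(generated), normalize(reference))
```

With a helper like this, each test case reduces to running the transformation on one test model and calling `assert_matches_reference` with the stored reference file, so new golden files can be added without changing the comparison logic.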
The second alternative is perceived to be less challenging: we can easily generate code from one big model, we can use existing unit testing frameworks (e.g. PyUnit or CxxUnit), and we can better test whether the resulting implementation behaves according to our expectations instead of being forced to prevent the false positives. However, choosing this alternative will likely not result in a high level of test coverage.
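For the second alternative, a sketch of the glue between the generator and a conventional unit tester might look like the helper below. It stands in for the real pipeline: `run_generated` is a hypothetical name, and writing the generated source to a temporary file takes the place of the actual M2T transformation output; a PyUnit test can then assert on the behaviour of the implementation rather than on its text.

```python
import os
import subprocess
import sys
import tempfile


def run_generated(source: str, args=()):
    """Write generated Python source to a temporary file, execute it
    in a fresh interpreter, and return its stdout, so that unit tests
    can assert on the behaviour of the generated implementation."""
    with tempfile.NamedTemporaryFile(
        "w", suffix=".py", delete=False
    ) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path, *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout
    finally:
        os.remove(path)
```

For C/C++ targets the same idea applies, with the execution step replaced by a compile-and-run of the generated sources against a CxxUnit test suite.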
My guess is that the second alternative is good enough for us and provides the best benefit/effort ratio. What do you think? Is there a better alternative that I've missed?