Best practice to test M2T transformation [message #656416]
Fri, 25 February 2011 14:11
Niels Brouwers
We are currently incorporating M2T transformations within transformation sequences to transform models at a low level of abstraction into program code (C/C++, Python, etc.). To assure quality and ensure that the transformations result in working implementations, we would like to test our generators more thoroughly than we currently do: at the moment we manually inspect the generated source code and try to compile it. If our customers don't complain, it is assumed to work fine!
I started this topic to learn about the practices the community currently uses to test its code generators, and I hope this will be the start of a fruitful discussion.
Thinking about better ways to test our M2T transformations than the one described above resulted in the following two alternatives:
- Creating a framework which automatically executes the transformation for a set of test models and compares the output of the transformation with a set of reference texts.
- Creating one big test model which covers the majority of the requirements, generating code from it, executing the implementation and testing it with a (conventional) automatic tester.
The first alternative allows us to test the generator very extensively, but it will also result in a large number of test models and corresponding reference text files. Furthermore, I am not aware of a framework which simplifies this task. Last but not least, I think it requires a lot of effort to prevent false positives caused by irrelevant formatting differences (e.g. whitespace and comments in the code) and by non-deterministic code generation (e.g. the order of variable declarations may not matter to the compiler, but it does matter to the M2T tester).
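To make the first alternative more concrete, below is a rough PyUnit sketch of such a reference-text comparison. The run_generator helper, the directory layout and the .model extension are placeholders for whatever tooling is actually used, and the normalization only covers whitespace and C-style comments, so take it as an illustration rather than a ready-made framework:

import difflib
import re
import unittest
from pathlib import Path

# Placeholder helper: runs the M2T generator on one model file and returns
# a mapping from generated file name to generated text.
from generator_harness import run_generator  # assumed, not an existing library

def normalize(text):
    """Strip comments and collapse whitespace so that irrelevant formatting
    differences do not cause false positives."""
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.S)  # drop C/C++ block comments
    text = re.sub(r"//[^\n]*", "", text)               # drop C/C++ line comments
    lines = (re.sub(r"\s+", " ", line).strip() for line in text.splitlines())
    return "\n".join(line for line in lines if line)

class GoldenFileTest(unittest.TestCase):
    MODELS_DIR = Path("test-models")
    REFERENCE_DIR = Path("reference-output")

    def test_generated_code_matches_reference(self):
        for model in sorted(self.MODELS_DIR.glob("*.model")):
            generated = run_generator(model)
            for file_name, content in generated.items():
                reference = (self.REFERENCE_DIR / model.stem / file_name).read_text()
                diff = "\n".join(difflib.unified_diff(
                    normalize(reference).splitlines(),
                    normalize(content).splitlines(),
                    fromfile="reference", tofile="generated", lineterm=""))
                self.assertEqual("", diff, f"{model.name}/{file_name} differs:\n{diff}")

if __name__ == "__main__":
    unittest.main()

Non-deterministic generation (e.g. declaration order) would still need extra handling on top of this, for instance by sorting declarations before diffing.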
The second alternative is perceived to be less challenging: we can easily generate code from one big model, we can use existing unit testing frameworks (e.g. PyUnit or CxxUnit), and we can test whether the resulting implementation behaves according to our expectations instead of being forced to fight the false positives mentioned above. However, choosing this alternative will likely not result in a high level of test coverage.
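For the second alternative, a behavioural test against the generated implementation could look roughly like this with PyUnit; the generator command line, the big-test.model file and the generated traffic_light module are all made-up names that would need to be replaced by the real ones:

import subprocess
import sys
import unittest
from pathlib import Path

GENERATED_DIR = Path("generated")  # output directory of the M2T transformation

class GeneratedImplementationTest(unittest.TestCase):
    """Conventional behavioural tests against the code generated from the
    one big test model."""

    @classmethod
    def setUpClass(cls):
        # Assumed command-line entry point of the generator; adapt to the
        # actual tool chain.
        subprocess.run(
            [sys.executable, "run_generator.py", "big-test.model", str(GENERATED_DIR)],
            check=True)
        sys.path.insert(0, str(GENERATED_DIR))

    def test_state_machine_behaviour(self):
        # traffic_light is a hypothetical module produced by the generator.
        import traffic_light
        machine = traffic_light.TrafficLight()
        machine.handle_event("timer_expired")
        self.assertEqual("green", machine.current_state)

if __name__ == "__main__":
    unittest.main()

For C/C++ targets the setUpClass step would instead compile the generated sources, and the tests would drive the resulting binary, e.g. through CxxUnit.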
It is my guess that the second alternative is good enough for us and provides the best benefit / effort ratio. What do you think about this? Is there a better alternative that I've missed?
Best regards,
Niels Brouwers.
Re: Best practice to test M2T transformation [message #656714 is a reply to message #656416]
Mon, 28 February 2011 08:53
Hi Niels,
My opinion would be to go for your first approach:
"Creating a framework which automatically executes the transformation for a set of test models and compares the output of the transformation with a set of reference texts."
With this approach, you can also test how your generator handles an invalid input model. For example, if you are generating Java source code from UML and you have two classes with the same name in the same package, you could test that your generator has created a log file with a message indicating that your model contains a problem. The same goes for a Java class with two identical methods, or anything else that would not compile.
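For illustration, such a negative test could look roughly like the following PyUnit sketch; the run_generator helper, the invalid test model and the log file name are assumptions that would have to be adapted to the actual generator:

import unittest
from pathlib import Path

# Placeholder helper that runs the generator on a model file (see the
# sketches in the previous post).
from generator_harness import run_generator

class InvalidModelTest(unittest.TestCase):
    """Negative test: the generator must report problems in the input model
    instead of silently producing broken code."""

    def test_duplicate_class_names_are_reported(self):
        # Assumed test model containing two classes with the same name in
        # the same package.
        run_generator(Path("test-models/invalid/duplicate-classes.model"))
        log = Path("generation.log").read_text()  # assumed log file location
        self.assertIn("duplicate class name", log.lower())

if __name__ == "__main__":
    unittest.main()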
Stephane Begaudeau, Obeo
--
Twitter: @sbegaudeau
Acceleo wiki: http://wiki.eclipse.org/Acceleo
Blogs: http://stephanebegaudeau.tumblr.com & http://sbegaudeau.tumblr.com