The "Christmas refactoring" [message #1819039]
Mon, 06 January 2020 06:47
Kristof Szabados
Registered: July 2015
Let me give some background for our commit just before Christmas ("Christmas refactoring"), as it is architecturally quite a large change in our tests.
In short: instead of running the encoder/decoder tests of the function_test folder from a script, we now have them generated ahead of time, so they can take advantage of newer hardware architectures and speed up our testing.
And the speedup is quite visible even on my laptop:
- BER tests drop from ~122 minutes to ~2 minutes.
- XER tests drop from ~21 minutes to ~20 seconds.
- RAW tests drop from ~8 minutes to ~1 minute.
- TEXT tests drop from ~3 minutes to ~14 seconds.
In our internal Continuous Integration systems, this meant a reduction from ~5 hours to ~2 hours for executing all tests in both runtime modes.
For the long explanation I need to give some historical background.
In the beginning these scripts were used to ensure that no test could have side effects on other tests, and to generate special reports on the test results.
The separation was ensured by the script generating a temporary TTCN-3 file for each test individually, compiling it, executing it ... with no information left behind from the previous tests.
This made debugging very efficient when we found an issue, since each test was only a few lines of code and its result could not be tainted by earlier tests.
BUT this also meant a long execution time.
As each test (a few lines of TTCN-3/ASN.1 code) was generated, compiled, and executed separately ... the hard drive was accessed 8 times for each test, which was quite slow.
(generating the TTCN-3 file, reading the TTCN-3 file and generating the .hh/.cc files, reading the .hh/.cc files and generating the .o files, reading the .o files and generating the binary, executing the binary)
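The per-test flow described above can be sketched roughly as a shell loop. This is a hypothetical illustration only: the test names, the generated module contents, and the commented-out build steps are placeholders, not the actual script or the real Titan command lines.

```shell
# Hypothetical sketch of the original per-test loop; the commented-out
# steps stand in for the real compile/link/run tool invocations.
n=0
for t in int_enc_1 int_enc_2 charstring_enc_1; do
  # 1) write a temporary TTCN-3 module of a few lines
  printf 'module %s {\n  /* one small encoder/decoder test */\n}\n' "$t" > "$t.ttcn"
  # 2) compile "$t.ttcn" into .hh/.cc files     (placeholder step)
  # 3) compile the .cc files into .o files      (placeholder step)
  # 4) link the .o files and execute the binary (placeholder steps)
  rm -f "$t.ttcn"   # leave nothing behind for the next test
  n=$((n+1))
done
echo "ran $n tests"   # prints "ran 3 tests"
```

Every iteration pays the full write-compile-link-run disk cost for a handful of lines of code, which is exactly the overhead the merging and the later refactoring removed.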
Later we merged some of the tests that were operating on the very same types (for example, those testing integer values).
This came out of an important observation: by that time these encoders/decoders had not produced any errors for years (the code was mature enough).
As such, the perfect separation of tests, which enabled optimal debugging, started to lose its value, but we were still paying for it with execution time.
Merging these tests made debugging only a little worse: in case of an error, we would have to start debugging from a larger initial amount of code.
BUT it also meant fewer accesses to the hard drive. Even though more data was read at once, this was still faster overall (HDDs have a hard time spinning up, but at 50-60 MB/s reading speed it does not really matter whether the compiler needs to read 100 or 500 characters more).
The "Christmas refactoring"
Two big changes in the environment led to this commit.
The first change in the environment is the spread of massively multi-core CPUs.
The original script, to eliminate side effects, forced the compilation of the tests to be sequential. This also happened to be optimal when laptops generally had 1-2 cores.
BUT in the era of 4-core / 8-thread laptops (and with 64-core server processors available soon) ... it is just no longer optimal.
(plus, there is no need for the special reports anymore ... knowing that all tests for an encoder/decoder still pass is generally enough)
For this reason, in the refactoring I created a separate TTCN-3 file for each test from the BER/RAW/TEXT/XER scripts.
This essentially means running the exact same code and expecting the same output from each test (hence "refactoring"), while being able to compile the files in parallel.
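A minimal sketch of the gain: once every test lives in its own file with its own build rule, a plain "make -j" can compile them in parallel across all cores. The Makefile below is an illustrative stand-in, not the one Titan generates (.RECIPEPREFIX is used to avoid literal tabs and requires GNU make 3.82+).

```shell
# Stand-in Makefile with one independent rule per generated test file.
cat > Makefile.demo <<'EOF'
.RECIPEPREFIX = >
TESTS := t1 t2 t3 t4
all: $(TESTS)
$(TESTS):
> @echo built $@
EOF
# Run one job per available CPU core; independent rules build in parallel.
make -f Makefile.demo -j"$(nproc)" all > build.log
cat build.log
```

With the old sequential script, the four rules would have run one after another; with -j they run concurrently, which is where the order-of-magnitude speedups above come from.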
Please note that, in theory, this reopens the possibility of side effects between tests.
The second change in the environment is the better tooling we have developed since then. The Refactoring plugin has a feature for extracting a definition, with only its needed dependencies, into a new project.
This allows us to easily create a minimal code example that reproduces a bug for debugging, should that be needed.