Eclipse Community Forums: Eclipse Titan
Measuring time / timer accuracy [message #1807333] Tue, 28 May 2019 10:13
Harald Welte

In some of our tests, we want to ensure that a certain number of events happens within a given time frame.

Let's assume we have a TDMA system like GSM. We start a TTCN3 timer, let it run for 5 seconds and sample the number of TDMA frames expiring within that interval. We then verify that the expected number of frames have occurred, +/- some tolerance.

The problem now is that the timers of the TITAN runtime are not very precise, particularly not if the system running the test suite is under some load. So we request a 5-second timer, but we might get quite a bit more than 5 seconds (let's say 6 s for the sake of the argument), and hence the number of TDMA frames is much higher than expected.

This problem would be easy to solve if one could measure the time that the timer *actually* took. Then I could compute the number of TDMA frames expected for the actual duration of the timer, and everything would work out. After all, I don't care whether it's 5 or 6 s, I just care about waiting for some time and knowing how much time actually passed.

I'm surprised I couldn't find any standard way to determine this information from a TTCN3 timer. Am I missing something? What is the recommended way to approach this? The "stupid" solution would be to have a C++ native function that calls gettimeofday() or, even better, clock_gettime(CLOCK_MONOTONIC), invoke that function from my TTCN3 code before starting the timer and after it expires, and then do the subtraction. However, that would of course not be very elegant, and not at all related to TTCN3.
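
For illustration only, such a workaround might look roughly like the sketch below. f_clock_monotonic is a hypothetical external function name (not an existing TITAN or Osmocom API); its C++ implementation would call clock_gettime(CLOCK_MONOTONIC) and return the result as a float:

external function f_clock_monotonic() return float;

// inside a test case:
timer T := 5.0;
var float t_start := f_clock_monotonic();
T.start;
T.timeout;
// actual duration; may be noticeably more than 5.0 s on a loaded host
var float t_actual := f_clock_monotonic() - t_start;
// scale the expected TDMA frame count by t_actual instead of the nominal 5.0 s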

Another, slightly related question: there's a very common pattern in protocol testing which goes like this (a minimal sketch follows the list):


  1. send a message
  2. start a timer
  3. wait for a response or expiration of the timer (using 'alt' in TTCN3)
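
In TTCN-3 this typically looks roughly like the following (a minimal sketch; the port, timer and template names are placeholders):

// assuming a message port P and templates m_request / mw_response
timer T_guard;
P.send(m_request);            // 1. send a message
T_guard.start(2.0);           // 2. start a timer
alt {                         // 3. wait for a response or expiration of the timer
  [] P.receive(mw_response) { T_guard.stop; setverdict(pass); }
  [] T_guard.timeout { setverdict(fail, "no response received in time"); }
}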


I have the feeling (no hard evidence, but experience from running related tests for many months on a cluster of build hosts) that, particularly on loaded systems running the test suite, there can be quite some delay between '1' and '2', and I'm actually waiting considerably longer than I wanted to wait. Or, if I first start the timer and then send the message, the timer expires before it is actually supposed to.

So what would be useful is some way to send a message on a port and start a timer simultaneously. Is there any such provision in TTCN3 or TITAN?

Thanks in advance.

Regards,
Harald
Re: Measuring time / timer accuracy [message #1807335 is a reply to message #1807333] Tue, 28 May 2019 10:27
Gábor Szalai
The timer timeout precision depends on the OS scheduling and the precision of the timeout of the poll/epoll system calls.

You can measure the elapsed time using timers:

Start a timer with a very long timeout.
Read the timer value at the start and at the end of the measurement period.
The difference of the two values is the elapsed time in seconds.

The precision of this method is the same as calling clock_gettime.
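
In TTCN-3 terms this amounts to reading a long-running timer, roughly as in the sketch below (the 3600.0 s default duration is an arbitrary assumption; it only has to exceed the measurement period):

timer T_clock := 3600.0;               // long timeout, the timer is only used as a clock
T_clock.start;
var float t_begin := T_clock.read;     // seconds elapsed since T_clock.start
// ... measurement period, e.g. counting TDMA frame indications ...
var float t_end := T_clock.read;
var float elapsed := t_end - t_begin;  // elapsed time in seconds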

Re: Measuring time / timer accuracy [message #1807341 is a reply to message #1807335] Tue, 28 May 2019 10:47
Elemer Lelik
Hi Harald,

a few reflections on the topic:

- TTCN-3 being abstract, it assumes that all resources are infinite, so no load should influence timer precision. Of course this might not be the case in real-life situations. We have been contemplating using real-time OSs (possibly assisted by HW) to improve timer precision, but there has been no concrete use case so far, as Titan is not used that much on the radio interface.

- Take a look at Bug 539514 - Real-time testing in TITAN:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=539514
Note that, to be usable, this also assumes modifications in the test ports.

See also the Eclipse Titan release 6.5.0 and intermediate release 6.5.1 announcements:
https://www.eclipse.org/forums/index.php/t/1097206/



"Added compiler option '-I', which enables the real-time testing features mentioned here.
The features are disabled by default, and the new keywords ('now', 'realtime' and 'timestamp') can be used as identifiers again (for backward compatibility).

Also added makefilegen option '-i', which activates this option for the compiler, and the makefile setting 'enableRealtimeTesting' in the TPD, which does the same thing.


4.37. real-time testing features

TITAN supports the real-time testing features described in chapter 5 of the TTCN-3 standard extension TTCN-3 Performance and Real Time Testing (ETSI ES 202 782 V1.3.1, [26]) with the differences and limitations described in this section.
• The real-time testing features are disabled by default. They can be activated with the compiler command line option -I.
• The stepsize setting is not supported. The symbol now always returns the current test system time with microsecond precision (i.e. 6 decimal precision).
• While the compiler allows timestamp redirects (→ timestamp) to be used in all situations indicated by the standard, they only store a valid timestamp if one was stored in the user defined port's C++ code (as specified in the API guide, [16]). Timestamp redirects on connected (internal) ports don't do anything, and the referenced variable remains unchanged (see
example below).
• Timestamp redirects also work on ports in translation mode, as long as both the translation port type and the provider port type have the realtime clause. Timestamp redirects can also be used inside translation functions (i.e. the port.send and port.receive statements can also have
timestamp redirects).
• The 'wait' statement is not supported.


The significance of this new feature is the following: it permits a more accurate measurement of message turnaround time, network latency and the like, as the timestamping of sent/received messages can be moved from the ATS (Abstract Test Suite) to the test port or even the IP stack of the operating system.
"

- The real-time extension of the language describes a central clock that can be used to measure time intervals; its implementation is in our backlog, but a long-timeout timer as described by Gábor can be used to the same effect.
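
As an illustration of the timestamp redirect quoted above, usage might look roughly like this once the code is compiled with '-I' (a minimal sketch; the port type, message type and variable names are placeholders, and a valid timestamp is only delivered if the test port's C++ code actually sets it):

type port DataPort message realtime {
  inout octetstring;
}

// in a test case, on a port instance P of type DataPort:
var float rx_time;
P.receive(octetstring: ?) -> timestamp rx_time;
log("message received at test system time ", rx_time, " s");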



Best regards
Elemer



Re: Measuring time / timer accuracy [message #1807347 is a reply to message #1807333] Tue, 28 May 2019 11:23
Axel Rennoch
Dear Harald,

You may have a look at the TTCN-3 real-time extension: https://www.etsi.org/deliver/etsi_es/202700_202799/202782/01.03.01_60/es_202782v010301p.pdf

However, I do not know the status of the implementation of this extension in Titan.

Best regards,
Axel
Re: Measuring time / timer accuracy [message #1807375 is a reply to message #1807341] Tue, 28 May 2019 19:34
Harald Welte

Hi Elemer,

thanks for your extensive response, I will look into that.

I don't think using a real RTOS underneath TITAN would necessarily be a good match. One can always use Linux's real-time scheduling policy SCHED_RR and get much better "soft" real-time performance.

But my point here wasn't at all that I need more accurate timers. It's our own "fault" if we run TITAN/TTCN3 tests on a build host with a number of other build or test jobs in parallel, and obviously we cannot expect TITAN to provide better results than the underlying system/hardware/OS allows.

The timestamping feature (-> timestamp redirection) seems more than sufficient for what I'm looking for. Seems all I have to do is verify whether the various test ports we use (UD, SCTP, IPL4, ...) implement setting the timestamp on their C++ side, and send related patches if needed.

I'm looking at UD_PT.cc of the Unix domain socket port first, which doesn't yet pass the timestamp along in its incoming_message() calls. It's easy to add, and I'm happy to do so. The interesting question starts with regard to STREAM sockets and segmented messages: should the timestamp reflect the time when the first segment was received, or the time of the last segment? A brief look through ES 202 782 doesn't seem to provide guidance here; I guess it's left up to the implementation. Would you have any preference? I can find arguments going both ways...

Regards,
Harald
Re: Measuring time / timer accuracy [message #1807399 is a reply to message #1807375] Wed, 29 May 2019 08:40
Elemer Lelik
Hi Harald,

When receiving UDP packets, we can rely on the timestamps that are set by the kernel;

in the case of streaming sockets, such as TCP, the situation is as follows:

The concept of a message is not present in that layer, but in the upper layer that "owns" the message. In case of message segmentation, neither the first segment nor the last makes more sense than the other. I think the correct answer is that the timestamp should reflect the moment when a message is identified and extracted from the stream.

This can be done in the ATS TTCN-3 code when receiving said message, or, somewhat more accurately, in the test port, right after extraction is done.

In this case the timestamping cannot be pushed down from the ATS to the kernel, but only from the ATS to the port.

I hope this makes sense.

One more thing: none of our test ports yet implements support for real-time testing. We are grateful that you are interested in adding this feature.
However, we need to find a mechanism that would be sufficiently versatile to permit switching this feature on and off, as timestamping will decrease the performance of the test ports.

We have some ideas, but we need to take a closer look at them. Briefly, we are thinking along the lines of having distinct port type definitions for the real-time and non-real-time variants of a port.

Thank you and best regards
Elemer










Re: Measuring time / timer accuracy [message #1807406 is a reply to message #1807399] Wed, 29 May 2019 09:30
Harald Welte

Hi Elemer,

Elemer Lelik wrote on Wed, 29 May 2019 10:40

when receiving UDP packets we can rely on the timestamps that are set by the kernel;


Good point. In case somebody is reading this thread, https://www.kernel.org/doc/Documentation/networking/timestamping.txt is the "official" documentation about this. Interestingly, it also appears to be possible for STREAM sockets (e.g. TCP).

Elemer Lelik wrote on Wed, 29 May 2019 10:40

in case of streaming sockets, such as TCP, the situation is as follows:
...
In this case the timestamping cannot be pushed down from the ATS to the kernel, but only from the ATS to the port.

I hope this makes sense.


It would be more than sufficient for my purpose. But at least on the transmit side, it seems one could actually obtain a timestamp of when the entire written message has passed the timestamping point (the kernel driver handing it off to hardware, or a real hardware timestamp).

Quote:

One more thing: none of our test ports yet implements support for real-time testing. We are grateful that you are interested in adding this feature.


FYI: I've created patches to IPL4, UDP, UNIX_DOMAIN and SCTP last night, but didn't have a chance to test them yet.

Quote:

However, we need to find a mechanism that would be sufficiently versatile to permit switching this feature on and off, as timestamping will decrease the performance of the test ports.


Where would you expect the performance hit to appear? gettimeofday() has been super fast on Linux for at least the last decade or so, where it's no longer a real syscall but a vsyscall/vDSO call implemented entirely in userspace.

So if there's a performance hit, it would most likely originate in the way timestamps are represented/passed up between the test port and the ATS (conversion from integer to float and the additional branches for the NULL check of 'timestamp_redirect'?). I would be curious to see some benchmarks/figures around that...

Regards,
Harald
Re: Measuring time / timer accuracy [message #1807411 is a reply to message #1807406] Wed, 29 May 2019 11:20
Gábor Szalai
Performance:

Quote:
This socket option enables timestamping of datagrams on the reception
path. Because the destination socket, if any, is not known early in
the network stack, the feature has to be enabled for all packets. The
same is true for all early receive timestamp options.


Generating timestamps for all network packets has an effect on the network performance of the host, so some configuration option is needed to enable the feature in the test ports.
Re: Measuring time / timer accuracy [message #1807414 is a reply to message #1807411] Wed, 29 May 2019 11:58
Elemer Lelik
Hi Harald,


Quote:

Interestingly, it actually also appears to be possible for STREAM sockets (e.g. TCP).


Just to avoid confusion among the readers: I assume this refers to timestamping TCP packets, whereas we are pursuing timestamping of messages; there is no obvious and direct relationship between TCP packets and the messages of the layer residing on top of the TCP layer, hence the packet timestamps will not help.

In the sending direction, we can add a timestamp in the test port at the point where the message is sent to the kernel.


Best regards
Elemer







Re: Measuring time / timer accuracy [message #1807621 is a reply to message #1807414] Wed, 05 June 2019 08:48
Botond Baranyi
Hi Harald,

About your initial question: I'd suggest using the 'now' operation, which returns the time elapsed since the start of the test case in seconds (as a float value). This is also part of the real-time testing features in TITAN (so it also requires the compiler option '-I').

The exact number of seconds the timer ran could be obtained by something like this:
my_timer.start;
var float start_time := now;
...
my_timer.timeout;
var float elapsed_time := now - start_time;


Hope this helps.

Best regards,
Botond Baranyi