determine elapsed time since creation of timer?
Re: determine elapsed time since creation of timer? [message #1776375 is a reply to message #1776373]
Wed, 15 November 2017 19:49
I think this should work...but I don't have an environment on which to test it. Plus, Peter will probably come back with a better solution... ;-)
First, you need to store the timerID, which is returned when you set a timer.
From that timerID, you can get the timer using "timerID.getTimer()".
From this timer, you can get the due time ("timer.due") as a UMLRTTimespec.
You can then use the operations defined on UMLRTTimespec to calculate the elapsed or remaining time.
If you store the start time, you can then calculate the elapsed time.
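A minimal sketch of this recipe, assuming the API behaves as described above (the attribute names 'timerId' and 'startTime', the timer port name 'timing', the 5-second delay, and getTimer() returning a pointer are all illustrative assumptions):
// Capsule attributes (types from the RTS model library):
//   UMLRTTimerId timerId;     // returned when the timer is set
//   UMLRTTimespec startTime;  // wall-clock time when the timer was set
// When setting the timer, remember its id and the start time:
timerId = timing.informIn(UMLRTTimespec(5, 0)); // e.g. a 5 s timer
UMLRTTimespec::getclock(&startTime);
// Later, e.g. in some transition, compute elapsed and remaining time:
UMLRTTimespec now;
UMLRTTimespec::getclock(&now);
UMLRTTimespec elapsed = now - startTime; // time since the timer was set
UMLRTTimespec remaining = timerId.getTimer()->due - now; // time until it fires (assumes getTimer() returns a pointer)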
/Charles (wishing he had more fun time to play with such things... :-(
/Charles Rivet
Re: determine elapsed time since creation of timer? [message #1776378 is a reply to message #1776375]
Wed, 15 November 2017 20:08
Ernesto Posse Messages: 438 Registered: March 2011
Senior Member
(UML-RT is not kiltera ;))
Currently the way to do this is using UMLRTTimespec. In your capsule, define an attribute (say "time") of type UMLRTTimespec (from the RTS model library). Then, in action code, you set a timer with something like this:
timing.informIn(UMLRTTimespec(sec, nsec)); // sec and nsec could be constants or attributes of type long.
UMLRTTimespec::getclock(&time); // here time is the attribute to keep track of time.
Then, in the action triggered by the message you could have something like this:
UMLRTTimespec now;
UMLRTTimespec::getclock(&now);
UMLRTTimespec diff = now - time;
And then you can access the timespec attributes (tv_sec and tv_nsec of type long) or use other operations from UMLRTTimespec.
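For example, to print the difference in the same format as the traces later in this thread (a sketch; assumes <cstdio> is available in the action code):
printf("Pinger: diff sec: %ld\n", (long)diff.tv_sec);   // whole seconds
printf("Pinger: diff nsec: %ld\n", (long)diff.tv_nsec); // remaining nanoseconds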
And as Charles points out, you can use the UMLRTTimerId to get a UMLRTTimer from which you can get the due time. This can be useful as you can pass around the UMLRTTimerId between capsules.
[Updated on: Wed, 15 November 2017 20:10]
Re: determine elapsed time since creation of timer? [message #1776549 is a reply to message #1776378]
Fri, 17 November 2017 15:10
Juergen Dingel Messages: 42 Registered: January 2015
Member
Thanks, Ernesto and Charles.
This appears to work as you say on Linux.
However, on Windows under Cygwin, 'getclock' does not seem to work right.
Imagine the PingPong example and suppose you grab the current time when the very first 'ping' is sent in the initial transition of the Pinger (i.e., the first code snippet in Ernesto's response is executed). Then the current time is grabbed again when a 'pong' arrives (i.e., the second code snippet is executed in the transition triggered by 'pong'). On Windows/Cygwin, the difference is always 0:
$ ./TopMain.exe
Controller "DefaultController" running.
Starting game
ping sent!
Ponger is ready
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1510930750
Pinger: startTime nsec: 953561000
Pinger: now sec: 1510930750
Pinger: now nsec: 953561000
Pinger: diff sec: 0
Pinger: diff nsec: 0
Pinger: ping sent!
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1510930750
Pinger: startTime nsec: 953561000
Pinger: now sec: 1510930750
Pinger: now nsec: 953561000
Pinger: diff sec: 0
Pinger: diff nsec: 0
Pinger: ping sent!
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1510930750
Pinger: startTime nsec: 953561000
Pinger: now sec: 1510930750
Pinger: now nsec: 953561000
Pinger: diff sec: 0
Pinger: diff nsec: 0
Pinger: ping sent!
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1510930750
Pinger: startTime nsec: 953561000
Pinger: now sec: 1510930750
Pinger: now nsec: 953561000
Pinger: diff sec: 0
Pinger: diff nsec: 0
Pinger: ping sent!
Ponger: Ping received!
Ponger: I'm done
Again, on Linux the behaviour is what you would expect, suggesting that 'getclock' does not work right on Windows/Cygwin...
Since 'getclock' is probably used elsewhere, I wonder what this means for the timer service on Windows in general...
Juergen
Re: determine elapsed time since creation of timer? [message #1776560 is a reply to message #1776549]
Fri, 17 November 2017 17:18
Ernesto Posse Messages: 438 Registered: March 2011
Senior Member
Hmmm... interesting.
The timing service, and getclock in particular, depends on the underlying platform's timing service. The RTS was developed with Linux as its main target, so it has been less thoroughly tested on other platforms, including Windows.
This is only a guess, but I think it is possible that Cygwin's implementation of the standard time services is incorrect. In the past I've encountered cases where Cygwin's libraries were incomplete or yielded incorrect results. In the Linux implementation, getclock uses "gettimeofday" from the standard <sys/time.h> header. On Windows it uses "GetSystemTime" from "Windows.h". So maybe Cygwin's implementation of that function is incorrect? Or, if the Cygwin build uses the Linux headers, its implementation of "gettimeofday" might be incorrect.
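For reference, here is a rough sketch of what the POSIX-side call amounts to (my illustration, not the actual RTS source; 'getclock_sketch' is a made-up name):
#include <sys/time.h>
// Fill a seconds/nanoseconds pair from gettimeofday (illustrative name,
// not the RTS function). Note that gettimeofday reports microseconds at
// best, so the nanosecond field can never move in steps finer than 1000 ns.
static void getclock_sketch(long & sec, long & nsec)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);   // second argument (timezone) is obsolete
    sec  = tv.tv_sec;
    nsec = tv.tv_usec * 1000L; // microseconds -> nanoseconds
}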
Unfortunately I cannot think of a workaround at the moment.
[Updated on: Fri, 17 November 2017 17:19]
Re: determine elapsed time since creation of timer? [message #1776710 is a reply to message #1776701]
Mon, 20 November 2017 15:56
Peter Cigehn Messages: 49 Registered: September 2014
Member
Ernesto Posse wrote on Mon, 20 November 2017 15:07:
That validation rule is a bit overzealous. You can generate code and compile it without worrying about that error.
I would not really say that the validation rule is overzealous. The RTCppProperties profile should probably give the stereotype attributes that actually are optional multiplicity [0..1], i.e., clearly indicate that they are optional and can be left unset. As it stands, several stereotype attributes in the RTCppProperties profile have multiplicity [1], which causes this validation rule to kick in when a model is validated with those stereotype attributes left unset. But other attributes do have multiplicity [0..1], so perhaps just a few attributes have been overlooked.
Anyway, maybe something to be considered as a future improvement to the RTCppProperties profile.
/Peter Cigéhn
[Updated on: Mon, 20 November 2017 16:00]
Re: determine elapsed time since creation of timer? [message #1776737 is a reply to message #1776713]
Mon, 20 November 2017 22:45
Juergen Dingel Messages: 42 Registered: January 2015
Member
Ernesto, Charles, and Peter,
I was able to deal with the validation error by changing the value of the 'sourceFile' attribute
in 'UtilityMacros' from 'NULL' to ' ' (i.e., a space character).
Also, thanks for the model, because it shows me how to pass command-line
arguments into a model execution --- a question that I've had for a long time.
Ernesto's model appears to work fine on Windows and exhibits the expected behaviour,
but only if the difference between the two arguments is on the order of seconds:
$ ./TopMain.exe -u 1 2
Controller "DefaultController" running.
[17490 d; 22 h; 18 m; 46 s; 757 ms][Top:Top::(no role)[0]](State1) starting
[Top](State1) 2 user arguments provided
[Top](State1) user-provided delay = 1
[Top](State1) delay used = 1 sec
[17490 d; 22 h; 18 m; 46 s; 757 ms][Top.capsule1:Capsule1::capsule1[0]](State1) waiting
[Capsule1](State1) 2 user arguments provided
[Capsule1](State1) user-provided delay = 2
[Capsule1](State1) delay used = 2 sec
[17490 d; 22 h; 18 m; 47 s; 787 ms][Top:Top::(no role)[0]](State2) sending message to 'capsule1'
[17490 d; 22 h; 18 m; 47 s; 787 ms][Top.capsule1:Capsule1::capsule1[0]](State2) message received
Unexpected message to capsule instance Top.capsule1 role capsule1 on port timing protocol Timing signal timeout
If the difference is less than a second, Capsule1 ends up in state 'State3':
$ ./TopMain.exe -u 1 1.99
Controller "DefaultController" running.
[17490 d; 22 h; 20 m; 42 s; 795 ms][Top:Top::(no role)[0]](State1) starting
[Top](State1) 2 user arguments provided
[Top](State1) user-provided delay = 1
[Top](State1) delay used = 1 sec
[17490 d; 22 h; 20 m; 42 s; 795 ms][Top.capsule1:Capsule1::capsule1[0]](State1) waiting
[Capsule1](State1) 2 user arguments provided
[Capsule1](State1) user-provided delay = 1.99
[Capsule1](State1) delay used = 1 sec
[17490 d; 22 h; 20 m; 43 s; 825 ms][Top:Top::(no role)[0]](State2) sending message to 'capsule1'
[17490 d; 22 h; 20 m; 43 s; 825 ms][Top.capsule1:Capsule1::capsule1[0]](State3) timed out
Unexpected message to capsule instance Top.capsule1 role capsule1 on port p protocol Protocol1 signal m
When running the attached model (which produced the output in my previous post) on Linux,
the 'getclock' service shows a clock advance after just one round of 'ping, pong'.
On Windows under Cygwin, at least 150 rounds are needed before a clock advance between
the very first 'ping' and the very last 'pong' can be detected.
So, the value of 'getclock' on Windows/Cygwin seems reliable only in the seconds range.
Thanks for your help!
Juergen
Re: determine elapsed time since creation of timer? [message #1776873 is a reply to message #1776802]
Wed, 22 November 2017 12:11
Juergen Dingel Messages: 42 Registered: January 2015
Member
Ok, the new model is more precise. On Windows, it appears to be able to tell the difference between 0.98 sec and 1 sec reliably. For
$ ./TopMain.exe -u 0.983 1
I get non-deterministic behaviour, i.e., sometimes the end state is 'State2', sometimes 'State3'.
Ok, but now I still don't know why, in my model (attached previously), a non-zero delay can be measured on Linux after just one round of 'ping, pong', while on Windows I appear to need over 150 rounds (see above).
And, no, the model uses 'getclock'.
Juergen
Re: determine elapsed time since creation of timer? [message #1776911 is a reply to message #1776896]
Wed, 22 November 2017 17:53
Ernesto Posse Messages: 438 Registered: March 2011
Senior Member
After some experiments, I also observed the non-determinism on Linux, but on Linux it happens at a higher precision than on Windows/Cygwin. Like you, I found that on Cygwin a difference of less than 0.02 seconds caused a race; on Linux the difference had to be down to about 0.001 seconds for the race to occur.
This might indeed be dependent on the underlying platform. When building on Cygwin, the build is treated as if it were on Linux, i.e., using Cygwin's Linux-like toolchain (gcc + make) rather than Visual Studio. This means that the underlying libraries used are Cygwin's libraries, which are supposed to be POSIX-compliant. For this toolchain, getclock uses "gettimeofday" from <sys/time.h>. According to the official POSIX specs [1]:
Quote:
The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.
So it is indeed possible that on Cygwin that resolution is different.
Now, with respect to your model, I think it's not a bug, but it's actually expected behaviour, since there is no delay between the pinger sending a ping and getting a pong back. Well, there is a delay, but it is pretty minimal, and the low resolution of gettimeofday may explain why it shows 0 nsec: the time it takes the runtime to deliver the message to the receiver (both of which are on the same controller thread) and execute the associated actions on the receiver (which also sends a message back) is probably smaller than the system's gettimeofday resolution.
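One way to test this theory is a small standalone probe (my own sketch, independent of the RTS) that spins until gettimeofday reports a new value, giving a lower bound on the clock's effective resolution:
#include <stdio.h>
#include <sys/time.h>
int main(void)
{
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    // Busy-wait until the reported time actually changes.
    do {
        gettimeofday(&t1, NULL);
    } while (t1.tv_sec == t0.tv_sec && t1.tv_usec == t0.tv_usec);
    long step = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    printf("smallest observed clock step: %ld usec\n", step);
    return 0;
}
On a platform where this prints a step in the tens of milliseconds, a single ping/pong round would indeed measure as 0.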
To check, I modified the model (see attachment) by adding a response delay in Ponger: when Ponger gets a message, it goes into a new HitBack state where it sets a timer for half a second (5e+8 nsec), and only then sends a pong back. The resulting trace can be seen below. Note that the value of 'diff' increases as expected (the difference is measured from the start time, and startTime is never updated, so diff takes the values 0.5 sec, 1 sec, 1.5 sec, 2 sec).
tester@eposse-win7vm /cygdrive/c/Users/tester/Desktop/Papyrus-RT 1.0/ws1/TimeElapsedExample1_CDTProject/build
$ ./TopMain.exe
Controller "DefaultController" running.
Starting game
ping sent!
Ponger is ready
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1511372026
Pinger: startTime nsec: 134914000
Pinger: now sec: 1511372026
Pinger: now nsec: 634914000
Pinger: diff sec: 0
Pinger: diff nsec: 500000000
Pinger: ping sent!
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1511372026
Pinger: startTime nsec: 134914000
Pinger: now sec: 1511372027
Pinger: now nsec: 134914000
Pinger: diff sec: 1
Pinger: diff nsec: 0
Pinger: ping sent!
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1511372026
Pinger: startTime nsec: 134914000
Pinger: now sec: 1511372027
Pinger: now nsec: 634914000
Pinger: diff sec: 1
Pinger: diff nsec: 500000000
Pinger: ping sent!
Ponger: Ping received!
Ponger: pong sent!
Pinger: Pong received!
Pinger: startTime sec: 1511372026
Pinger: startTime nsec: 134914000
Pinger: now sec: 1511372028
Pinger: now nsec: 134914000
Pinger: diff sec: 2
Pinger: diff nsec: 0
Pinger: ping sent!
Ponger: Ping received!
Ponger: I'm done
Unexpected message to capsule instance Top.pinger role pinger on port timing protocol Timing signal timeout
tester@eposse-win7vm /cygdrive/c/Users/tester/Desktop/Papyrus-RT 1.0/ws1/TimeElapsedExample1_CDTProject/build
$
If my theory about the actions taking negligible time w.r.t. the timing resolution is correct, I wonder why the time difference would show up on Linux in the original model. Can you share a trace of the original model on Linux so we can see what kind of differences you are observing there?
[1] http://pubs.opengroup.org/onlinepubs/9699919799/
[Updated on: Wed, 22 November 2017 18:05]
Re: determine elapsed time since creation of timer? [message #1777142 is a reply to message #1777136]
Fri, 24 November 2017 22:32
Ernesto Posse Messages: 438 Registered: March 2011
Senior Member
I think you may be misinterpreting the result. The startTime variable is never updated, so each time the difference is computed, it is computed from the first time the clock was read; hence diff takes the values of approximately 0.5 sec, 1 sec, 1.5 sec, 2 sec. With less precision, the error accumulates.
Granted, my measurements seem a bit too precise, and yours are approximately similar, albeit less precise; but the error is not on the order of 0.5 sec.
To be sure, we should update the model to reset startTime each time a ping is received. Then we could get some difference, but it shouldn't accumulate.
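Concretely, that would be one extra line in the Pinger transition that sends each ping (a sketch; 'startTime' is the capsule attribute from the earlier posts, and the port/signal names are illustrative):
UMLRTTimespec::getclock(&startTime); // reset the reference point on every ping
pingPort.ping().send();              // then send the ping as before (names illustrative)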
Nicolas, which Linux are you using? And what are your CPU and clock speed?
Re: determine elapsed time since creation of timer? [message #1777697 is a reply to message #1777687]
Fri, 01 December 2017 17:06
Ernesto Posse Messages: 438 Registered: March 2011
Senior Member
Indeed, Windows 7 is not very precise. I would add a few more conclusions:
1) Timing behaviour depends on the underlying platform.
2) Neither Windows Vista nor 7 (nor even the more precise 8 and 10), nor standard Linux, nor macOS is a real-time operating system.
3) UML-RT is for soft real-time systems, so modellers should understand that timing expectations are approximate, not exact.
4) Atomic actions can generally be considered to take a negligible amount of time, typically less than the resolution of the underlying platform and runtime's clock (we do not yet support any RTOS). Of course, if you execute a lot of atomic actions, you will eventually notice that their execution time does accumulate.
5) The above applies to the "send" action as well, but since this action performs asynchronous message sending, the modeller cannot make any assumptions about how much time it will take to deliver the message. In small examples its execution may appear instantaneous, but depending on the scale and the underlying platform, the delivery time may not be negligible.
6) If the intent of the modeller is to observe the time difference between the occurrence of events, this should be explicitly modelled using timers (see the sketch below).
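As an illustration of point 6, one could let a periodic timer drive the measurement instead of reading the clock (a sketch, assuming the timing service offers a periodic informEvery; all names are illustrative):
// Set a periodic timer, e.g. one tick every 100 ms
// (informEvery: periodic variant, assumed available in the RTS timing service):
UMLRTTimerId tick = timing.informEvery(UMLRTTimespec(0, 100000000));
// In the transition triggered by each timeout:
ticks++; // elapsed time between two events ~= difference in tick counts * 100 ms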
Anyway, it's been an interesting issue. Perhaps we should put this list in the wiki somewhere.
[Updated on: Fri, 01 December 2017 17:07]