Subject: Re: Jenkins test failures on macOS
From: "Johnson, Andrew N. via Core-talk" <core-talk at aps.anl.gov>
To: Michael Davidsaver <mdavidsaver at gmail.com>
Cc: EPICS core-talk <core-talk at aps.anl.gov>
Date: Tue, 4 Aug 2020 16:09:13 +0000
On Aug 3, 2020, at 9:37 PM, Michael Davidsaver <mdavidsaver at gmail.com> wrote:
>> In the Github issue yesterday I explained that on my laptop, reading the time using the Mach clock_get_time() API takes about 1.6 microseconds and gives nanosecond precision, whereas using clock_gettime() takes only about 58 nanoseconds but gives only microsecond precision (both averaged over 100000 readings by epicsTimeTest). Which is better?
>
> It is likely using the same mechanism underneath. I assume that asking the Mach kernel for the accurate time through the clock_get_time() API requires a system call, which takes a long time, whereas some other mechanism that doesn't need a context switch provides the lower-resolution time-stamp for the other APIs.
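For the record, something like this untested sketch should reproduce that comparison; the 100000-read averaging follows what epicsTimeTest does, everything else here is illustrative:

    /* clockbench.c -- compare the read cost of the two macOS time APIs.
       Build: cc -O2 clockbench.c -o clockbench */
    #include <stdio.h>
    #include <time.h>
    #include <mach/mach.h>
    #include <mach/clock.h>

    #define NREADS 100000

    static double elapsed_ns(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    }

    int main(void)
    {
        struct timespec t0, t1, ts;
        mach_timespec_t mts;
        clock_serv_t cclock;
        int i;

        /* Mach API: each reading is a kernel round-trip */
        host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &cclock);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < NREADS; i++)
            clock_get_time(cclock, &mts);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        mach_port_deallocate(mach_task_self(), cclock);
        printf("clock_get_time(): %.1f ns/read\n",
               elapsed_ns(t0, t1) / NREADS);

        /* POSIX API */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < NREADS; i++)
            clock_gettime(CLOCK_REALTIME, &ts);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("clock_gettime():  %.1f ns/read\n",
               elapsed_ns(t0, t1) / NREADS);

        return 0;
    }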
Interesting, I just noticed the list of clocks on the Darwin manpage for clock_gettime(), which describes exactly that behaviour for CLOCK_REALTIME.
There are other clocks for uptime and for process and thread CPU time that I omitted. It might be interesting to measure how long it takes to read each of these: count the maximum number of times you can read the clock and get exactly the same time, then divide the time increment by that maximum count, as sketched below. Linux has some _COARSE clocks (CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE) that are similarly faster to read.
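An untested sketch of that measurement (the clock IDs are real; the trial count is arbitrary). Spin until the clock value changes, keep the largest number of identical consecutive readings, and divide the tick size by that count:

    /* clockres.c -- estimate ns-per-read by counting how many identical
       readings fit inside one clock tick, as described above. */
    #include <stdio.h>
    #include <time.h>

    static void measure(clockid_t id, const char *name)
    {
        long best = 0;
        double tick = 0;
        int trial;
        /* Repeat and keep the max count, so at least one window
           spans (nearly) a whole tick of the clock. */
        for (trial = 0; trial < 10; trial++) {
            struct timespec prev, cur;
            long count = 0;
            clock_gettime(id, &prev);
            do {
                clock_gettime(id, &cur);
                count++;
            } while (cur.tv_sec == prev.tv_sec &&
                     cur.tv_nsec == prev.tv_nsec);
            if (count > best) {
                best = count;
                tick = (cur.tv_sec - prev.tv_sec) * 1e9
                     + (cur.tv_nsec - prev.tv_nsec);
            }
        }
        printf("%-24s tick %9.0f ns / %7ld reads = %7.1f ns per read\n",
               name, tick, best, tick / best);
    }

    int main(void)
    {
        measure(CLOCK_REALTIME, "CLOCK_REALTIME");
        measure(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
    #ifdef CLOCK_REALTIME_COARSE
        /* Linux only: coarse clocks trade resolution for read speed */
        measure(CLOCK_REALTIME_COARSE, "CLOCK_REALTIME_COARSE");
        measure(CLOCK_MONOTONIC_COARSE, "CLOCK_MONOTONIC_COARSE");
    #endif
        return 0;
    }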
- Andrew
--
Complexity comes for free, simplicity you have to work for.