Re: semTake .. millisecond/microsecond waits



ma740988@xxxxxxxxx wrote:
Jeffrey Creem wrote:

ma740988@xxxxxxxxx wrote:


I can't really be more helpful because it is not entirely clear where
you are headed with this.

The semTake timeout parameter doesn't provide the granularity I need.
If I did if ( semTake ( xyz_sema, sysClkRateGet() ) == OK ) that
implies a 1 second wait ( assuming 60 ticks ). The wait is too long.
As a result the question is: how do I get semTake to 'wait', say, 100 ms?


In case it is not clear, another poster (Patrick) replied with something close to the real answer.

So let's back up a little.

Assuming the clock rate is 60 Hz, then had you done a

semTake(xyz, 1)

the semTake would return on a timeout between a minimum of 0 and a maximum of 0.0166666... seconds later.

If you did a semTake(xyz, 2) it would return in a minimum of 0.0166666 and a maximum of 0.0333 seconds later...

The second parameter says how many system clock tick interrupts must happen before the call times out (and of course if someone gives the semaphore before the timeout, it will return immediately without waiting for the timeout).
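
To make that concrete, here's a minimal sketch (the semaphore name dmaDoneSem and the 2-tick value are just placeholders, not something from your post) of taking a semaphore with a tick-count timeout and telling a timeout apart from success:

#include <vxWorks.h>
#include <semLib.h>
#include <objLib.h>     /* S_objLib_OBJ_TIMEOUT */
#include <errnoLib.h>

/* Hypothetical binary semaphore given from an ISR or another task. */
SEM_ID dmaDoneSem;

STATUS waitForDmaDone (void)
    {
    /* Block until the semaphore is given, or until 2 system clock tick
     * interrupts have occurred (somewhere between roughly 1 and 2 tick
     * periods of real time, as described above).
     */
    if (semTake (dmaDoneSem, 2) == OK)
        return OK;                              /* semaphore was given */

    if (errnoGet () == S_objLib_OBJ_TIMEOUT)
        {
        /* timed out - handle however makes sense for your case */
        }

    return ERROR;
    }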

Since you can't (generally) know how far away the next clock tick is, the time you actually get for the timeout can be up to just under 1 tick shorter than what you asked for.

i.e.

Tick        Tick        Tick        Tick        Tick
          ^
          |-- If you did the semTake at this point in time
              with a timeout of 1 tick, you would return in
              substantially less than 1 tick's time, vs.

     ^
     |-- If it happened here, you would get almost one full tick's time.

So, back to your original case: if you want to time out only after you are 100% sure that at least 100 msec have gone by, then

semTake(mySem, sysClkRateGet()/10 + 1)

should do it. With the default system clock rate of 60 Hz, this will time out after a minimum of 0.100 and a maximum of 0.116666... seconds.
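
If you need the same thing for other delays, a small helper along these lines (my own sketch, not something from semLib) generalizes that: convert milliseconds to ticks, round up, and add the extra tick so the timeout is guaranteed to be at least the requested time:

#include <vxWorks.h>
#include <sysLib.h>

/* Convert a millisecond delay into a semTake timeout in ticks, rounding
 * up and adding 1 tick so at least 'ms' milliseconds are guaranteed to
 * have elapsed when the timeout fires.
 */
int msToTimeoutTicks (int ms)
    {
    int rate = sysClkRateGet ();            /* system clock ticks per second */

    return ((ms * rate) + 999) / 1000 + 1;  /* round up, then +1 tick */
    }

With a 60 Hz clock, msToTimeoutTicks(100) gives 7 ticks, which is exactly the sysClkRateGet()/10 + 1 above.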

If that level of granularity is too coarse, then set up your system clock rate to a higher value in the BSP.

Note that whatever rate you set it to, you must (when using this approach) be willing to deal with the 1 tick uncertainty in the time.

When working with timeout values for things like DMA checking, this is almost always fine.
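
For what it's worth, the clock rate can also be changed programmatically with sysClkRateSet() (normally done once, early in startup); a quick sketch, with 1000 Hz being just an example value rather than a recommendation:

#include <vxWorks.h>
#include <sysLib.h>
#include <stdio.h>

/* Raise the system clock to 1000 Hz so one tick is 1 ms.  A higher rate
 * means proportionally more clock interrupt overhead, so don't push it
 * further than your timing actually requires.
 */
void useMillisecondTicks (void)
    {
    if (sysClkRateSet (1000) != OK)
        printf ("sysClkRateSet failed, rate is still %d Hz\n", sysClkRateGet ());
    }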

The example you linked to in your first message was much more specific than this case: it was dealing with what one should do when much more exact, fine-grained delays are needed, potentially without the additional uncertainty of a context switch.

While there are certain things on certain devices that you may run into that will require something like the busy-wait approach, they are usually the exception, and certainly the result of an extremely poor hardware design unless (perhaps) the timing involved is on the order of a very small number of microseconds.
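
If you do end up in one of those exceptional cases, a busy wait typically looks something like the sketch below. It assumes your BSP provides the optional timestamp driver (sysTimestamp(), sysTimestampFreq(), sysTimestampPeriod(), enabled via sysTimestampEnable()), which not all BSPs do:

#include <vxWorks.h>
#include <sysLib.h>

/* Spin until roughly 'us' microseconds have elapsed, measured against the
 * BSP's free-running timestamp counter.  This burns CPU for the whole wait,
 * and is only valid for delays shorter than one rollover period of the
 * timestamp counter, so keep the delays very short.
 */
void busyWaitUs (UINT32 us)
    {
    UINT32 freq    = sysTimestampFreq ();       /* counter counts per second */
    UINT32 period  = sysTimestampPeriod ();     /* counts until rollover */
    UINT32 target  = (UINT32) (((UINT64) us * freq) / 1000000);
    UINT32 start   = sysTimestamp ();
    UINT32 elapsed = 0;

    while (elapsed < target)
        {
        UINT32 now = sysTimestamp ();

        elapsed = (now >= start) ? (now - start) : (period - start + now);
        }
    }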



