Re: semTake .. millisecond/microsecond waits

ma740988@xxxxxxxxx wrote:
Jeffrey Creem wrote:

ma740988@xxxxxxxxx wrote:

I can't really be more helpful because it is not entirely clear where
you are headed with this.

The semTake timeout parameter doesn't provide the granularity I need.
If I did if ( semTake ( xyz_sema, sysClkRateGet() ) == OK ), that
implies a 1 second wait ( assuming 60 ticks ). The wait is too long.
So the question is: how do I get semTake to 'wait', say, 100 ms?

In case it is not clear, another poster (Patrick) replied with something close to the real answer.

So let's back up a little.

Assuming the clock rate is 60 Hz, then had you done a

semTake (xyz, 1)

the semTake would return on a timeout between a min of 0 and a max of 0.0166666... seconds later.

If you did a semTake (xyz, 2) it would return in a min of 0.0166666 and a max of 0.0333 seconds later...

The second parameter says how many system clock tick interrupts must happen before the call times out (and of course, if someone gives the semaphore before the timeout, semTake returns immediately without waiting for the timeout).

Since you can't (generally) know how far away the next clock tick is, the time you actually wait can be up to just under 1 tick less than what you asked for.


Tick         Tick         Tick         Tick         Tick
 |------------|------------|------------|------------|
            ^ ^
            | |
            | +-- vs. if it happened here (just after a tick), you
            |     would get almost one full tick's time.
            |
            +-- If you did the semTake at this point in time (just
                before a tick) with a timeout of 1 tick, you would
                return in substantially less than 1 tick's time,

So, back to your original case: if you want to time out only after you are 100% sure that at least 100 msec have gone by, then

semTake(mySem, sysClkRateGet()/10 + 1)

should do it. With the default system clock rate of 60 Hz, this will time out after a minimum of 0.100 s and a maximum of 0.116666... s.

If that level of granularity is too coarse, then set up your system clock rate to a higher level in the BSP.
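If you do raise the rate, the change itself is a one-liner (a sketch: sysClkRateSet() is the standard sysLib call, but how high you can safely go depends on the BSP's hardware timer and on the overhead of the extra tick interrupts, so treat the value here as an example, not a recommendation):

```c
/* Sketch: raise the system clock from the default 60 Hz to 1000 Hz
 * so semTake timeouts get ~1 ms granularity.  Usually done in the
 * BSP or early in startup rather than at arbitrary points, since
 * it affects everything driven by the system tick (taskDelay,
 * watchdogs, round-robin slicing, ...). */
#include <sysLib.h>

void raiseClockRate(void)
{
    sysClkRateSet(1000);   /* 1000 ticks per second */
    /* At 1000 Hz, sysClkRateGet()/10 + 1 == 101 ticks, i.e. a
     * minimum of 100 ms and a maximum of 101 ms. */
}
```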

Note that whatever level you set it to, you must (when using this approach) be willing to deal with the 1 tick of uncertainty in the time.

When working with timeout values for things like DMA checking, this is almost always fine.

The example you linked to in your first message was much more specific than this case: it was dealing with what one should do when one needs much more exact, fine-grained delays, potentially without the additional uncertainty of a context switch.

While there are certain things on certain devices that will require something like the busy-wait approach, they are usually the exception, and almost certainly the result of an extremely poor hardware design, unless (perhaps) the timing involved is on the order of a very small number of microseconds.