Re: Context Switch Time

xakee wrote:
I read about this interesting question from one of Google's
interviews. The question is how you would calculate the time required
for a context switch in a Unix environment. I know this might sound
off topic, but I think it's closely related to how threading works.
Let me know if I'm off topic; I apologize in advance if so.

I suppose one really rough approximation is to create two processes and
two pipes. Have one process write() a byte to a pipe. Then set both
processes in a loop where they read() a byte from one pipe and then
write() a byte to the other.

Since only one pipe has a byte in it at any time (you're essentially
passing a token back and forth as fast as possible), every read() call
will block until the corresponding write() call completes. And every
write() call will succeed since the pipe will never fill.

So, do this loop some fixed number of times (a million or something),
and count how long it takes.
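
In case a concrete version helps, here's a rough sketch in C of the
basic measurement (untested and Linux-flavored; the iteration count
and the clock_gettime()-based timing are just my choices, not
anything from the original question):

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define ITERATIONS 1000000

int main(void)
{
    int a[2], b[2];       /* a: parent -> child, b: child -> parent */
    char byte = 'x';
    struct timespec start, end;

    if (pipe(a) < 0 || pipe(b) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {
        /* Child: take the token from pipe a, pass it back on pipe b. */
        for (int i = 0; i < ITERATIONS; i++) {
            read(a[0], &byte, 1);
            write(b[1], &byte, 1);
        }
        _exit(0);
    }

    /* Parent: inject the token and bounce it back and forth. */
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {
        write(a[1], &byte, 1);
        read(b[0], &byte, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    wait(NULL);

    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;

    /* Each pass through the loop is one round trip: roughly two
       context switches plus two read()s and two write()s in total
       across the two processes. */
    printf("%d round trips in %.3f s (%.2f us per round trip)\n",
           ITERATIONS, secs, secs / ITERATIONS * 1e6);
    return 0;
}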

Of course, this also measures the time spent in the write() and
read() calls on the pipes themselves. You can factor that out by
doubling the number of pipes: have each process write() one byte to
each of two pipes before it read()s one byte from each of two pipes,
and run this loop half as many times. You're then doing exactly the
same number of system calls (and exactly the same ones), but half as
many context switches, so it should take less time.
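
For concreteness, the inner loops in that variant might look like
this (again just a sketch; the pipe pairs a1/a2 and b1/b2 are names I
made up, set up the same way as a and b in the sketch above):

/* Parent: two writes, then two reads, half as many iterations. */
for (int i = 0; i < ITERATIONS / 2; i++) {
    write(a1[1], &byte, 1);
    write(a2[1], &byte, 1);
    read(b1[0], &byte, 1);
    read(b2[0], &byte, 1);
}

/* Child: the mirror image, two reads and then two writes. */
for (int i = 0; i < ITERATIONS / 2; i++) {
    read(a1[0], &byte, 1);
    read(a2[0], &byte, 1);
    write(b1[1], &byte, 1);
    write(b2[1], &byte, 1);
}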

And in fact, the reduction in time tells you how much you save by
doing fewer context switches while everything else stays the same,
which is exactly the overhead of the context switches you eliminated.
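
To put rough numbers on it: with N round trips in the first run you
get about 2N context switches, and with N/2 iterations of the
doubled-pipe loop about N, while both runs make the same 4N system
calls. So if the two runs take T1 and T2 seconds, the per-switch cost
comes out to roughly (T1 - T2) / N.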

There's probably a good reason why this is imperfect, but it's the
first idea I could come up with. Also, of course, it assumes that
"context switch" means a switch between processes rather than
threads.

I hope there's not a way that's embarrassingly easy compared to this. :-)

- Logan