Just a note from a lurker--
I'm maintaining a home-grown OS, and we also have a "condition variable"
construct. The implementation is along the lines of:
wait(cond):
    e = new event
    lock(cond->mutex)
    cond->waiters = [cond->waiters, e]
    unlock(cond->mutex)
    wait(e)
signal(cond):
    lock(cond->mutex)
    e = pop(cond->waiters)    # yields nothing if there are no waiters
    unlock(cond->mutex)
    if e:
        signal(e)
broadcast(cond):
    lock(cond->mutex)
    waiters = cond->waiters
    cond->waiters = []
    unlock(cond->mutex)
    for e in waiters:
        signal(e)
(Sorry if I'm using too many Perl-isms in the Python-pseudocode...)
The above is (as far as I know) free from any races, deadlocks, etc.
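For concreteness, here is a minimal runnable Python sketch of the same event-list construct, using threading.Event for the per-waiter event and a Lock for cond->mutex. The class and attribute names are my own, not from the original design:

```python
import threading
from collections import deque

class EventListCond:
    """Condition-variable-like construct: one private event per waiter."""

    def __init__(self):
        self._mutex = threading.Lock()   # cond->mutex
        self._waiters = deque()          # cond->waiters

    def wait(self):
        e = threading.Event()            # e = new event
        with self._mutex:
            self._waiters.append(e)
        e.wait()                         # block until signalled

    def signal(self):
        # Wake at most one waiter; a no-op if nobody is waiting.
        with self._mutex:
            e = self._waiters.popleft() if self._waiters else None
        if e is not None:
            e.set()

    def broadcast(self):
        # Atomically take the whole waiter list, then wake everyone.
        with self._mutex:
            waiters, self._waiters = self._waiters, deque()
        for e in waiters:
            e.set()
```

Because each waiter owns its event, a signal delivered between the unlock and the e.wait() is not lost: the event stays set and the subsequent wait returns immediately.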
-Alon Ziv
-----Original Message-----
From: mark benedetto king [mailto:mbk@lowlatency.com]
Sent: Wednesday, September 10, 2003 8:35 PM
To: Philip Martin
Cc: dev@subversion.tigris.org
Subject: Re: [OT] Win32 condition variable implementation
On Wed, Sep 10, 2003 at 05:18:53PM +0100, Philip Martin wrote:
>
> So a thread would do acquire(mutex) then wait(cond,mutex) and the
> wait() would immediately release(mutex)? The mutex doesn't seem to be
> protecting anything, it's a bit strange.
Usage of a condition variable is generally along the lines of:
enqueue(queue, item):
    lock(queue->mutex)
    append(queue->list, item)
    signal(queue->cond)
    unlock(queue->mutex)
dequeue(queue):
    lock(queue->mutex)
    while queue->list->size == 0:
        wait(queue->cond, queue->mutex)
    item = remove_head(queue->list)
    unlock(queue->mutex)
    return item
The idea is that wait() gives up the mutex so that other threads can
act, but once wait() returns, this thread holds the mutex.
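The same producer/consumer pattern is directly expressible with Python's standard threading.Condition, which bundles the mutex and the condition together (the BlockingQueue name here is mine):

```python
import threading
from collections import deque

class BlockingQueue:
    """FIFO queue whose dequeue() blocks until an item is available."""

    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()  # owns queue->mutex internally

    def enqueue(self, item):
        with self._cond:                    # lock(queue->mutex)
            self._items.append(item)
            self._cond.notify()             # signal(queue->cond)
        # the mutex is released when the with-block exits

    def dequeue(self):
        with self._cond:                    # lock(queue->mutex)
            while not self._items:          # re-check after every wakeup
                self._cond.wait()           # releases the mutex while blocked
            return self._items.popleft()
```

Note the while loop around wait(): the predicate must be re-tested on wakeup, since another consumer may have emptied the queue first (or the wakeup may be spurious).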
> Another concern, your algorithm has the broadcast thread blocking for
> an ack from each waiting thread. I suppose that may be acceptable
> POSIX behaviour, but it's less than ideal. Your first algorithm had a
> non-blocking broadcast, although it had other problems.
This is a valid concern.  I could remove the spin-lock from the first
approach by using a second event, "cond->stable", that is initially
set, reset just before cond->event is set, and set again just after
cond->event is reset.  I'm not sure that would be better, though, just
different:
The first implementation had a bit of a "thundering herd" problem: if
N threads were waiting, even a single signal caused all N to wake, and
N-1 to go back to sleep.  This implementation has the opposite effect;
a single signal wakes only one waiter, but a broadcast could
conceivably cause quite a bit of context switching.
--ben
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
---------------------------------------------------------------------
Received on Thu Sep 11 08:31:56 2003