This patch illustrates an alternative approach to waking and waiting on daemons, using semaphores instead of direct operations on wait queues. The idea of using semaphores to regulate the cycling of a daemon was suggested to me by Arjan Vos. The basic idea is simple: on each cycle a daemon down's a semaphore, and is reactivated when some other task up's the semaphore.

Sometimes an activating task wants to wait until the daemon completes whatever it's supposed to do - flushing memory in this case. I generalized the above idea by adding another semaphore for wakers to sleep on, and a count variable to let the daemon know how many sleepers it needs to activate. This patch updates bdflush and wakeup_bdflush to use that mechanism.

The implementation uses two semaphores and a counter:

	DECLARE_MUTEX_LOCKED(bdflush_request);
	DECLARE_MUTEX_LOCKED(bdflush_waiter);
	atomic_t bdflush_waiters /*= 0*/;

A task wanting to activate bdflush does:

	up(&bdflush_request);

A task wanting to activate bdflush and wait for it to finish does:

	atomic_inc(&bdflush_waiters);
	up(&bdflush_request);
	down(&bdflush_waiter);

When bdflush has finished its work it does:

	waiters = atomic_read(&bdflush_waiters);
	atomic_sub(waiters, &bdflush_waiters);
	while (waiters--)
		up(&bdflush_waiter);
	down(&bdflush_request);

Since I wasn't sure whether the side effect in the existing code of setting the current task RUNNING was really wanted, I wrote it in explicitly in the places where the side effect was noted, with the obligatory comment.

I've done some fairly heavy stress-testing of this new scheme (though not on non-x86 or SMP), and it does seem to work much the same as the existing one. I doubt there is a measurable difference in execution overhead, nor is there a difference in correctness as far as I can see. But for me at least, it's considerably easier to verify that the semaphore approach is correct.

OK, there it is. Is this better, worse, or lateral?

--
Daniel
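
For reference, here is a minimal sketch of how the snippets above fit together in one place. The wrapper names wakeup_bdflush_async() and wakeup_bdflush_sync(), the loop name bdflush_loop(), and the do_flush_work() call standing in for the real flushing are illustrative placeholders, not code taken from the actual patch:

	#include <asm/atomic.h>
	#include <asm/semaphore.h>

	/* Sketch only: the two semaphores and the waiter count, as above. */
	DECLARE_MUTEX_LOCKED(bdflush_request);
	DECLARE_MUTEX_LOCKED(bdflush_waiter);
	atomic_t bdflush_waiters = ATOMIC_INIT(0);

	/* Wake the daemon and return immediately. */
	static void wakeup_bdflush_async(void)
	{
		up(&bdflush_request);
	}

	/* Wake the daemon and sleep until it finishes a pass. */
	static void wakeup_bdflush_sync(void)
	{
		atomic_inc(&bdflush_waiters);
		up(&bdflush_request);
		down(&bdflush_waiter);
	}

	/* Daemon body: do the work, release any waiters that queued up,
	 * then sleep until somebody requests another pass. */
	static int bdflush_loop(void *unused)
	{
		for (;;) {
			int waiters;

			do_flush_work();	/* placeholder for the real flushing */

			waiters = atomic_read(&bdflush_waiters);
			atomic_sub(waiters, &bdflush_waiters);
			while (waiters--)
				up(&bdflush_waiter);

			down(&bdflush_request);
		}
	}

The daemon drains the waiter count before going back to sleep, so each up(&bdflush_waiter) matches exactly one task that incremented the count, and no wakeup is lost or duplicated.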