On Wed, Feb 03, 2016 at 10:52:38AM +0100, Paolo Bonzini wrote:
> 
> 
> On 03/02/2016 10:34, Stefan Hajnoczi wrote:
> >>>>>>>>> + g_usleep(500000);
> >>>>> Sleep?
> >>> 
> >>> What about it? :)
> > Sleep in a loop is inefficient but at least correct.
> > 
> > A sleep outside a loop is a race condition.  If the machine is
> > heavily loaded, whatever you are waiting for may not happen in 500
> > ms and the test case is unreliable.
> > 
> > What is the purpose of this sleep?
> 
> In the test the desired ordering of events is
> 
>     main thread                          additional thread
>     ---------------------------------------------------------------------
>     lock start_lock
>     start thread
>                                          lock start_lock (waits)
>     acquire context
>     unlock start_lock
>                                          lock start_lock (returns)
>                                          unlock start_lock
>     aio_poll
>       poll()                  <<<<
>                                          event_notifier_set  <<<<
> 
> Comparing the test to QEMU, the roles of the threads are reversed.  The
> test's "main thread" is QEMU's dataplane thread, and the test's
> additional thread is QEMU's main thread.
> 
> The sleep "ensures" that poll() blocks before event_notifier_set.  The
> sleep represents how QEMU's main thread acquires the context rarely,
> and not right after starting the dataplane thread.
> 
> Because this is a file descriptor source, there is really no difference
> in the code's behavior whether aio_poll starts before or after the
> event_notifier_set.  The test passes even if you remove the sleep.
> 
> Do you have any suggestion?  Just add a comment?

How about removing the sleep and adding a comment explaining that the
event_notifier_set() could happen either before or during poll()?

Stefan
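
For what it's worth, below is a minimal standalone sketch of the point
about file descriptor sources.  It is not QEMU code and not the actual
tests/test-aio.c test: it only mimics the pattern with a plain eventfd,
a pthread mutex and poll(), it leaves out the AioContext acquire/release
step, and every name in it is made up for illustration.  Because the
eventfd stays readable once written (level-triggered, like an
EventNotifier source), the poller wakes up whether the write happens
before or during poll(), which is the reason the sleep can be dropped.

    /*
     * Standalone illustration (not QEMU code): a level-triggered fd
     * source wakes the poller regardless of ordering.
     * Build with: gcc -pthread demo.c -o demo
     */
    #include <pthread.h>
    #include <poll.h>
    #include <sys/eventfd.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <assert.h>

    static int efd;
    static pthread_mutex_t start_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Plays the role of the test's "additional thread". */
    static void *notifier_thread(void *opaque)
    {
        uint64_t one = 1;
        ssize_t ret;

        (void)opaque;

        /* Wait until the poller releases start_lock. */
        pthread_mutex_lock(&start_lock);
        pthread_mutex_unlock(&start_lock);

        /* No sleep here: this may run before or during poll(). */
        ret = write(efd, &one, sizeof(one));   /* like event_notifier_set() */
        assert(ret == sizeof(one));
        return NULL;
    }

    int main(void)
    {
        pthread_t thread;
        struct pollfd pfd;
        int n;

        efd = eventfd(0, 0);
        assert(efd >= 0);

        pthread_mutex_lock(&start_lock);
        pthread_create(&thread, NULL, notifier_thread, NULL);
        pthread_mutex_unlock(&start_lock);

        /* Plays the role of aio_poll(): block until the eventfd is
         * readable.  The fd stays readable once written, so the
         * relative ordering of write() and poll() does not matter. */
        pfd.fd = efd;
        pfd.events = POLLIN;
        n = poll(&pfd, 1, -1);
        assert(n == 1 && (pfd.revents & POLLIN));

        pthread_join(thread, NULL);
        printf("woken up regardless of ordering\n");
        return 0;
    }

Run it repeatedly (or under load) and it completes either way; the same
reasoning applies to the EventNotifier source in the test, which is what
the suggested comment would document.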