On 13/11/15 02:57, David Gibson wrote:
> On Thu, Nov 12, 2015 at 10:40:11AM +0100, Thomas Huth wrote:
>> On 12/11/15 09:09, Thomas Huth wrote:
>>> On 11/11/15 18:16, Aravinda Prasad wrote:
[...]
>>>> +        qemu_mutex_lock(&spapr->mc_in_progress);
>>>
>>> Using a mutex here is definitely wrong. The kvm_arch_handle_exit()
>>> code is run under the Big QEMU Lock™ (see qemu_mutex_lock_iothread()
>>> in kvm_cpu_exec()), [...]
>>
>> Ok, I now had a look into the LoPAPR spec, and if I've got that right,
>> you really have to serialize the NMIs in case they happen on multiple
>> CPUs at the same time. So I guess the best thing you can do here is
>> something like:
>>
>>     while (spapr->mc_in_progress) {
>>         /*
>>          * There is already another NMI in progress, thus we need
>>          * to yield here to wait until it has been finished
>>          */
>>         qemu_mutex_unlock_iothread();
>>         usleep(10);
>>         qemu_mutex_lock_iothread();
>>     }
>>     spapr->mc_in_progress = true;
[...]
> You should be able to avoid the nasty usleep by using a pthread
> condition variable. So here you'd have
>
>     while (spapr->mc_in_progress) {
>         pthread_cond_wait(&mc_delivery_cond, &qemu_iothread_lock);
>     }
>     spapr->mc_in_progress = true;
>
> Or.. there may be QEMU wrappers around the pthread functions you
> should be using. Once delivery of a single MC is complete, you'd use
> pthread_cond_signal() to wake up any additional ones.
>
> pthread_cond_wait() automatically drops the specified mutex internally,
> so access to mc_in_progress itself is still protected by the iothread
> mutex.

That's a nice one, I didn't know that function yet! And indeed, there is
already a QEMU wrapper function, qemu_cond_wait(), which should be used
instead, since threads on Windows work differently in QEMU as far as I
know.

 Thomas