Date: Tue, 23 Jun 2015 17:45:50 +0800
From: Fam Zheng
Message-ID: <20150623094550.GA10697@ad.nay.redhat.com>
References: <1434646046-27150-1-git-send-email-pbonzini@redhat.com>
 <1434646046-27150-7-git-send-email-pbonzini@redhat.com>
 <20150623092620.GA5798@ad.nay.redhat.com>
 <55892707.4040103@redhat.com>
In-Reply-To: <55892707.4040103@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 6/9] kvm: First step to push iothread lock out of inner run loop
To: Paolo Bonzini
Cc: jan.kiszka@siemens.com, qemu-devel@nongnu.org

On Tue, 06/23 11:29, Paolo Bonzini wrote:
> 
> On 23/06/2015 11:26, Fam Zheng wrote:
> > On Thu, 06/18 18:47, Paolo Bonzini wrote:
> >> From: Jan Kiszka
> >>
> >> This opens the path to get rid of the iothread lock on vmexits in KVM
> >> mode. On x86, the in-kernel irqchip has to be used because we otherwise
> >> need to synchronize APIC and other per-cpu state accesses that could be
> >> changed concurrently.
> >>
> >> s390x and ARM should be fine without specific locking as their
> >> pre/post-run callbacks are empty. MIPS and POWER require locking for
> >> the pre-run callback.
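
(The target-mips and target-ppc hunks are not quoted in this excerpt. As a
rough, illustrative sketch only -- assuming QEMU's void kvm_arch_pre_run()
hook, not the actual patch hunks -- the shape is: once kvm_cpu_exec() no
longer holds the iothread lock across the run loop, a pre-run callback that
touches state shared with the main loop has to take the lock itself.)

    /* Illustrative sketch, not the actual target-mips/target-ppc code;
     * assumes QEMU's kvm_arch_pre_run() hook. */
    void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
    {
        qemu_mutex_lock_iothread();
        /* ... inject pending interrupts / update device state that is
         * shared with the main loop ... */
        qemu_mutex_unlock_iothread();
    }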
> >>
> >> Signed-off-by: Jan Kiszka
> >> Signed-off-by: Paolo Bonzini
> >> ---
> >>  kvm-all.c         | 14 ++++++++++++--
> >>  target-i386/kvm.c | 18 ++++++++++++++++++
> >>  target-mips/kvm.c |  4 ++++
> >>  target-ppc/kvm.c  |  4 ++++
> >>  4 files changed, 38 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/kvm-all.c b/kvm-all.c
> >> index b2b1bc3..2bd8e9b 100644
> >> --- a/kvm-all.c
> >> +++ b/kvm-all.c
> >> @@ -1795,6 +1795,8 @@ int kvm_cpu_exec(CPUState *cpu)
> >>          return EXCP_HLT;
> >>      }
> >>
> >> +    qemu_mutex_unlock_iothread();
> >> +
> >>      do {
> >>          MemTxAttrs attrs;
> >>
> >> @@ -1813,11 +1815,9 @@ int kvm_cpu_exec(CPUState *cpu)
> >>               */
> >>              qemu_cpu_kick_self();
> >>          }
> >> -        qemu_mutex_unlock_iothread();
> >>
> >>          run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);
> >>
> >> -        qemu_mutex_lock_iothread();
> >>          attrs = kvm_arch_post_run(cpu, run);
> >>
> >>          if (run_ret < 0) {
> >> @@ -1836,20 +1836,24 @@ int kvm_cpu_exec(CPUState *cpu)
> >>          switch (run->exit_reason) {
> >>          case KVM_EXIT_IO:
> >>              DPRINTF("handle_io\n");
> >> +            qemu_mutex_lock_iothread();
> >>              kvm_handle_io(run->io.port, attrs,
> >>                            (uint8_t *)run + run->io.data_offset,
> >>                            run->io.direction,
> >>                            run->io.size,
> >>                            run->io.count);
> >> +            qemu_mutex_unlock_iothread();
> >>              ret = 0;
> >>              break;
> >>          case KVM_EXIT_MMIO:
> >>              DPRINTF("handle_mmio\n");
> >> +            qemu_mutex_lock_iothread();
> >>              address_space_rw(&address_space_memory,
> >>                               run->mmio.phys_addr, attrs,
> >>                               run->mmio.data,
> >>                               run->mmio.len,
> >>                               run->mmio.is_write);
> >> +            qemu_mutex_unlock_iothread();
> >>              ret = 0;
> >>              break;
> >>          case KVM_EXIT_IRQ_WINDOW_OPEN:
> >> @@ -1858,7 +1862,9 @@ int kvm_cpu_exec(CPUState *cpu)
> >>              break;
> >>          case KVM_EXIT_SHUTDOWN:
> >>              DPRINTF("shutdown\n");
> >> +            qemu_mutex_lock_iothread();
> >>              qemu_system_reset_request();
> >> +            qemu_mutex_unlock_iothread();
> >>              ret = EXCP_INTERRUPT;
> >>              break;
> >>          case KVM_EXIT_UNKNOWN:
> > 
> > More context:
> > 
> >             fprintf(stderr, "KVM: unknown exit, hardware reason %" PRIx64 "\n",
> >                     (uint64_t)run->hw.hardware_exit_reason);
> >             ret = -1;
> >             break;
> >         case KVM_EXIT_INTERNAL_ERROR:
> > *           ret = kvm_handle_internal_error(cpu, run);
> 
> This one only accesses data internal to the VCPU thread.
> 
> >             break;
> >         case KVM_EXIT_SYSTEM_EVENT:
> >             switch (run->system_event.type) {
> >             case KVM_SYSTEM_EVENT_SHUTDOWN:
> > *               qemu_system_shutdown_request();
> >                 ret = EXCP_INTERRUPT;
> >                 break;
> >             case KVM_SYSTEM_EVENT_RESET:
> > *               qemu_system_reset_request();
> 
> These two are thread-safe.

But above you add lock/unlock around qemu_system_reset_request() under
KVM_EXIT_SHUTDOWN. What's different?

Fam

> 
> Paolo
> 
> >                 ret = EXCP_INTERRUPT;
> >>              break;
> >>          default:
> >>              DPRINTF("kvm_arch_handle_exit\n");
> >> +            qemu_mutex_lock_iothread();
> >>              ret = kvm_arch_handle_exit(cpu, run);
> >> +            qemu_mutex_unlock_iothread();
> >>              break;
> >>          }
> >>      } while (ret == 0);
> >>
> >> +    qemu_mutex_lock_iothread();
> >> +
> >>      if (ret < 0) {
> >>          cpu_dump_state(cpu, stderr, fprintf, CPU_DUMP_CODE);
> >>          vm_stop(RUN_STATE_INTERNAL_ERROR);
> > 
> > Could you explain why the three "*" calls above are safe?
> > 
> > Fam
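
P.S. For readers following the thread: qemu_system_shutdown_request() and
qemu_system_reset_request() can be called without the iothread lock because
they essentially just set a global flag and wake up the main loop, which
then performs the actual shutdown/reset. A self-contained sketch of that
flag-and-kick pattern (illustrative names, not QEMU's actual implementation):

    /* request.c -- minimal flag-and-kick request pattern in the spirit
     * of qemu_system_reset_request(); all names here are made up for
     * the example.  Build with: cc -pthread request.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int reset_requested;
    static int notify_fd[2];            /* [0] = read end, [1] = write end */

    /* Safe to call from any thread with no global lock: one atomic
     * store plus a write() to wake the main loop. */
    static void example_system_reset_request(void)
    {
        atomic_store(&reset_requested, 1);
        (void)write(notify_fd[1], "x", 1);
    }

    static void *vcpu_thread(void *arg)
    {
        (void)arg;
        example_system_reset_request(); /* e.g. on KVM_EXIT_SHUTDOWN */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        char c;

        if (pipe(notify_fd) != 0) {
            perror("pipe");
            return 1;
        }
        pthread_create(&t, NULL, vcpu_thread, NULL);

        /* "Main loop": block until kicked, then act on the flag. */
        (void)read(notify_fd[0], &c, 1);
        if (atomic_exchange(&reset_requested, 0)) {
            puts("main loop: performing system reset");
        }
        pthread_join(t, NULL);
        return 0;
    }

The races this leaves open (two threads requesting at once, a request
arriving while the main loop is mid-reset) are benign for a reset flag,
which is presumably why the request functions need no lock of their own.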