From: Lan Tianyu <tianyu.lan@intel.com>
To: gleb@kernel.org, pbonzini@redhat.com, guangrong.xiao@linux.intel.com
Cc: Lan Tianyu <tianyu.lan@intel.com>, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 5/7] KVM: Replace smp_mb() with smp_load_acquire() in kvm_flush_remote_tlbs()
Date: Sun, 13 Mar 2016 11:10:28 +0800
Message-Id: <1457838631-26775-6-git-send-email-tianyu.lan@intel.com>
In-Reply-To: <1457838631-26775-1-git-send-email-tianyu.lan@intel.com>
References: <1457838631-26775-1-git-send-email-tianyu.lan@intel.com>

smp_load_acquire() is sufficient here, and it is cheaper than smp_mb().
Also add a comment noting that we reuse the memory barrier of
kvm_make_all_cpus_request() to order modifications to the page tables
against the read of vcpu->mode.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 virt/kvm/kvm_main.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ec5aa8d..39ebee9a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -191,9 +191,23 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 #ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
-	long dirty_count = kvm->tlbs_dirty;
+	/*
+	 * Read tlbs_dirty before setting KVM_REQ_TLB_FLUSH in
+	 * kvm_make_all_cpus_request.
+	 */
+	long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);
 
-	smp_mb();
+	/*
+	 * We want to publish modifications to the page tables before reading
+	 * mode. Pairs with a memory barrier in arch-specific code.
+	 * - x86: smp_mb__after_srcu_read_unlock in vcpu_enter_guest
+	 *   and smp_mb in walk_shadow_page_lockless_begin/end.
+	 * - powerpc: smp_mb in kvmppc_prepare_to_enter.
+	 *
+	 * There is already an smp_mb__after_atomic() before
+	 * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that
+	 * barrier here.
+	 */
 	if (kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.remote_tlb_flush;
 	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
-- 
1.8.4.rc0.1.g8f6a3e5.dirty
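
A note on the primitive, for readers following along: smp_load_acquire()
only has to order the load of tlbs_dirty against the accesses that follow
it, while smp_mb() is a full two-way barrier, which is why the acquire
form is cheaper. Below is a minimal standalone C11 sketch of the
acquire/release message-passing pattern that a load-acquire provides.
This is illustrative userspace code, not KVM code: the variable names
(dirty, request) and the use of C11 atomics in place of the kernel's
smp_load_acquire()/smp_store_release() are assumptions made for the
example.

/*
 * Standalone C11 analogue of the acquire/release pairing the patch
 * relies on. Compile with: cc -std=c11 -pthread sketch.c
 * "dirty" and "request" are made-up names for this example.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long dirty;    /* plays the role of kvm->tlbs_dirty   */
static atomic_int  request;  /* plays the role of the flush request */

static void *producer(void *arg)
{
	(void)arg;
	/* Record work, then publish the request with release
	 * semantics so the store to "dirty" is visible first. */
	atomic_store_explicit(&dirty, 1, memory_order_relaxed);
	atomic_store_explicit(&request, 1, memory_order_release);
	return NULL;
}

static void *consumer(void *arg)
{
	(void)arg;
	/* Acquire load: once the request is observed, every store made
	 * before the paired release store is observed too, and no access
	 * after this load can be reordered before it. */
	while (!atomic_load_explicit(&request, memory_order_acquire))
		;
	printf("dirty = %ld (always 1 here)\n",
	       atomic_load_explicit(&dirty, memory_order_relaxed));
	return NULL;
}

int main(void)
{
	pthread_t p, c;
	pthread_create(&c, NULL, consumer, NULL);
	pthread_create(&p, NULL, producer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;
}

Note that the removed smp_mb() is not simply dropped: as the added
comment in the patch explains, the ordering it provided is already
supplied by the smp_mb__after_atomic() that runs before
kvm_make_all_cpus_request() reads vcpu->mode.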