From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S936242AbdCJJt4 (ORCPT );
        Fri, 10 Mar 2017 04:49:56 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:42828 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S965078AbdCJJf3 (ORCPT );
        Fri, 10 Mar 2017 04:35:29 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
        Christoffer Dall, Marc Zyngier
Subject: [PATCH 4.10 101/167] arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2
Date: Fri, 10 Mar 2017 10:09:04 +0100
Message-Id: <20170310084002.373310751@linuxfoundation.org>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20170310083956.767605269@linuxfoundation.org>
References: <20170310083956.767605269@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

4.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Marc Zyngier

commit 8f36ebaf21fdae99c091c67e8b6fab33969f2667 upstream.

When we fault in a page, we flush it to the PoC (Point of Coherency)
if the faulting vcpu has its own caches off, so that it can observe
the page we just brought in.

But if the vcpu has its caches on, we skip that step. Bad things
happen when *another* vcpu tries to access that page with its own
caches disabled. At that point, there is no guarantee that the data
has made it to the PoC, and we access stale data.

The obvious fix is to always flush to PoC when a page is faulted in,
no matter what the state of the vcpu is.

Fixes: 2d58b733c876 ("arm64: KVM: force cache clean on page fault when caches are off")
Reviewed-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Signed-off-by: Greg Kroah-Hartman

---
 arch/arm/include/asm/kvm_mmu.h   |    9 +--------
 arch/arm64/include/asm/kvm_mmu.h |    3 +--
 2 files changed, 2 insertions(+), 10 deletions(-)

--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -150,18 +150,12 @@ static inline void __coherent_cache_gues
         * and iterate over the range.
         */
 
-       bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached;
-
        VM_BUG_ON(size & ~PAGE_MASK);
 
-       if (!need_flush && !icache_is_pipt())
-               goto vipt_cache;
-
        while (size) {
                void *va = kmap_atomic_pfn(pfn);
 
-               if (need_flush)
-                       kvm_flush_dcache_to_poc(va, PAGE_SIZE);
+               kvm_flush_dcache_to_poc(va, PAGE_SIZE);
 
                if (icache_is_pipt())
                        __cpuc_coherent_user_range((unsigned long)va,
@@ -173,7 +167,6 @@ static inline void __coherent_cache_gues
                kunmap_atomic(va);
        }
 
-vipt_cache:
        if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
                /* any kind of VIPT cache */
                __flush_icache_all();
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -241,8 +241,7 @@ static inline void __coherent_cache_gues
 {
        void *va = page_address(pfn_to_page(pfn));
 
-       if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
-               kvm_flush_dcache_to_poc(va, size);
+       kvm_flush_dcache_to_poc(va, size);
 
        if (!icache_is_aliasing()) {    /* PIPT */
                flush_icache_range((unsigned long)va,
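
To illustrate the race the changelog describes, here is a minimal
userspace toy model. It is not kernel code, and every name in it
(toy_vcpu, stage2_map_page, poc[], cache[], host_fill_page, ...) is
invented for this sketch. The Point of Coherency and the data cache
are modelled as two byte arrays: a vcpu with caches enabled sees the
cache copy, while a vcpu with caches disabled bypasses the cache and
sees only the PoC copy, so data that stays in the cache is invisible
to the uncached reader until it is flushed.

/*
 * Toy userspace model of the coherency bug above -- NOT kernel code.
 * All names are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define TOY_PAGE_SIZE 16

static char poc[TOY_PAGE_SIZE];    /* memory at the Point of Coherency */
static char cache[TOY_PAGE_SIZE];  /* data cache contents */
static bool cache_dirty;

struct toy_vcpu {
        const char *name;
        bool cache_enabled;
};

/* Model of kvm_flush_dcache_to_poc(): push dirty cache data to PoC. */
static void flush_dcache_to_poc(void)
{
        if (cache_dirty) {
                memcpy(poc, cache, TOY_PAGE_SIZE);
                cache_dirty = false;
        }
}

/* Host populates the guest page; the data lands in the cache only. */
static void host_fill_page(const char *data)
{
        snprintf(cache, TOY_PAGE_SIZE, "%s", data);
        cache_dirty = true;
}

/* Stage-2 fault handler; 'buggy' selects the pre-fix behaviour. */
static void stage2_map_page(const struct toy_vcpu *faulting, bool buggy)
{
        if (buggy) {
                /* Old code: flush only if the faulting vcpu runs uncached. */
                if (!faulting->cache_enabled)
                        flush_dcache_to_poc();
        } else {
                /* Fixed code: flush unconditionally. */
                flush_dcache_to_poc();
        }
}

static void vcpu_read(const struct toy_vcpu *v)
{
        printf("%s (caches %s) sees: \"%s\"\n", v->name,
               v->cache_enabled ? "on" : "off",
               v->cache_enabled ? cache : poc);
}

int main(void)
{
        const struct toy_vcpu vcpu0 = { "vcpu0", true };  /* faults, caches on */
        const struct toy_vcpu vcpu1 = { "vcpu1", false }; /* reads, caches off */

        snprintf(poc, TOY_PAGE_SIZE, "stale data");

        host_fill_page("fresh data");
        stage2_map_page(&vcpu0, true);   /* buggy: no flush happens */
        vcpu_read(&vcpu1);               /* prints "stale data"     */

        host_fill_page("fresh data");
        stage2_map_page(&vcpu0, false);  /* fixed: always flushes   */
        vcpu_read(&vcpu1);               /* prints "fresh data"     */
        return 0;
}

With the pre-fix conditional flush, vcpu1 reads "stale data" because
vcpu0 faulted the page in with its caches on and the flush was skipped;
with the unconditional flush this patch applies, vcpu1 reads "fresh
data".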