From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932706AbcFIWLL (ORCPT );
	Thu, 9 Jun 2016 18:11:11 -0400
Received: from youngberry.canonical.com ([91.189.89.112]:50731 "EHLO
	youngberry.canonical.com" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S932343AbcFIVRu (ORCPT );
	Thu, 9 Jun 2016 17:17:50 -0400
From: Kamal Mostafa 
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org,
	kernel-team@lists.ubuntu.com
Cc: Marc Zyngier , Christoffer Dall , Kamal Mostafa 
Subject: [PATCH 4.2.y-ckt 037/206] arm/arm64: KVM: Enforce Break-Before-Make on Stage-2 page tables
Date: Thu, 9 Jun 2016 14:14:06 -0700
Message-Id: <1465507015-23052-38-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1465507015-23052-1-git-send-email-kamal@canonical.com>
References: <1465507015-23052-1-git-send-email-kamal@canonical.com>
X-Extended-Stable: 4.2
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

4.2.8-ckt12 -stable review patch. If anyone has any objections, please let me know.

---8<------------------------------------------------------------

From: Marc Zyngier 

commit d4b9e0790aa764c0b01e18d4e8d33e93ba36d51f upstream.

The ARM architecture mandates that when changing a page table entry
from a valid entry to another valid entry, an invalid entry is first
written, TLB invalidated, and only then the new entry being written.

The current code doesn't respect this, directly writing the new
entry and only then invalidating TLBs. Let's fix it up.

Reported-by: Christoffer Dall 
Signed-off-by: Marc Zyngier 
Signed-off-by: Christoffer Dall 
Signed-off-by: Kamal Mostafa 
---
 arch/arm/kvm/mmu.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 61d96a6..12d727fa 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -886,11 +886,14 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
 
 	old_pmd = *pmd;
-	kvm_set_pmd(pmd, *new_pmd);
-	if (pmd_present(old_pmd))
+	if (pmd_present(old_pmd)) {
+		pmd_clear(pmd);
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pmd));
+	}
+
+	kvm_set_pmd(pmd, *new_pmd);
 
 	return 0;
 }
@@ -939,12 +942,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 
 	/* Create 2nd stage page table mapping - Level 3 */
 	old_pte = *pte;
-	kvm_set_pte(pte, *new_pte);
-	if (pte_present(old_pte))
+	if (pte_present(old_pte)) {
+		kvm_set_pte(pte, __pte(0));
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pte));
+	}
 
+	kvm_set_pte(pte, *new_pte);
 	return 0;
 }
 
-- 
2.7.4
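
For readers unfamiliar with Break-Before-Make, the ordering the patch enforces can
be sketched in isolation as follows. This is only an illustrative sketch, not code
from the patch or from the kernel: the type and helpers (stage2_entry_t,
entry_is_valid(), write_entry(), invalidate_tlb_for()) are hypothetical stand-ins
for pte_t/pmd_t, kvm_set_pte()/kvm_set_pmd() and kvm_tlb_flush_vmid_ipa().

/*
 * Illustrative sketch of the Break-Before-Make ordering enforced by the
 * patch above. The type and helpers here are hypothetical stand-ins, not
 * kernel APIs.
 */
typedef unsigned long stage2_entry_t;

#define ENTRY_VALID	0x1UL			/* hypothetical "valid" bit */

static int entry_is_valid(stage2_entry_t e)
{
	return e & ENTRY_VALID;
}

static void write_entry(stage2_entry_t *p, stage2_entry_t v)
{
	*p = v;					/* stands in for kvm_set_pte()/kvm_set_pmd() */
}

static void invalidate_tlb_for(unsigned long ipa)
{
	(void)ipa;				/* stands in for kvm_tlb_flush_vmid_ipa() */
}

static void bbm_update(stage2_entry_t *entry, stage2_entry_t new_val,
		       unsigned long ipa)
{
	if (entry_is_valid(*entry)) {
		write_entry(entry, 0);		/* 1. break: write an invalid entry */
		invalidate_tlb_for(ipa);	/* 2. invalidate the stale TLB translation */
	}
	write_entry(entry, new_val);		/* 3. make: install the new valid entry */
}

The key point is that step 3 only happens once the old translation can no longer be
hit in the TLB, which is exactly what the reordering in stage2_set_pmd_huge() and
stage2_set_pte() above achieves.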