From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mario Smarduch <m.smarduch@samsung.com>
Subject: [PATCH v7 2/4] arm: dirty page logging initial mem region write protect (w/no huge PUD support)
Date: Tue, 03 Jun 2014 16:19:25 -0700
Message-ID: <1401837567-5527-3-git-send-email-m.smarduch@samsung.com>
References: <1401837567-5527-1-git-send-email-m.smarduch@samsung.com>
Mime-Version: 1.0
Content-Type: text/plain
Cc: steve.capper@arm.com, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, gavin.guo@canonical.com,
	peter.maydell@linaro.org, jays.lee@samsung.com,
	sungjinn.chung@samsung.com, Mario Smarduch <m.smarduch@samsung.com>
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com
Return-path: 
Received: from mailout4.w2.samsung.com ([211.189.100.14]:9904 "EHLO
	usmailout4.samsung.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S933972AbaFCXTd (ORCPT );
	Tue, 3 Jun 2014 19:19:33 -0400
Received: from uscpsbgex3.samsung.com (u124.gpu85.samsung.co.kr
	[203.254.195.124]) by usmailout4.samsung.com (Oracle Communications
	Messaging Server 7u4-24.01(7.0.4.24.0) 64bit (built Nov 17 2011)) with
	ESMTP id <0N6M00AQX8SKYKB0@usmailout4.samsung.com> for
	kvm@vger.kernel.org; Tue, 03 Jun 2014 19:19:33 -0400 (EDT)
In-reply-to: <1401837567-5527-1-git-send-email-m.smarduch@samsung.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

This patch adds memslot support for initial write protection and splitting
of huge pages. The patch series assumes that huge PUDs will not be used to
map VM memory. This patch depends on the unmap_range() patch; that patch
needs to be applied first.
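For context, the short sketch below shows how userspace is expected to drive
this: the memslot is re-registered with KVM_MEM_LOG_DIRTY_PAGES (which is
what triggers the initial write protection added here), and the accumulated
log is later fetched with KVM_GET_DIRTY_LOG. This is only a hedged
illustration, not part of the patch; vm_fd, slot, guest_phys, mem_size and
host_va are assumed to come from earlier VM setup, and error handling is
omitted.

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Minimal sketch: enable dirty logging on one memslot and fetch the log. */
static void enable_and_read_dirty_log(int vm_fd, __u32 slot, __u64 guest_phys,
                                      __u64 mem_size, void *host_va)
{
        struct kvm_userspace_memory_region region = {
                .slot            = slot,
                .flags           = KVM_MEM_LOG_DIRTY_PAGES, /* request logging */
                .guest_phys_addr = guest_phys,
                .memory_size     = mem_size,
                .userspace_addr  = (__u64)(unsigned long)host_va,
        };
        /* One bit per 4K page, rounded up to whole bytes. */
        size_t bitmap_bytes = ((mem_size / 4096) + 7) / 8;
        void *bitmap = calloc(1, bitmap_bytes);
        struct kvm_dirty_log log = {
                .slot         = slot,
                .dirty_bitmap = bitmap,
        };

        /* Re-register the slot with logging enabled; KVM write protects it. */
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

        /*
         * Later, e.g. during migration rounds: retrieve the pages dirtied
         * since the previous call.
         */
        ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

        free(bitmap);
}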
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
 arch/arm/include/asm/kvm_host.h       |    2 +
 arch/arm/include/asm/kvm_mmu.h        |   20 ++++++
 arch/arm/include/asm/pgtable-3level.h |    1 +
 arch/arm/kvm/arm.c                    |    6 ++
 arch/arm/kvm/mmu.c                    |  114 +++++++++++++++++++++++++++++++++
 5 files changed, 143 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 193ceaf..59565f5 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -231,4 +231,6 @@ int kvm_perf_teardown(void);
 u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
 int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 
+void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 5cc0b0f..08ab5e8 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -114,6 +114,26 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
 	pmd_val(*pmd) |= L_PMD_S2_RDWR;
 }
 
+static inline void kvm_set_s2pte_readonly(pte_t *pte)
+{
+	pte_val(*pte) = (pte_val(*pte) & ~L_PTE_S2_RDWR) | L_PTE_S2_RDONLY;
+}
+
+static inline bool kvm_s2pte_readonly(pte_t *pte)
+{
+	return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
+}
+
+static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
+{
+	pmd_val(*pmd) = (pmd_val(*pmd) & ~L_PMD_S2_RDWR) | L_PMD_S2_RDONLY;
+}
+
+static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
+{
+	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
+}
+
 /* Open coded p*d_addr_end that can deal with 64bit addresses */
 #define kvm_pgd_addr_end(addr, end)					\
 ({	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;		\
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 85c60ad..d8bb40b 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -129,6 +129,7 @@
 #define L_PTE_S2_RDONLY		(_AT(pteval_t, 1) << 6)   /* HAP[1]   */
 #define L_PTE_S2_RDWR		(_AT(pteval_t, 3) << 6)   /* HAP[2:1] */
 
+#define L_PMD_S2_RDONLY		(_AT(pmdval_t, 1) << 6)   /* HAP[1]   */
 #define L_PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 
 /*
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 3c82b37..dfd63ac 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -242,6 +242,12 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
+	/*
+	 * At this point the memslot has been committed and there is an
+	 * allocated dirty_bitmap[], so marking of dirty pages works from now on.
+	 */
+	if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		kvm_mmu_wp_memory_region(kvm, mem->slot);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index ef29540..e5dff85 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -760,6 +760,120 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, phys_addr_t *ipap)
 	return false;
 }
 
+
+/**
+ * stage2_wp_pte_range - write protect PTE range
+ * @pmd:	pointer to pmd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_wp_pte_range(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte)) {
+			if (!kvm_s2pte_readonly(pte))
+				kvm_set_s2pte_readonly(pte);
+		}
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+/**
+ * stage2_wp_pmd_range - write protect PMD range
+ * @pud:	pointer to pud entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_wp_pmd_range(pud_t *pud, phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = pmd_offset(pud, addr);
+
+	do {
+		next = kvm_pmd_addr_end(addr, end);
+		if (!pmd_none(*pmd)) {
+			if (kvm_pmd_huge(*pmd)) {
+				/*
+				 * Write protect the PMD; give user_mem_abort()
+				 * a choice to clear and fault on demand or
+				 * break up the huge page.
+				 */
+				if (!kvm_s2pmd_readonly(pmd))
+					kvm_set_s2pmd_readonly(pmd);
+			} else
+				stage2_wp_pte_range(pmd, addr, next);
+
+		}
+	} while (pmd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_wp_pud_range - write protect PUD range
+ * @kvm:	pointer to kvm structure
+ * @pgd:	pointer to pgd entry
+ * @addr:	range start address
+ * @end:	range end address
+ *
+ * While walking the PUD range, huge PUD pages are ignored; how to handle
+ * huge PUDs when logging of dirty pages is enabled may need to be
+ * revisited in the future.
+ */
+static void stage2_wp_pud_range(struct kvm *kvm, pgd_t *pgd,
+				phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		/* Check for contention every PUD range and release the CPU */
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
+			cond_resched_lock(&kvm->mmu_lock);
+
+		next = kvm_pud_addr_end(addr, end);
+		/* TODO: huge PUDs not supported, revisit later */
+		if (!pud_none(*pud))
+			stage2_wp_pmd_range(pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * kvm_mmu_wp_memory_region() - initial write protection of a memory region slot
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot to write protect
+ *
+ * Called to start logging dirty pages after the memory region
+ * KVM_MEM_LOG_DIRTY_PAGES operation is invoked. After this function returns
+ * all present PMDs and PTEs in the memory region are write protected and
+ * the dirty page log can be read. Pages that are not present will be
+ * write protected on future access in user_mem_abort().
+ *
+ * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
+ * serializing operations for VM memory regions.
+ */
+void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
+{
+	pgd_t *pgd;
+	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+	phys_addr_t next;
+
+	spin_lock(&kvm->mmu_lock);
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	do {
+		next = kvm_pgd_addr_end(addr, end);
+		if (pgd_present(*pgd))
+			stage2_wp_pud_range(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+	kvm_flush_remote_tlbs(kvm);
+	spin_unlock(&kvm->mmu_lock);
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot,
 			  unsigned long fault_status)
-- 
1.7.9.5
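The huge page split and the actual dirty marking are deliberately left to
user_mem_abort(), which later patches in this series extend. A hedged sketch
of that fault-time side follows; the helper name handle_write_fault_sketch is
illustrative only, and a real handler must also take the MMU lock, perform
TLB maintenance and break up write-protected huge PMDs.

#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>

/*
 * Illustrative only, not part of this patch. Once a memslot is write
 * protected, a guest write takes a stage-2 permission fault; the fault
 * handler can then restore write access and record the page as dirty.
 */
static void handle_write_fault_sketch(struct kvm *kvm, gfn_t gfn, pte_t *pte)
{
        /* Let the guest retry the write. */
        kvm_set_s2pte_writable(pte);

        /* Set the corresponding bit in the memslot's dirty_bitmap[]. */
        mark_page_dirty(kvm, gfn);
}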