From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mario Smarduch
Subject: Re: [PATCH v9 2/4] arm: ARMv7 dirty page logging inital mem region
 write protect (w/no huge PUD support)
Date: Fri, 25 Jul 2014 10:45:17 -0700
Message-ID: <53D297AD.8020103@samsung.com>
References: <1406249768-25315-1-git-send-email-m.smarduch@samsung.com>
 <1406249768-25315-3-git-send-email-m.smarduch@samsung.com>
 <53D1F642.6010007@suse.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Cc: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
 christoffer.dall@linaro.org, pbonzini@redhat.com, gleb@kernel.org,
 xiantao.zhang@intel.com, borntraeger@de.ibm.com, cornelia.huck@de.ibm.com,
 xiaoguangrong@linux.vnet.ibm.com, steve.capper@arm.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, jays.lee@samsung.com,
 sungjinn.chung@samsung.com
To: Alexander Graf
Return-path:
Received: from mailout4.w2.samsung.com ([211.189.100.14]:37833 "EHLO
 usmailout4.samsung.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S933322AbaGYRpV (ORCPT ); Fri, 25 Jul 2014 13:45:21 -0400
Received: from uscpsbgex2.samsung.com (u123.gpu85.samsung.co.kr
 [203.254.195.123]) by usmailout4.samsung.com (Oracle Communications
 Messaging Server 7u4-24.01(7.0.4.24.0) 64bit (built Nov 17 2011)) with
 ESMTP id <0N9A00DLP3ZKZQ60@usmailout4.samsung.com> for kvm@vger.kernel.org;
 Fri, 25 Jul 2014 13:45:20 -0400 (EDT)
In-reply-to: <53D1F642.6010007@suse.de>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 07/24/2014 11:16 PM, Alexander Graf wrote:
>
> On 25.07.14 02:56, Mario Smarduch wrote:
>> Patch adds support for initial write protection VM memlsot. This
>> patch series
>> assumes that huge PUDs will not be used in 2nd stage tables.
>
> Is this a valid assumption?

Right now it's unclear whether PUDs will be used to back guest memory;
supporting them would require quite a bit of additional code. After
discussion on the mailing list it was recommended to treat this as a
BUG_ON case for now.
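To make "BUG_ON case" concrete, the PUD walk in the write-protect path ends
up looking roughly like this. This is only a sketch, not the exact
arch/arm/kvm/mmu.c hunk (which isn't quoted here); stage2_wp_pmds() and
kvm_pud_addr_end() are placeholder helpers along the lines of the
kvm_pgd_addr_end() macro shown in the kvm_mmu.h hunk below:

	/* Walk stage-2 PUDs; huge PUDs are treated as a hard error for now. */
	static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
	{
		pud_t *pud = pud_offset(pgd, addr);
		phys_addr_t next;

		do {
			next = kvm_pud_addr_end(addr, end);
			if (!pud_none(*pud)) {
				/* Huge PUDs in stage-2 are unsupported; fail loudly. */
				BUG_ON(pud_huge(*pud));
				stage2_wp_pmds(pud, addr, next);
			}
		} while (pud++, addr = next, addr != end);
	}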
>
>>
>> Signed-off-by: Mario Smarduch
>> ---
>>   arch/arm/include/asm/kvm_host.h       |    1 +
>>   arch/arm/include/asm/kvm_mmu.h        |   20 ++++++
>>   arch/arm/include/asm/pgtable-3level.h |    1 +
>>   arch/arm/kvm/arm.c                    |    9 +++
>>   arch/arm/kvm/mmu.c                    |  128 +++++++++++++++++++++++++++++++++
>>   5 files changed, 159 insertions(+)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index 042206f..6521a2d 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -231,5 +231,6 @@ int kvm_perf_teardown(void);
>>   u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
>>   int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
>>   void kvm_arch_flush_remote_tlbs(struct kvm *);
>> +void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
>>   #endif /* __ARM_KVM_HOST_H__ */
>> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
>> index 5cc0b0f..08ab5e8 100644
>> --- a/arch/arm/include/asm/kvm_mmu.h
>> +++ b/arch/arm/include/asm/kvm_mmu.h
>> @@ -114,6 +114,26 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
>>       pmd_val(*pmd) |= L_PMD_S2_RDWR;
>>   }
>> +static inline void kvm_set_s2pte_readonly(pte_t *pte)
>> +{
>> +    pte_val(*pte) = (pte_val(*pte) & ~L_PTE_S2_RDWR) | L_PTE_S2_RDONLY;
>> +}
>> +
>> +static inline bool kvm_s2pte_readonly(pte_t *pte)
>> +{
>> +    return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
>> +}
>> +
>> +static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
>> +{
>> +    pmd_val(*pmd) = (pmd_val(*pmd) & ~L_PMD_S2_RDWR) | L_PMD_S2_RDONLY;
>> +}
>> +
>> +static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>> +{
>> +    return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
>> +}
>> +
>>   /* Open coded p*d_addr_end that can deal with 64bit addresses */
>>   #define kvm_pgd_addr_end(addr, end)                                    \
>>   ({    u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;             \
>> diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
>> index 85c60ad..d8bb40b 100644
>> --- a/arch/arm/include/asm/pgtable-3level.h
>> +++ b/arch/arm/include/asm/pgtable-3level.h
>> @@ -129,6 +129,7 @@
>>   #define L_PTE_S2_RDONLY        (_AT(pteval_t, 1) << 6)   /* HAP[1]   */
>>   #define L_PTE_S2_RDWR          (_AT(pteval_t, 3) << 6)   /* HAP[2:1] */
>> +#define L_PMD_S2_RDONLY        (_AT(pteval_t, 1) << 6)   /* HAP[1]   */
>>   #define L_PMD_S2_RDWR          (_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
>>   /*
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index 3c82b37..e11c2dd 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -242,6 +242,15 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>>                      const struct kvm_memory_slot *old,
>>                      enum kvm_mr_change change)
>>   {
>> +#ifdef CONFIG_ARM
>
> Same question on CONFIG_ARM here. Is this the define used to distinguish
> between 32bit and 64bit?

Yes, it's there to let ARM64 compile. We'll come back to ARM64 soon, and
these #ifdefs will go away.
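For context, the guarded part of that hunk boils down to something like the
sketch below. Only the lines quoted above are from the actual diff; the
dirty-log condition inside the guard is an assumption about the rest of the
hunk:

	void kvm_arch_commit_memory_region(struct kvm *kvm,
					   struct kvm_userspace_memory_region *mem,
					   const struct kvm_memory_slot *old,
					   enum kvm_mr_change change)
	{
	#ifdef CONFIG_ARM	/* 32-bit only for now; keeps ARM64 building */
		/*
		 * When dirty page logging is turned on for the slot, write
		 * protect the whole memslot up front so the first write to
		 * each page faults and can be tracked.
		 */
		if ((change != KVM_MR_DELETE) &&
		    (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
			kvm_mmu_wp_memory_region(kvm, mem->slot);
	#endif
	}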
>
>
> Alex
>