From: Marc Zyngier <marc.zyngier@arm.com>
To: Christoffer Dall <christoffer.dall@linaro.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu
Subject: [PATCH v3 8/9] KVM: arm/arm64: Preserve Exec permission across R/W permission faults
Date: Mon, 23 Oct 2017 17:11:21 +0100
Message-ID: <20171023161122.15291-9-marc.zyngier@arm.com>
In-Reply-To: <20171023161122.15291-1-marc.zyngier@arm.com>

So far, we lose the Exec property whenever we take permission faults,
as we always reconstruct the PTE/PMD from scratch. This can be
counterproductive, as we can end up with the following fault sequence:

	X -> RO -> ROX -> RW -> RWX

Instead, we can look up the existing PTE/PMD and clear the XN bit in
the new entry if it was already cleared in the old one, leading to a
much nicer fault sequence:

	X -> ROX -> RWX

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 10 ++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 27 +++++++++++++++++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 4d7a54cbb3ab..aab64fe52146 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -107,6 +107,11 @@ static inline bool kvm_s2pte_readonly(pte_t *pte)
 	return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
 }
 
+static inline bool kvm_s2pte_exec(pte_t *pte)
+{
+	return !(pte_val(*pte) & L_PTE_XN);
+}
+
 static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
 {
 	pmd_val(*pmd) = (pmd_val(*pmd) & ~L_PMD_S2_RDWR) | L_PMD_S2_RDONLY;
@@ -117,6 +122,11 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
 }
 
+static inline bool kvm_s2pmd_exec(pmd_t *pmd)
+{
+	return !(pmd_val(*pmd) & PMD_SECT_XN);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 1e1b20cb348f..126abefffe7f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -203,6 +203,11 @@ static inline bool kvm_s2pte_readonly(pte_t *pte)
 	return (pte_val(*pte) & PTE_S2_RDWR) == PTE_S2_RDONLY;
 }
 
+static inline bool kvm_s2pte_exec(pte_t *pte)
+{
+	return !(pte_val(*pte) & PTE_S2_XN);
+}
+
 static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
 {
 	kvm_set_s2pte_readonly((pte_t *)pmd);
@@ -213,6 +218,11 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return kvm_s2pte_readonly((pte_t *)pmd);
 }
 
+static inline bool kvm_s2pmd_exec(pmd_t *pmd)
+{
+	return !(pmd_val(*pmd) & PMD_S2_XN);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index f956efbd933d..b83b5a8442bb 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -926,6 +926,25 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	return 0;
 }
 
+static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
+{
+	pmd_t *pmdp;
+	pte_t *ptep;
+
+	pmdp = stage2_get_pmd(kvm, NULL, addr);
+	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
+		return false;
+
+	if (pmd_thp_or_huge(*pmdp))
+		return kvm_s2pmd_exec(pmdp);
+
+	ptep = pte_offset_kernel(pmdp, addr);
+	if (!ptep || pte_none(*ptep) || !pte_present(*ptep))
+		return false;
+
+	return kvm_s2pte_exec(ptep);
+}
+
 static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 			  phys_addr_t addr, const pte_t *new_pte,
 			  unsigned long flags)
@@ -1407,6 +1426,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		if (exec_fault) {
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
 			invalidate_icache_guest_page(vcpu, pfn, PMD_SIZE);
+		} else if (fault_status == FSC_PERM) {
+			/* Preserve execute if XN was already cleared */
+			if (stage2_is_exec(kvm, fault_ipa))
+				new_pmd = kvm_s2pmd_mkexec(new_pmd);
 		}
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
@@ -1425,6 +1448,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		if (exec_fault) {
 			new_pte = kvm_s2pte_mkexec(new_pte);
 			invalidate_icache_guest_page(vcpu, pfn, PAGE_SIZE);
+		} else if (fault_status == FSC_PERM) {
+			/* Preserve execute if XN was already cleared */
+			if (stage2_is_exec(kvm, fault_ipa))
+				new_pte = kvm_s2pte_mkexec(new_pte);
 		}
 
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
-- 
2.11.0
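
[Editorial note: the following is a minimal, self-contained C sketch of
the XN-preservation logic the user_mem_abort() hunks above add, for
illustration only. The s2_entry type, the helper names, and the S2_RDWR
bit are simplified stand-ins rather than kernel API; only the S2_XN bit
position mirrors the arm64 stage-2 PTE encoding (bit 54).]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in stage-2 permission bits for a flat 64-bit entry. */
#define S2_XN		(UINT64_C(1) << 54)	/* execute-never */
#define S2_RDWR		(UINT64_C(1) << 7)	/* stand-in write permission */

typedef uint64_t s2_entry;

static bool entry_is_exec(s2_entry e)
{
	return !(e & S2_XN);	/* executable iff XN is clear */
}

/*
 * Rebuild an entry on a R/W permission fault. A freshly constructed
 * entry starts out with XN set; without consulting the old entry, the
 * Exec property the guest already faulted in would be lost, forcing
 * the X -> RO -> ROX -> RW -> RWX sequence from the commit message.
 */
static s2_entry handle_perm_fault(s2_entry old, s2_entry rebuilt)
{
	/* Preserve execute if XN was already cleared in the old entry */
	if (entry_is_exec(old))
		rebuilt &= ~S2_XN;
	return rebuilt;
}

int main(void)
{
	s2_entry old = 0;			/* old ROX entry: XN clear */
	s2_entry rebuilt = S2_RDWR | S2_XN;	/* fresh RW entry: XN set */

	rebuilt = handle_perm_fault(old, rebuilt);
	printf("new entry is %sexecutable\n",
	       entry_is_exec(rebuilt) ? "" : "not ");	/* "executable" */
	return 0;
}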