From: Punit Agrawal <punit.agrawal@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal <punit.agrawal@arm.com>,
	marc.zyngier@arm.com, will.deacon@arm.com,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Christoffer Dall <christoffer.dall@arm.com>
Subject: [PATCH v7 3/9] KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault
Date: Mon, 24 Sep 2018 18:45:46 +0100	[thread overview]
Message-ID: <20180924174552.8387-4-punit.agrawal@arm.com> (raw)
In-Reply-To: <20180924174552.8387-1-punit.agrawal@arm.com>

The Stage 2 fault handler marks a page as executable if it is handling
an execution fault, or if it is handling a permission fault on an entry
that was already executable, in which case the executable bit needs to
be preserved.

The logic to decide whether the page should be marked executable is
duplicated for PMD and PTE entries. To avoid creating yet another copy
when support for PUD hugepages is introduced, refactor the code to
share the checks needed to mark a page table entry as executable.
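Concretely, the duplicated if/else blocks collapse into a single
needs_exec check computed once, before the code branches into the
PMD/PTE paths (a simplified sketch of the hunk below, shown for the
PMD path only):

	/* Grant exec on an exec fault, or preserve it on a permission fault. */
	needs_exec = exec_fault ||
		     (fault_status == FSC_PERM && stage2_is_exec(kvm, fault_ipa));

	if (needs_exec)
		new_pmd = kvm_s2pmd_mkexec(new_pmd);	/* kvm_s2pte_mkexec() for PTEs */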

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 5b76ee204000..ec64d21c6571 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1481,7 +1481,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  unsigned long fault_status)
 {
 	int ret;
-	bool write_fault, exec_fault, writable, hugetlb = false, force_pte = false;
+	bool write_fault, writable, hugetlb = false, force_pte = false;
+	bool exec_fault, needs_exec;
 	unsigned long mmu_seq;
 	gfn_t gfn = fault_ipa >> PAGE_SHIFT;
 	struct kvm *kvm = vcpu->kvm;
@@ -1606,19 +1607,25 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault)
 		invalidate_icache_guest_page(pfn, vma_pagesize);
 
+	/*
+	 * If we took an execution fault we have made the
+	 * icache/dcache coherent above and should now let the s2
+	 * mapping be executable.
+	 *
+	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
+	 * execute permissions, and we preserve whatever we have.
+	 */
+	needs_exec = exec_fault ||
+		(fault_status == FSC_PERM && stage2_is_exec(kvm, fault_ipa));
+
 	if (hugetlb && vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
 		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
 
-		if (exec_fault) {
+		if (needs_exec)
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pmd = kvm_s2pmd_mkexec(new_pmd);
-		}
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
@@ -1629,13 +1636,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (exec_fault) {
+		if (needs_exec)
 			new_pte = kvm_s2pte_mkexec(new_pte);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pte = kvm_s2pte_mkexec(new_pte);
-		}
 
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}
-- 
2.18.0


Thread overview: 31+ messages
2018-09-24 17:45 [PATCH v7 0/9] KVM: Support PUD hugepages at stage 2 Punit Agrawal
2018-09-24 17:45 ` [PATCH v7 1/9] KVM: arm/arm64: Ensure only THP is candidate for adjustment Punit Agrawal
2018-09-24 20:54   ` Suzuki K Poulose
2018-09-24 17:45 ` [PATCH v7 2/9] KVM: arm/arm64: Share common code in user_mem_abort() Punit Agrawal
2018-09-24 17:45 ` [PATCH v7 3/9] KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault Punit Agrawal [this message]
2018-09-24 17:45 ` [PATCH v7 4/9] KVM: arm/arm64: Introduce helpers to manipulate page table entries Punit Agrawal
2018-09-24 17:45 ` [PATCH v7 5/9] KVM: arm64: Support dirty page tracking for PUD hugepages Punit Agrawal
2018-09-24 17:45 ` [PATCH v7 6/9] KVM: arm64: Support PUD hugepage in stage2_is_exec() Punit Agrawal
2018-09-24 17:45 ` [PATCH v7 7/9] KVM: arm64: Support handling access faults for PUD hugepages Punit Agrawal
2018-09-24 17:45 ` [PATCH v7 8/9] KVM: arm64: Update age handlers to support " Punit Agrawal
2018-09-24 17:45 ` [PATCH v7 9/9] KVM: arm64: Add support for creating PUD hugepages at stage 2 Punit Agrawal
2018-09-24 21:21   ` Suzuki K Poulose
2018-09-25  9:21     ` Punit Agrawal
2018-09-25 14:37   ` [PATCH v7.1 " Punit Agrawal
2018-10-01 14:00     ` Punit Agrawal
