* [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2
@ 2018-07-05 14:08 Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort() Punit Agrawal
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose

This series is an update to the PUD hugepage support previously posted
at [0][1][2][3]. The patchset adds support for PUD hugepages at stage
2, which is useful on cores that support large TLB mappings (e.g., 1GB
block mappings with a 4K granule).

There are three new patches to support PUD hugepages for execute
permission faults, access faults and aging of PUD page table entries
(patches 4-6). This addresses Suzuki's feedback on the previous
version.

Also, live migration didn't work with earlier versions. This has now
been addressed by updating patches 1 & 7 to ensure that hugepages are
dissolved correctly when dirty logging is enabled.

Support is added to code that is shared between arm and arm64. Dummy
helpers for arm are provided as the port does not support PUD hugepage
sizes.

The patches have been tested on an A57-based system. The patchset is
based on v4.18-rc3. There are a few conflicts with the support for 52
bit IPA [4] due to a change in the number of parameters for
stage2_pmd_offset().

Thanks,
Punit

v3 -> v4:
* Patches 1 and 7 - Don't install hugepage entries if dirty logging is enabled
* Patch 4-5 - Add PUD hugepage support for exec and access faults
* Patch 6 - PUD hugepage support for aging page table entries

v2 -> v3:
* Update vma_pagesize directly if THP [1/4]. Previously this was done
  indirectly via hugetlb
* Added review tag [4/4]

v1 -> v2:
* Create helper to check if the page should have exec permission [1/4]
* Fix broken condition to detect THP hugepage [1/4]
* Fix incorrect hunk resulting from a rebase [4/4]

[0] https://lkml.org/lkml/2018/5/14/907
[1] https://www.spinics.net/lists/arm-kernel/msg628053.html
[2] https://lkml.org/lkml/2018/4/20/566
[3] https://lkml.org/lkml/2018/5/1/133
[4] https://www.spinics.net/lists/kvm/msg171065.html

Punit Agrawal (7):
  KVM: arm/arm64: Share common code in user_mem_abort()
  KVM: arm/arm64: Introduce helpers to manipulate page table entries
  KVM: arm64: Support dirty page tracking for PUD hugepages
  KVM: arm64: Support PUD hugepage in stage2_is_exec()
  KVM: arm64: Support handling access faults for PUD hugepages
  KVM: arm64: Update age handlers to support PUD hugepages
  KVM: arm64: Add support for creating PUD hugepages at stage 2

 arch/arm/include/asm/kvm_mmu.h         |  60 +++++++
 arch/arm64/include/asm/kvm_mmu.h       |  47 ++++++
 arch/arm64/include/asm/pgtable-hwdef.h |   4 +
 arch/arm64/include/asm/pgtable.h       |   9 ++
 virt/kvm/arm/mmu.c                     | 214 ++++++++++++++++++++-----
 5 files changed, 291 insertions(+), 43 deletions(-)

-- 
2.17.1



* [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort()
  2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
@ 2018-07-05 14:08 ` Punit Agrawal
  2018-07-05 16:24   ` Marc Zyngier
  2018-07-05 14:08 ` [PATCH v4 2/7] KVM: arm/arm64: Introduce helpers to manipulate page table entries Punit Agrawal
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different pagesizes.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 68 +++++++++++++++++++++++++++-------------------
 1 file changed, 40 insertions(+), 28 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 1d90d79706bd..dd14cc36c51c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1398,6 +1398,21 @@ static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
 	__invalidate_icache_guest_page(pfn, size);
 }
 
+static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
+			       bool exec_fault, unsigned long fault_status)
+{
+	/*
+	 * If we took an execution fault we will have made the
+	 * icache/dcache coherent and should now let the s2 mapping be
+	 * executable.
+	 *
+	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
+	 * execute permissions, and we preserve whatever we have.
+	 */
+	return exec_fault ||
+		(fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
+}
+
 static void kvm_send_hwpoison_signal(unsigned long address,
 				     struct vm_area_struct *vma)
 {
@@ -1431,7 +1446,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long flags = 0;
+	unsigned long vma_pagesize, flags = 0;
 
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
@@ -1451,7 +1466,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
+	vma_pagesize = vma_kernel_pagesize(vma);
+	if (vma_pagesize == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
@@ -1520,28 +1536,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte)
+	if (!hugetlb && !force_pte) {
+		/*
+		 * Only PMD_SIZE transparent hugepages(THP) are
+		 * currently supported. This code will need to be
+		 * updated to support other THP sizes.
+		 */
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+		if (hugetlb)
+			vma_pagesize = PMD_SIZE;
+	}
+
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
 
-	if (hugetlb) {
+	if (fault_status != FSC_PERM)
+		clean_dcache_guest_page(pfn, vma_pagesize);
+
+	if (exec_fault)
+		invalidate_icache_guest_page(pfn, vma_pagesize);
+
+	if (hugetlb && vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
-		if (writable) {
+		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
-			kvm_set_pfn_dirty(pfn);
-		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PMD_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-			invalidate_icache_guest_page(pfn, PMD_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pmd = kvm_s2pmd_mkexec(new_pmd);
-		}
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
@@ -1549,21 +1571,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PAGE_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pte = kvm_s2pte_mkexec(new_pte);
-			invalidate_icache_guest_page(pfn, PAGE_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pte = kvm_s2pte_mkexec(new_pte);
-		}
 
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}
-- 
2.17.1



* [PATCH v4 2/7] KVM: arm/arm64: Introduce helpers to manipulate page table entries
  2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort() Punit Agrawal
@ 2018-07-05 14:08 ` Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 3/7] KVM: arm64: Support dirty page tracking for PUD hugepages Punit Agrawal
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

Introduce helpers to abstract architectural handling of the conversion
of pfn to page table entries and marking a PMD page table entry as a
block entry.

The helpers are introduced in preparation for supporting PUD hugepages
at stage 2 - which are supported on arm64 but do not exist on arm.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 5 +++++
 arch/arm64/include/asm/kvm_mmu.h | 5 +++++
 virt/kvm/arm/mmu.c               | 8 +++++---
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 8553d68b7c8a..d095c2d0b284 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -75,6 +75,11 @@ phys_addr_t kvm_get_idmap_vector(void);
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);
 
+#define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
+#define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
+
+#define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index fb9a7127bb75..689def9bb9d5 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -172,6 +172,11 @@ void kvm_clear_hyp_idmap(void);
 #define	kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
 #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
 
+#define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
+#define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
+
+#define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
+
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
 	pte_val(pte) |= PTE_S2_RDWR;
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index dd14cc36c51c..040cd0bce5e1 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1557,8 +1557,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		invalidate_icache_guest_page(pfn, vma_pagesize);
 
 	if (hugetlb && vma_pagesize == PMD_SIZE) {
-		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
-		new_pmd = pmd_mkhuge(new_pmd);
+		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
+
+		new_pmd = kvm_pmd_mkhuge(new_pmd);
+
 		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
 
@@ -1567,7 +1569,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
-		pte_t new_pte = pfn_pte(pfn, mem_type);
+		pte_t new_pte = kvm_pfn_pte(pfn, mem_type);
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-- 
2.17.1



* [PATCH v4 3/7] KVM: arm64: Support dirty page tracking for PUD hugepages
  2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort() Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 2/7] KVM: arm/arm64: Introduce helpers to manipulate page table entries Punit Agrawal
@ 2018-07-05 14:08 ` Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec() Punit Agrawal
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest tables is used to track dirty pages when migrating
VMs.

Also, provide trivial implementations of required kvm_s2pud_* helpers
to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 16 ++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 11 +++++++----
 3 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index d095c2d0b284..c23722f75d5c 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -80,6 +80,22 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 689def9bb9d5..84051930ddfe 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -239,6 +239,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 040cd0bce5e1..db04b18218c1 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1288,9 +1288,12 @@ static void  stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
 	do {
 		next = stage2_pud_addr_end(addr, end);
 		if (!stage2_pud_none(*pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(*pud));
-			stage2_wp_pmds(pud, addr, next);
+			if (stage2_pud_huge(*pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
@@ -1333,7 +1336,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  *
  * Called to start logging dirty pages after memory region
  * KVM_MEM_LOG_DIRTY_PAGES operation is called. After this function returns
- * all present PMD and PTEs are write protected in the memory region.
+ * all present PUD, PMD and PTEs are write protected in the memory region.
  * Afterwards read of dirty page log can be called.
  *
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
-- 
2.17.1



* [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec()
  2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
                   ` (2 preceding siblings ...)
  2018-07-05 14:08 ` [PATCH v4 3/7] KVM: arm64: Support dirty page tracking for PUD hugepages Punit Agrawal
@ 2018-07-05 14:08 ` Punit Agrawal
  2018-07-05 16:48   ` Suzuki K Poulose
  2018-07-05 14:08 ` [PATCH v4 5/7] KVM: arm64: Support handling access faults for PUD hugepages Punit Agrawal
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

In preparation for creating PUD hugepages at stage 2, add support for
detecting execute permissions on PUD page table entries. Faults due to
lack of execute permissions on page table entries are used to perform
i-cache invalidation on first execute.

Provide trivial implementations of arm32 helpers to allow sharing of
code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h         |  6 ++++++
 arch/arm64/include/asm/kvm_mmu.h       |  5 +++++
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 virt/kvm/arm/mmu.c                     | 10 +++++++++-
 4 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index c23722f75d5c..d05c8986e495 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -96,6 +96,12 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
 }
 
 
+static inline bool kvm_s2pud_exec(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 84051930ddfe..15bc1be8f82f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -249,6 +249,11 @@ static inline bool kvm_s2pud_readonly(pud_t *pudp)
 	return kvm_s2pte_readonly((pte_t *)pudp);
 }
 
+static inline bool kvm_s2pud_exec(pud_t *pudp)
+{
+	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index fd208eac9f2a..10ae592b78b8 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -193,6 +193,8 @@
 #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
 
+#define PUD_S2_XN		(_AT(pudval_t, 2) << 53)  /* XN[1:0] */
+
 /*
  * Memory Attribute override for Stage-2 (MemAttr[3:0])
  */
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index db04b18218c1..ccdea0edabb3 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1040,10 +1040,18 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 
 static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
 {
+	pud_t *pudp;
 	pmd_t *pmdp;
 	pte_t *ptep;
 
-	pmdp = stage2_get_pmd(kvm, NULL, addr);
+	pudp = stage2_get_pud(kvm, NULL, addr);
+	if (!pudp || pud_none(*pudp) || !pud_present(*pudp))
+		return false;
+
+	if (pud_huge(*pudp))
+		return kvm_s2pud_exec(pudp);
+
+	pmdp = stage2_pmd_offset(pudp, addr);
 	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
 		return false;
 
-- 
2.17.1



* [PATCH v4 5/7] KVM: arm64: Support handling access faults for PUD hugepages
  2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
                   ` (3 preceding siblings ...)
  2018-07-05 14:08 ` [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec() Punit Agrawal
@ 2018-07-05 14:08 ` Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 6/7] KVM: arm64: Update age handlers to support " Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2 Punit Agrawal
  6 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

In preparation for creating larger hugepages at Stage 2, extend the
access fault handling at Stage 2 to support PUD hugepages when
encountered.

Provide trivial helpers for arm32 to allow sharing of code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |  8 ++++++++
 arch/arm64/include/asm/kvm_mmu.h |  7 +++++++
 arch/arm64/include/asm/pgtable.h |  6 ++++++
 virt/kvm/arm/mmu.c               | 14 +++++++++++++-
 4 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index d05c8986e495..a4298d429efc 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -78,6 +78,8 @@ void kvm_clear_hyp_idmap(void);
 #define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
 
+#define kvm_pud_pfn(pud)	(((pud_val(pud) & PUD_MASK) & PHYS_MASK) >> PAGE_SHIFT)
+
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
 /*
@@ -102,6 +104,12 @@ static inline bool kvm_s2pud_exec(pud_t *pud)
 	return false;
 }
 
+static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
+{
+	BUG();
+	return pud;
+}
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 15bc1be8f82f..4d2780c588b0 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -175,6 +175,8 @@ void kvm_clear_hyp_idmap(void);
 #define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
 
+#define kvm_pud_pfn(pud)		pud_pfn(pud)
+
 #define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
 
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
@@ -254,6 +256,11 @@ static inline bool kvm_s2pud_exec(pud_t *pudp)
 	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
 }
 
+static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
+{
+	return pud_mkyoung(pud);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 1bdeca8918a6..a64a5c35beb1 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -314,6 +314,11 @@ static inline pte_t pud_pte(pud_t pud)
 	return __pte(pud_val(pud));
 }
 
+static inline pud_t pte_pud(pte_t pte)
+{
+	return __pud(pte_val(pte));
+}
+
 static inline pmd_t pud_pmd(pud_t pud)
 {
 	return __pmd(pud_val(pud));
@@ -380,6 +385,7 @@ static inline int pmd_protnone(pmd_t pmd)
 #define pfn_pmd(pfn,prot)	__pmd(__phys_to_pmd_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 #define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
 
+#define pud_mkyoung(pud)	pte_pud(pte_mkyoung(pud_pte(pud)))
 #define pud_write(pud)		pte_write(pud_pte(pud))
 
 #define __pud_to_phys(pud)	__pte_to_phys(pud_pte(pud))
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index ccdea0edabb3..94a91bcdd152 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1609,6 +1609,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
  */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
 	kvm_pfn_t pfn;
@@ -1618,7 +1619,18 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 
 	spin_lock(&vcpu->kvm->mmu_lock);
 
-	pmd = stage2_get_pmd(vcpu->kvm, NULL, fault_ipa);
+	pud = stage2_get_pud(vcpu->kvm, NULL, fault_ipa);
+	if (!pud || pud_none(*pud))
+		goto out;		/* Nothing there */
+
+	if (pud_huge(*pud)) {		/* HugeTLB */
+		*pud = kvm_s2pud_mkyoung(*pud);
+		pfn = kvm_pud_pfn(*pud);
+		pfn_valid = true;
+		goto out;
+	}
+
+	pmd = stage2_pmd_offset(pud, fault_ipa);
 	if (!pmd || pmd_none(*pmd))	/* Nothing there */
 		goto out;
 
-- 
2.17.1



* [PATCH v4 6/7] KVM: arm64: Update age handlers to support PUD hugepages
  2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
                   ` (4 preceding siblings ...)
  2018-07-05 14:08 ` [PATCH v4 5/7] KVM: arm64: Support handling access faults for PUD hugepages Punit Agrawal
@ 2018-07-05 14:08 ` Punit Agrawal
  2018-07-05 14:08 ` [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2 Punit Agrawal
  6 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

In preparation for creating larger hugepages at Stage 2, add support
to the age handling notifiers for PUD hugepages when encountered.

Provide trivial helpers for arm32 to allow sharing code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |  6 ++++++
 arch/arm64/include/asm/kvm_mmu.h |  5 +++++
 arch/arm64/include/asm/pgtable.h |  1 +
 virt/kvm/arm/mmu.c               | 29 +++++++++++++++++++++++++----
 4 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index a4298d429efc..8e1e8aee229e 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -110,6 +110,12 @@ static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
 	return pud;
 }
 
+static inline bool kvm_s2pud_young(pud_t pud)
+{
+	BUG();
+	return false;
+}
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 4d2780c588b0..c542052fb199 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -261,6 +261,11 @@ static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
 	return pud_mkyoung(pud);
 }
 
+static inline bool kvm_s2pud_young(pud_t pud)
+{
+	return pud_young(pud);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index a64a5c35beb1..4d9476e420d9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -385,6 +385,7 @@ static inline int pmd_protnone(pmd_t pmd)
 #define pfn_pmd(pfn,prot)	__pmd(__phys_to_pmd_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 #define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
 
+#define pud_young(pud)		pte_young(pud_pte(pud))
 #define pud_mkyoung(pud)	pte_pud(pte_mkyoung(pud_pte(pud)))
 #define pud_write(pud)		pte_write(pud_pte(pud))
 
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 94a91bcdd152..0c04c64e858c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1141,6 +1141,11 @@ static int stage2_pmdp_test_and_clear_young(pmd_t *pmd)
 	return stage2_ptep_test_and_clear_young((pte_t *)pmd);
 }
 
+static int stage2_pudp_test_and_clear_young(pud_t *pud)
+{
+	return stage2_ptep_test_and_clear_young((pte_t *)pud);
+}
+
 /**
  * kvm_phys_addr_ioremap - map a device range to guest IPA
  *
@@ -1860,11 +1865,19 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 
 static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 {
+	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
 
-	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE);
-	pmd = stage2_get_pmd(kvm, NULL, gpa);
+	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
+	pud = stage2_get_pud(kvm, NULL, gpa);
+	if (!pud || pud_none(*pud))	/* Nothing there */
+		return 0;
+
+	if (pud_huge(*pud))		/* HugeTLB */
+		return stage2_pudp_test_and_clear_young(pud);
+
+	pmd = stage2_pmd_offset(pud, gpa);
 	if (!pmd || pmd_none(*pmd))	/* Nothing there */
 		return 0;
 
@@ -1880,11 +1893,19 @@ static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 
 static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 {
+	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
 
-	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE);
-	pmd = stage2_get_pmd(kvm, NULL, gpa);
+	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
+	pud = stage2_get_pud(kvm, NULL, gpa);
+	if (!pud || pud_none(*pud))	/* Nothing there */
+		return 0;
+
+	if (pud_huge(*pud))		/* HugeTLB */
+		return kvm_s2pud_young(*pud);
+
+	pmd = stage2_pmd_offset(pud, gpa);
 	if (!pmd || pmd_none(*pmd))	/* Nothing there */
 		return 0;
 
-- 
2.17.1



* [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2
  2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
                   ` (5 preceding siblings ...)
  2018-07-05 14:08 ` [PATCH v4 6/7] KVM: arm64: Update age handlers to support " Punit Agrawal
@ 2018-07-05 14:08 ` Punit Agrawal
  2018-07-05 17:11   ` Suzuki K Poulose
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 14:08 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

KVM only supports PMD hugepages at stage 2. Now that the various page
handling routines are updated, extend the stage 2 fault handling to
map in PUD hugepages.

Addition of PUD hugepage support enables additional page sizes (e.g.,
1G with 4K granule) which can be useful on cores that support mapping
larger block sizes in the TLB entries.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h         | 19 +++++++
 arch/arm64/include/asm/kvm_mmu.h       | 15 +++++
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +
 arch/arm64/include/asm/pgtable.h       |  2 +
 virt/kvm/arm/mmu.c                     | 78 ++++++++++++++++++++++++--
 5 files changed, 112 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 8e1e8aee229e..787baf9ec994 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -77,10 +77,13 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
+#define kvm_pfn_pud(pfn, prot)	(__pud(0))
 
 #define kvm_pud_pfn(pud)	(((pud_val(pud) & PUD_MASK) & PHYS_MASK) >> PAGE_SHIFT)
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
+/* No support for pud hugepages */
+#define kvm_pud_mkhuge(pud)	(pud)
 
 /*
  * The following kvm_*pud*() functions are provided strictly to allow
@@ -97,6 +100,22 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
 	return false;
 }
 
+static inline void kvm_set_pud(pud_t *pud, pud_t new_pud)
+{
+	BUG();
+}
+
+static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
+{
+	BUG();
+	return pud;
+}
+
+static inline pud_t kvm_s2pud_mkexec(pud_t pud)
+{
+	BUG();
+	return pud;
+}
 
 static inline bool kvm_s2pud_exec(pud_t *pud)
 {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index c542052fb199..dd8a23159463 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -171,13 +171,16 @@ void kvm_clear_hyp_idmap(void);
 
 #define	kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
 #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
+#define kvm_set_pud(pudp, pud)		set_pud(pudp, pud)
 
 #define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
+#define kvm_pfn_pud(pfn, prot)		pfn_pud(pfn, prot)
 
 #define kvm_pud_pfn(pud)		pud_pfn(pud)
 
 #define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
+#define kvm_pud_mkhuge(pud)		pud_mkhuge(pud)
 
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
@@ -191,6 +194,12 @@ static inline pmd_t kvm_s2pmd_mkwrite(pmd_t pmd)
 	return pmd;
 }
 
+static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
+{
+	pud_val(pud) |= PUD_S2_RDWR;
+	return pud;
+}
+
 static inline pte_t kvm_s2pte_mkexec(pte_t pte)
 {
 	pte_val(pte) &= ~PTE_S2_XN;
@@ -203,6 +212,12 @@ static inline pmd_t kvm_s2pmd_mkexec(pmd_t pmd)
 	return pmd;
 }
 
+static inline pud_t kvm_s2pud_mkexec(pud_t pud)
+{
+	pud_val(pud) &= ~PUD_S2_XN;
+	return pud;
+}
+
 static inline void kvm_set_s2pte_readonly(pte_t *ptep)
 {
 	pteval_t old_pteval, pteval;
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 10ae592b78b8..e327665e94d1 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -193,6 +193,8 @@
 #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
 
+#define PUD_S2_RDONLY		(_AT(pudval_t, 1) << 6)   /* HAP[2:1] */
+#define PUD_S2_RDWR		(_AT(pudval_t, 3) << 6)   /* HAP[2:1] */
 #define PUD_S2_XN		(_AT(pudval_t, 2) << 53)  /* XN[1:0] */
 
 /*
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4d9476e420d9..0afc34f94ff5 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -389,6 +389,8 @@ static inline int pmd_protnone(pmd_t pmd)
 #define pud_mkyoung(pud)	pte_pud(pte_mkyoung(pud_pte(pud)))
 #define pud_write(pud)		pte_write(pud_pte(pud))
 
+#define pud_mkhuge(pud)		(__pud(pud_val(pud) & ~PUD_TABLE_BIT))
+
 #define __pud_to_phys(pud)	__pte_to_phys(pud_pte(pud))
 #define __phys_to_pud_val(phys)	__phys_to_pte_val(phys)
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 0c04c64e858c..5912210e94d9 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -116,6 +116,25 @@ static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
 	put_page(virt_to_page(pmd));
 }
 
+/**
+ * stage2_dissolve_pud() - clear and flush huge PUD entry
+ * @kvm:	pointer to kvm structure.
+ * @addr:	IPA
+ * @pud:	pud pointer for IPA
+ *
+ * Function clears a PUD entry, flushes addr 1st and 2nd stage TLBs. Marks all
+ * pages in the range dirty.
+ */
+static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pud)
+{
+	if (!pud_huge(*pud))
+		return;
+
+	pud_clear(pud);
+	kvm_tlb_flush_vmid_ipa(kvm, addr);
+	put_page(virt_to_page(pud));
+}
+
 static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
 				  int min, int max)
 {
@@ -993,7 +1012,7 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	pmd_t *pmd;
 
 	pud = stage2_get_pud(kvm, cache, addr);
-	if (!pud)
+	if (!pud || pud_huge(*pud))
 		return NULL;
 
 	if (stage2_pud_none(*pud)) {
@@ -1038,6 +1057,26 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	return 0;
 }
 
+static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+			       phys_addr_t addr, const pud_t *new_pud)
+{
+	pud_t *pud, old_pud;
+
+	pud = stage2_get_pud(kvm, cache, addr);
+	VM_BUG_ON(!pud);
+
+	old_pud = *pud;
+	if (pud_present(old_pud)) {
+		pud_clear(pud);
+		kvm_tlb_flush_vmid_ipa(kvm, addr);
+	} else {
+		get_page(virt_to_page(pud));
+	}
+
+	kvm_set_pud(pud, *new_pud);
+	return 0;
+}
+
 static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
 {
 	pud_t *pudp;
@@ -1069,6 +1108,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 			  phys_addr_t addr, const pte_t *new_pte,
 			  unsigned long flags)
 {
+	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte, old_pte;
 	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
@@ -1077,6 +1117,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 	VM_BUG_ON(logging_active && !cache);
 
 	/* Create stage-2 page table mapping - Levels 0 and 1 */
+	pud = stage2_get_pud(kvm, cache, addr);
+	if (!pud) {
+		/*
+		 * Ignore calls from kvm_set_spte_hva for unallocated
+		 * address ranges.
+		 */
+		return 0;
+	}
+
+	/*
+	 * While dirty page logging - dissolve huge PUD, then continue
+	 * on to allocate page.
+	 */
+	if (logging_active)
+		stage2_dissolve_pud(kvm, addr, pud);
+
 	pmd = stage2_get_pmd(kvm, cache, addr);
 	if (!pmd) {
 		/*
@@ -1483,9 +1539,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	vma_pagesize = vma_kernel_pagesize(vma);
-	if (vma_pagesize == PMD_SIZE && !logging_active) {
+	if ((vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) &&
+	    !logging_active) {
+		struct hstate *h = hstate_vma(vma);
+
 		hugetlb = true;
-		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
+		gfn = (fault_ipa & huge_page_mask(h)) >> PAGE_SHIFT;
 	} else {
 		/*
 		 * Pages belonging to memslots that don't have the same
@@ -1572,7 +1631,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault)
 		invalidate_icache_guest_page(pfn, vma_pagesize);
 
-	if (hugetlb && vma_pagesize == PMD_SIZE) {
+	if (hugetlb && vma_pagesize == PUD_SIZE) {
+		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
+
+		new_pud = kvm_pud_mkhuge(new_pud);
+		if (writable)
+			new_pud = kvm_s2pud_mkwrite(new_pud);
+
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
+			new_pud = kvm_s2pud_mkexec(new_pud);
+
+		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
+	} else if (hugetlb && vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
 
 		new_pmd = kvm_pmd_mkhuge(new_pmd);
-- 
2.17.1



* Re: [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort()
  2018-07-05 14:08 ` [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort() Punit Agrawal
@ 2018-07-05 16:24   ` Marc Zyngier
  2018-07-05 16:57     ` Punit Agrawal
  0 siblings, 1 reply; 15+ messages in thread
From: Marc Zyngier @ 2018-07-05 16:24 UTC (permalink / raw)
  To: Punit Agrawal, kvmarm
  Cc: linux-arm-kernel, christoffer.dall, linux-kernel, suzuki.poulose

Hi Punit,

On 05/07/18 15:08, Punit Agrawal wrote:
> The code for operations such as marking the pfn as dirty, and
> dcache/icache maintenance during stage 2 fault handling is duplicated
> between normal pages and PMD hugepages.
> 
> Instead of creating another copy of the operations when we introduce
> PUD hugepages, let's share them across the different pagesizes.
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  virt/kvm/arm/mmu.c | 68 +++++++++++++++++++++++++++-------------------
>  1 file changed, 40 insertions(+), 28 deletions(-)
> 
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 1d90d79706bd..dd14cc36c51c 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1398,6 +1398,21 @@ static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
>  	__invalidate_icache_guest_page(pfn, size);
>  }
>  
> +static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
> +			       bool exec_fault, unsigned long fault_status)

I find this "should exec" very confusing.

> +{
> +	/*
> +	 * If we took an execution fault we will have made the
> +	 * icache/dcache coherent and should now let the s2 mapping be
> +	 * executable.
> +	 *
> +	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
> +	 * execute permissions, and we preserve whatever we have.
> +	 */
> +	return exec_fault ||
> +		(fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
> +}
> +
>  static void kvm_send_hwpoison_signal(unsigned long address,
>  				     struct vm_area_struct *vma)
>  {
> @@ -1431,7 +1446,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	kvm_pfn_t pfn;
>  	pgprot_t mem_type = PAGE_S2;
>  	bool logging_active = memslot_is_logging(memslot);
> -	unsigned long flags = 0;
> +	unsigned long vma_pagesize, flags = 0;
>  
>  	write_fault = kvm_is_write_fault(vcpu);
>  	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
> @@ -1451,7 +1466,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>  
> -	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
> +	vma_pagesize = vma_kernel_pagesize(vma);
> +	if (vma_pagesize == PMD_SIZE && !logging_active) {
>  		hugetlb = true;
>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>  	} else {
> @@ -1520,28 +1536,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (mmu_notifier_retry(kvm, mmu_seq))
>  		goto out_unlock;
>  
> -	if (!hugetlb && !force_pte)
> +	if (!hugetlb && !force_pte) {
> +		/*
> +		 * Only PMD_SIZE transparent hugepages(THP) are
> +		 * currently supported. This code will need to be
> +		 * updated to support other THP sizes.
> +		 */
>  		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
> +		if (hugetlb)
> +			vma_pagesize = PMD_SIZE;
> +	}
> +
> +	if (writable)
> +		kvm_set_pfn_dirty(pfn);
>  
> -	if (hugetlb) {
> +	if (fault_status != FSC_PERM)
> +		clean_dcache_guest_page(pfn, vma_pagesize);
> +
> +	if (exec_fault)
> +		invalidate_icache_guest_page(pfn, vma_pagesize);
> +
> +	if (hugetlb && vma_pagesize == PMD_SIZE) {
>  		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
>  		new_pmd = pmd_mkhuge(new_pmd);
> -		if (writable) {
> +		if (writable)
>  			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
> -			kvm_set_pfn_dirty(pfn);
> -		}
>  
> -		if (fault_status != FSC_PERM)
> -			clean_dcache_guest_page(pfn, PMD_SIZE);
> -
> -		if (exec_fault) {
> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>  			new_pmd = kvm_s2pmd_mkexec(new_pmd);

OK, I find this absolutely horrid... ;-)

The rest of the function deals with discrete flags, and all of a sudden
we have a function call with a bunch of seemingly unrelated parameters.
And you are repeating it for each vma_pagesize...

How about something like:

	bool needs_exec;

	[...]

	needs_exec = exec_fault || (fault_status == FSC_PERM &&
 				    stage2_is_exec(kvm, fault_ipa));

And then you just check needs_exec to update the pte/pmd. And you drop
this helper.

> -			invalidate_icache_guest_page(pfn, PMD_SIZE);
> -		} else if (fault_status == FSC_PERM) {
> -			/* Preserve execute if XN was already cleared */
> -			if (stage2_is_exec(kvm, fault_ipa))
> -				new_pmd = kvm_s2pmd_mkexec(new_pmd);
> -		}
>  
>  		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
>  	} else {
> @@ -1549,21 +1571,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  
>  		if (writable) {
>  			new_pte = kvm_s2pte_mkwrite(new_pte);
> -			kvm_set_pfn_dirty(pfn);
>  			mark_page_dirty(kvm, gfn);
>  		}
>  
> -		if (fault_status != FSC_PERM)
> -			clean_dcache_guest_page(pfn, PAGE_SIZE);
> -
> -		if (exec_fault) {
> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>  			new_pte = kvm_s2pte_mkexec(new_pte);
> -			invalidate_icache_guest_page(pfn, PAGE_SIZE);
> -		} else if (fault_status == FSC_PERM) {
> -			/* Preserve execute if XN was already cleared */
> -			if (stage2_is_exec(kvm, fault_ipa))
> -				new_pte = kvm_s2pte_mkexec(new_pte);
> -		}
>  
>  		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
>  	}
> 

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec()
  2018-07-05 14:08 ` [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec() Punit Agrawal
@ 2018-07-05 16:48   ` Suzuki K Poulose
  2018-07-06 14:36     ` Punit Agrawal
  0 siblings, 1 reply; 15+ messages in thread
From: Suzuki K Poulose @ 2018-07-05 16:48 UTC (permalink / raw)
  To: Punit Agrawal, kvmarm
  Cc: linux-arm-kernel, marc.zyngier, christoffer.dall, linux-kernel,
	Russell King, Catalin Marinas, Will Deacon

Hi Punit,

On 05/07/18 15:08, Punit Agrawal wrote:
> In preparation for creating PUD hugepages at stage 2, add support for
> detecting execute permissions on PUD page table entries. Faults due to
> lack of execute permissions on page table entries is used to perform
> i-cache invalidation on first execute.
> 
> Provide trivial implementations of arm32 helpers to allow sharing of
> code.
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>   arch/arm/include/asm/kvm_mmu.h         |  6 ++++++
>   arch/arm64/include/asm/kvm_mmu.h       |  5 +++++
>   arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
>   virt/kvm/arm/mmu.c                     | 10 +++++++++-
>   4 files changed, 22 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index c23722f75d5c..d05c8986e495 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -96,6 +96,12 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
>   }
>   
>   
> +static inline bool kvm_s2pud_exec(pud_t *pud)
> +{
> +	BUG();
> +	return false;
> +}
> +
>   static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
>   {
>   	*pmd = new_pmd;
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 84051930ddfe..15bc1be8f82f 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -249,6 +249,11 @@ static inline bool kvm_s2pud_readonly(pud_t *pudp)
>   	return kvm_s2pte_readonly((pte_t *)pudp);
>   }
>   
> +static inline bool kvm_s2pud_exec(pud_t *pudp)
> +{
> +	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
> +}
> +
>   static inline bool kvm_page_empty(void *ptr)
>   {
>   	struct page *ptr_page = virt_to_page(ptr);
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index fd208eac9f2a..10ae592b78b8 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -193,6 +193,8 @@
>   #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
>   #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
>   
> +#define PUD_S2_XN		(_AT(pudval_t, 2) << 53)  /* XN[1:0] */
> +

The changes above look good to me. Please see below.

>   /*
>    * Memory Attribute override for Stage-2 (MemAttr[3:0])
>    */
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index db04b18218c1..ccdea0edabb3 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1040,10 +1040,18 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>   
>   static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
>   {
> +	pud_t *pudp;
>   	pmd_t *pmdp;
>   	pte_t *ptep;
>   
> -	pmdp = stage2_get_pmd(kvm, NULL, addr);
> +	pudp = stage2_get_pud(kvm, NULL, addr);
> +	if (!pudp || pud_none(*pudp) || !pud_present(*pudp))
> +		return false;
> +
> +	if (pud_huge(*pudp))
> +		return kvm_s2pud_exec(pudp);
> +
> +	pmdp = stage2_pmd_offset(pudp, addr);
>   	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
>   		return false;

I am wondering if we need a slightly better way to deal with this
kind of operation. We seem to duplicate the above operation (here and
in the following patches), i.e., finding the "leaf entry" for a given
address and following the checks one level up at a time.

So instead of doing stage2_get_pud() and walking down everywhere this
is needed, how about adding:

/* Returns true if the leaf entry is found and updates the relevant pointer */
found = stage2_get_leaf_entry(kvm, NULL, addr, &pudp, &pmdp, &ptep)

which could set the appropriate entry and we could check the result here.
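
A rough sketch of what I have in mind (completely untested; the name,
signature and exact walk are only for illustration):

static bool stage2_get_leaf_entry(struct kvm *kvm,
				  struct kvm_mmu_memory_cache *cache,
				  phys_addr_t addr, pud_t **pudpp,
				  pmd_t **pmdpp, pte_t **ptepp)
{
	pud_t *pudp;
	pmd_t *pmdp;
	pte_t *ptep;

	*pudpp = NULL;
	*pmdpp = NULL;
	*ptepp = NULL;

	pudp = stage2_get_pud(kvm, cache, addr);
	if (!pudp || pud_none(*pudp) || !pud_present(*pudp))
		return false;

	if (pud_huge(*pudp)) {
		/* Leaf entry is at the PUD level */
		*pudpp = pudp;
		return true;
	}

	pmdp = stage2_pmd_offset(pudp, addr);
	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
		return false;

	if (pmd_thp_or_huge(*pmdp)) {
		/* Leaf entry is at the PMD level */
		*pmdpp = pmdp;
		return true;
	}

	ptep = pte_offset_kernel(pmdp, addr);
	if (!ptep || pte_none(*ptep) || !pte_present(*ptep))
		return false;

	*ptepp = ptep;
	return true;
}

stage2_is_exec() and the other walkers would then reduce to a call to
this helper plus a check on whichever pointer came back non-NULL.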

Cheers
Suzuki


* Re: [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort()
  2018-07-05 16:24   ` Marc Zyngier
@ 2018-07-05 16:57     ` Punit Agrawal
  0 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2018-07-05 16:57 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, christoffer.dall, linux-kernel, suzuki.poulose

Marc Zyngier <marc.zyngier@arm.com> writes:

> Hi Punit,
>
> On 05/07/18 15:08, Punit Agrawal wrote:
>> The code for operations such as marking the pfn as dirty, and
>> dcache/icache maintenance during stage 2 fault handling is duplicated
>> between normal pages and PMD hugepages.
>> 
>> Instead of creating another copy of the operations when we introduce
>> PUD hugepages, let's share them across the different pagesizes.
>> 
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  virt/kvm/arm/mmu.c | 68 +++++++++++++++++++++++++++-------------------
>>  1 file changed, 40 insertions(+), 28 deletions(-)
>> 
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 1d90d79706bd..dd14cc36c51c 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1398,6 +1398,21 @@ static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
>>  	__invalidate_icache_guest_page(pfn, size);
>>  }
>>  
>> +static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
>> +			       bool exec_fault, unsigned long fault_status)
>
> I find this "should exec" very confusing.
>
>> +{
>> +	/*
>> +	 * If we took an execution fault we will have made the
>> +	 * icache/dcache coherent and should now let the s2 mapping be
>> +	 * executable.
>> +	 *
>> +	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
>> +	 * execute permissions, and we preserve whatever we have.
>> +	 */
>> +	return exec_fault ||
>> +		(fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
>> +}
>> +
>>  static void kvm_send_hwpoison_signal(unsigned long address,
>>  				     struct vm_area_struct *vma)
>>  {
>> @@ -1431,7 +1446,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	kvm_pfn_t pfn;
>>  	pgprot_t mem_type = PAGE_S2;
>>  	bool logging_active = memslot_is_logging(memslot);
>> -	unsigned long flags = 0;
>> +	unsigned long vma_pagesize, flags = 0;
>>  
>>  	write_fault = kvm_is_write_fault(vcpu);
>>  	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
>> @@ -1451,7 +1466,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  		return -EFAULT;
>>  	}
>>  
>> -	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
>> +	vma_pagesize = vma_kernel_pagesize(vma);
>> +	if (vma_pagesize == PMD_SIZE && !logging_active) {
>>  		hugetlb = true;
>>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>>  	} else {
>> @@ -1520,28 +1536,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	if (mmu_notifier_retry(kvm, mmu_seq))
>>  		goto out_unlock;
>>  
>> -	if (!hugetlb && !force_pte)
>> +	if (!hugetlb && !force_pte) {
>> +		/*
>> +		 * Only PMD_SIZE transparent hugepages(THP) are
>> +		 * currently supported. This code will need to be
>> +		 * updated to support other THP sizes.
>> +		 */
>>  		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
>> +		if (hugetlb)
>> +			vma_pagesize = PMD_SIZE;
>> +	}
>> +
>> +	if (writable)
>> +		kvm_set_pfn_dirty(pfn);
>>  
>> -	if (hugetlb) {
>> +	if (fault_status != FSC_PERM)
>> +		clean_dcache_guest_page(pfn, vma_pagesize);
>> +
>> +	if (exec_fault)
>> +		invalidate_icache_guest_page(pfn, vma_pagesize);
>> +
>> +	if (hugetlb && vma_pagesize == PMD_SIZE) {
>>  		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
>>  		new_pmd = pmd_mkhuge(new_pmd);
>> -		if (writable) {
>> +		if (writable)
>>  			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
>> -			kvm_set_pfn_dirty(pfn);
>> -		}
>>  
>> -		if (fault_status != FSC_PERM)
>> -			clean_dcache_guest_page(pfn, PMD_SIZE);
>> -
>> -		if (exec_fault) {
>> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>>  			new_pmd = kvm_s2pmd_mkexec(new_pmd);
>
> OK, I find this absolutely horrid... ;-)
>
> The rest of the function deals with discrete flags, and all of a sudden
> we have a function call with a bunch of seemingly unrelated parameters.
> And you are repeating it for each vma_pagesize...
>
> How about something like:
>
> 	bool needs_exec;
>
> 	[...]
>
> 	needs_exec = exec_fault || (fault_status == FSC_PERM &&
> 				    stage2_is_exec(kvm, fault_ipa));
>
> And then you just check needs_exec to update the pte/pmd. And you drop
> this helper.

That does look a lot better. I'll roll the change into the next version.

Thanks,
Punit

[...]



* Re: [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2
  2018-07-05 14:08 ` [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2 Punit Agrawal
@ 2018-07-05 17:11   ` Suzuki K Poulose
  2018-07-06 14:12     ` Punit Agrawal
  0 siblings, 1 reply; 15+ messages in thread
From: Suzuki K Poulose @ 2018-07-05 17:11 UTC (permalink / raw)
  To: Punit Agrawal, kvmarm
  Cc: linux-arm-kernel, marc.zyngier, christoffer.dall, linux-kernel,
	Russell King, Catalin Marinas, Will Deacon

Hi Punit,

On 05/07/18 15:08, Punit Agrawal wrote:
> KVM only supports PMD hugepages at stage 2. Now that the various page
> handling routines are updated, extend the stage 2 fault handling to
> map in PUD hugepages.
> 
> Addition of PUD hugepage support enables additional page sizes (e.g.,
> 1G with 4K granule) which can be useful on cores that support mapping
> larger block sizes in the TLB entries.
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 0c04c64e858c..5912210e94d9 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -116,6 +116,25 @@ static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
>   	put_page(virt_to_page(pmd));
>   }
>   
> +/**
> + * stage2_dissolve_pud() - clear and flush huge PUD entry
> + * @kvm:	pointer to kvm structure.
> + * @addr:	IPA
> + * @pud:	pud pointer for IPA
> + *
> + * Function clears a PUD entry, flushes addr 1st and 2nd stage TLBs. Marks all
> + * pages in the range dirty.
> + */
> +static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pud)
> +{
> +	if (!pud_huge(*pud))
> +		return;
> +
> +	pud_clear(pud);

You need to use the stage2_ accessors here. The stage2_dissolve_pmd() uses
"pmd_" helpers as the PTE entries (level 3) are always guaranteed to exist.


> +	kvm_tlb_flush_vmid_ipa(kvm, addr);
> +	put_page(virt_to_page(pud));
> +}
> +
>   static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
>   				  int min, int max)
>   {
> @@ -993,7 +1012,7 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
>   	pmd_t *pmd;
>   
>   	pud = stage2_get_pud(kvm, cache, addr);
> -	if (!pud)
> +	if (!pud || pud_huge(*pud))
>   		return NULL;

Same here.

>   
>   	if (stage2_pud_none(*pud)) {

Like this ^

> @@ -1038,6 +1057,26 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>   	return 0;
>   }
>   
> +static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> +			       phys_addr_t addr, const pud_t *new_pud)
> +{
> +	pud_t *pud, old_pud;
> +
> +	pud = stage2_get_pud(kvm, cache, addr);
> +	VM_BUG_ON(!pud);
> +
> +	old_pud = *pud;
> +	if (pud_present(old_pud)) {
> +		pud_clear(pud);
> +		kvm_tlb_flush_vmid_ipa(kvm, addr);

Same here.

> +	} else {
> +		get_page(virt_to_page(pud));
> +	}
> +
> +	kvm_set_pud(pud, *new_pud);
> +	return 0;
> +}
> +
>   static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
>   {
>   	pud_t *pudp;
> @@ -1069,6 +1108,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>   			  phys_addr_t addr, const pte_t *new_pte,
>   			  unsigned long flags)
>   {
> +	pud_t *pud;
>   	pmd_t *pmd;
>   	pte_t *pte, old_pte;
>   	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
> @@ -1077,6 +1117,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>   	VM_BUG_ON(logging_active && !cache);
>   
>   	/* Create stage-2 page table mapping - Levels 0 and 1 */
> +	pud = stage2_get_pud(kvm, cache, addr);
> +	if (!pud) {
> +		/*
> +		 * Ignore calls from kvm_set_spte_hva for unallocated
> +		 * address ranges.
> +		 */
> +		return 0;
> +	}
> +
> +	/*
> +	 * While dirty page logging - dissolve huge PUD, then continue
> +	 * on to allocate page.
> +	 */
> +	if (logging_active)
> +		stage2_dissolve_pud(kvm, addr, pud);
> +
>   	pmd = stage2_get_pmd(kvm, cache, addr);
>   	if (!pmd) {
>   		/*
> @@ -1483,9 +1539,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   	}
>   
>   	vma_pagesize = vma_kernel_pagesize(vma);
> -	if (vma_pagesize == PMD_SIZE && !logging_active) {
> +	if ((vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) &&
> +	    !logging_active) {
> +		struct hstate *h = hstate_vma(vma);
> +
>   		hugetlb = true;
> -		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
> +		gfn = (fault_ipa & huge_page_mask(h)) >> PAGE_SHIFT;
>   	} else {
>   		/*
>   		 * Pages belonging to memslots that don't have the same
> @@ -1572,7 +1631,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   	if (exec_fault)
>   		invalidate_icache_guest_page(pfn, vma_pagesize);
>   
> -	if (hugetlb && vma_pagesize == PMD_SIZE) {
> +	if (hugetlb && vma_pagesize == PUD_SIZE) {

I think we may need to check if the stage2 indeed has 3 levels of
tables to use stage2 PUD. Otherwise, fall back to PTE level mapping
or even PMD huge pages. Also, this cannot be triggered right now,
as we only get PUD hugepages with 4K and we are guaranteed to have
at least 3 levels with 40bit IPA. Maybe I can take care of it in
the Dynamic IPA series, when we run a guest with say 32bit IPA.
So for now, it is worth adding a comment here.

> +		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
> +
> +		new_pud = kvm_pud_mkhuge(new_pud);
> +		if (writable)
> +			new_pud = kvm_s2pud_mkwrite(new_pud);
> +
> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
> +			new_pud = kvm_s2pud_mkexec(new_pud);
> +
> +		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
> +	} else if (hugetlb && vma_pagesize == PMD_SIZE) {
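
The shape of the check being suggested might look roughly as below;
kvm_stage2_has_pud() is a hypothetical helper used purely for illustration (no
such API exists at this point), and the fallback policy is only sketched:

	/*
	 * Only create a PUD block mapping when stage 2 actually has a PUD
	 * level; otherwise fall through to the PMD/PTE paths below.
	 * kvm_stage2_has_pud() is hypothetical.
	 */
	if (hugetlb && vma_pagesize == PUD_SIZE && kvm_stage2_has_pud(kvm)) {
		/* ... PUD block mapping as in the hunk above ... */
	} else if (hugetlb && vma_pagesize == PMD_SIZE) {
		/* ... */
	}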

Suzuki

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2
  2018-07-05 17:11   ` Suzuki K Poulose
@ 2018-07-06 14:12     ` Punit Agrawal
  2018-07-06 14:23       ` Suzuki K Poulose
  0 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2018-07-06 14:12 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: kvmarm, linux-kernel, marc.zyngier, Catalin Marinas, Will Deacon,
	Russell King, linux-arm-kernel

Suzuki K Poulose <Suzuki.Poulose@arm.com> writes:

> Hi Punit,
>
> On 05/07/18 15:08, Punit Agrawal wrote:
>> KVM only supports PMD hugepages at stage 2. Now that the various page
>> handling routines are updated, extend the stage 2 fault handling to
>> map in PUD hugepages.
>>
>> Addition of PUD hugepage support enables additional page sizes (e.g.,
>> 1G with 4K granule) which can be useful on cores that support mapping
>> larger block sizes in the TLB entries.
>>
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Russell King <linux@armlinux.org.uk>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 0c04c64e858c..5912210e94d9 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -116,6 +116,25 @@ static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
>>   	put_page(virt_to_page(pmd));
>>   }
>>   +/**
>> + * stage2_dissolve_pud() - clear and flush huge PUD entry
>> + * @kvm:	pointer to kvm structure.
>> + * @addr:	IPA
>> + * @pud:	pud pointer for IPA
>> + *
>> + * Function clears a PUD entry, flushes addr 1st and 2nd stage TLBs. Marks all
>> + * pages in the range dirty.
>> + */
>> +static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pud)
>> +{
>> +	if (!pud_huge(*pud))
>> +		return;
>> +
>> +	pud_clear(pud);
>
> You need to use the stage2_ accessors here. The stage2_dissolve_pmd() uses
> "pmd_" helpers as the PTE entries (level 3) are always guaranteed to exist.

I've fixed this and other uses of the PUD helpers to go via the stage2_
accessors.

I've still not quite come to terms with the lack of certain levels at
stage 2 vis-a-vis stage 1. I'll be more careful about this going
forward.

>
>> +	kvm_tlb_flush_vmid_ipa(kvm, addr);
>> +	put_page(virt_to_page(pud));
>> +}
>> +
>>   static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
>>   				  int min, int max)
>>   {
>> @@ -993,7 +1012,7 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
>>   	pmd_t *pmd;
>>     	pud = stage2_get_pud(kvm, cache, addr);
>> -	if (!pud)
>> +	if (!pud || pud_huge(*pud))
>>   		return NULL;
>
> Same here.
>
>>     	if (stage2_pud_none(*pud)) {
>
> Like this ^
>
>> @@ -1038,6 +1057,26 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>>   	return 0;
>>   }
>>   +static int stage2_set_pud_huge(struct kvm *kvm, struct
>> kvm_mmu_memory_cache *cache,
>> +			       phys_addr_t addr, const pud_t *new_pud)
>> +{
>> +	pud_t *pud, old_pud;
>> +
>> +	pud = stage2_get_pud(kvm, cache, addr);
>> +	VM_BUG_ON(!pud);
>> +
>> +	old_pud = *pud;
>> +	if (pud_present(old_pud)) {
>> +		pud_clear(pud);
>> +		kvm_tlb_flush_vmid_ipa(kvm, addr);
>
> Same here.
>
>> +	} else {
>> +		get_page(virt_to_page(pud));
>> +	}
>> +
>> +	kvm_set_pud(pud, *new_pud);
>> +	return 0;
>> +}
>> +
>>   static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
>>   {
>>   	pud_t *pudp;

[...]

>> @@ -1572,7 +1631,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   	if (exec_fault)
>>   		invalidate_icache_guest_page(pfn, vma_pagesize);
>>   -	if (hugetlb && vma_pagesize == PMD_SIZE) {
>> +	if (hugetlb && vma_pagesize == PUD_SIZE) {
>
> I think we may need to check if the stage2 indeed has 3 levels of
> tables to use stage2 PUD. Otherwise, fall back to PTE level mapping
> or even PMD huge pages. Also, this cannot be triggered right now,
> as we only get PUD hugepages with 4K and we are guaranteed to have
> at least 3 levels with 40bit IPA. Maybe I can take care of it in
> the Dynamic IPA series, when we run a guest with say 32bit IPA.
> So for now, it is worth adding a comment here.

Good point. I've added the following comment.

    /*
     * PUD level may not exist if the guest boots with two
     * levels at Stage 2. This configuration is currently
     * not supported due to IPA size supported by KVM.
     *
     * Revisit the assumptions about PUD levels when
     * additional IPA sizes are supported by KVM.
     */

Let me know if it looks OK to you.
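
For context, this is roughly where the comment would sit in the PUD branch
quoted above (sketch only, reusing the code from the hunk under review):

	if (hugetlb && vma_pagesize == PUD_SIZE) {
		/*
		 * PUD level may not exist if the guest boots with two
		 * levels at Stage 2. This configuration is currently
		 * not supported due to IPA size supported by KVM.
		 *
		 * Revisit the assumptions about PUD levels when
		 * additional IPA sizes are supported by KVM.
		 */
		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);

		/* ... rest of the PUD mapping path ... */
	}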

Thanks a lot for reviewing the patches.

Punit

>
>> +		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
>> +
>> +		new_pud = kvm_pud_mkhuge(new_pud);
>> +		if (writable)
>> +			new_pud = kvm_s2pud_mkwrite(new_pud);
>> +
>> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>> +			new_pud = kvm_s2pud_mkexec(new_pud);
>> +
>> +		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
>> +	} else if (hugetlb && vma_pagesize == PMD_SIZE) {
>
> Suzuki
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2
  2018-07-06 14:12     ` Punit Agrawal
@ 2018-07-06 14:23       ` Suzuki K Poulose
  0 siblings, 0 replies; 15+ messages in thread
From: Suzuki K Poulose @ 2018-07-06 14:23 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: kvmarm, linux-kernel, marc.zyngier, Catalin Marinas, Will Deacon,
	Russell King, linux-arm-kernel

Hi Punit,


On 06/07/18 15:12, Punit Agrawal wrote:
> Suzuki K Poulose <Suzuki.Poulose@arm.com> writes:
> 
>> Hi Punit,
>>
>> On 05/07/18 15:08, Punit Agrawal wrote:
>>> KVM only supports PMD hugepages at stage 2. Now that the various page
>>> handling routines are updated, extend the stage 2 fault handling to
>>> map in PUD hugepages.
>>>
>>> Addition of PUD hugepage support enables additional page sizes (e.g.,
>>> 1G with 4K granule) which can be useful on cores that support mapping
>>> larger block sizes in the TLB entries.
>>>
>>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Russell King <linux@armlinux.org.uk>
>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>> Cc: Will Deacon <will.deacon@arm.com>
>>
>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>> index 0c04c64e858c..5912210e94d9 100644
>>> --- a/virt/kvm/arm/mmu.c
>>> +++ b/virt/kvm/arm/mmu.c
>>> @@ -116,6 +116,25 @@ static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
>>>    	put_page(virt_to_page(pmd));
>>>    }
>>>    +/**
>>> + * stage2_dissolve_pud() - clear and flush huge PUD entry
>>> + * @kvm:	pointer to kvm structure.
>>> + * @addr:	IPA
>>> + * @pud:	pud pointer for IPA
>>> + *
>>> + * Function clears a PUD entry, flushes addr 1st and 2nd stage TLBs. Marks all
>>> + * pages in the range dirty.
>>> + */
>>> +static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pud)
>>> +{
>>> +	if (!pud_huge(*pud))
>>> +		return;
>>> +
>>> +	pud_clear(pud);
>>
>> You need to use the stage2_ accessors here. The stage2_dissolve_pmd() uses
>> "pmd_" helpers as the PTE entries (level 3) are always guaranteed to exist.
> 
> I've fixed this and other uses of the PUD helpers to go via the stage2_
> accessors.
> 
> I've still not quite come to terms with the lack of certain levels at
> stage 2 vis-a-vis stage 1. I'll be more careful about this going
> forward.

I accept that it can be quite confusing. Once we get level-independent types
and table accessors this might be easier. For now, the general rule is to stick
to the stage2_ accessors whenever you deal with the stage2 tables. The rest
should be left to the stage2 code to deal with.
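
To illustrate the convention: the stage2_ wrappers forward to the stage 1
helpers only when the corresponding level really exists in the stage 2 table,
and degenerate to fixed results when the level is folded. The lines below are
a rough sketch of the pattern in arch/arm64/include/asm/stage2_pgtable*.h, not
the actual header contents, and the level cut-off is an assumption:

	#if STAGE2_PGTABLE_LEVELS > 2
	#define stage2_pud_none(pud)		pud_none(pud)
	#define stage2_pud_clear(pud)		pud_clear(pud)
	#else	/* PUD level folded at stage 2 */
	#define stage2_pud_none(pud)		(0)
	#define stage2_pud_clear(pud)		do { } while (0)
	#endif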

> 
>>> @@ -1572,7 +1631,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>    	if (exec_fault)
>>>    		invalidate_icache_guest_page(pfn, vma_pagesize);
>>>    -	if (hugetlb && vma_pagesize == PMD_SIZE) {
>>> +	if (hugetlb && vma_pagesize == PUD_SIZE) {
>>
>> I think we may need to check if the stage2 indeed has 3 levels of
>> tables to use stage2 PUD. Otherwise, fall back to PTE level mapping
>> or even PMD huge pages. Also, this cannot be triggered right now,
>> as we only get PUD hugepages with 4K and we are guaranteed to have
>> at least 3 levels with 40bit IPA. Maybe I can take care of it in
>> the Dynamic IPA series, when we run a guest with say 32bit IPA.
>> So for now, it is worth adding a comment here.
> 
> Good point. I've added the following comment.
> 
>      /*
>       * PUD level may not exist if the guest boots with two
>       * levels at Stage 2. This configuration is currently
>       * not supported due to IPA size supported by KVM.
>       *
>       * Revisit the assumptions about PUD levels when
>       * additional IPA sizes are supported by KVM.
>       */
> 

Yep, that looks fine to me.

Suzuki

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec()
  2018-07-05 16:48   ` Suzuki K Poulose
@ 2018-07-06 14:36     ` Punit Agrawal
  0 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2018-07-06 14:36 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: kvmarm, linux-kernel, marc.zyngier, Catalin Marinas, Will Deacon,
	Russell King, linux-arm-kernel

Suzuki K Poulose <Suzuki.Poulose@arm.com> writes:

> Hi Punit,
>
> On 05/07/18 15:08, Punit Agrawal wrote:
>> In preparation for creating PUD hugepages at stage 2, add support for
>> detecting execute permissions on PUD page table entries. Faults due to
>> lack of execute permissions on page table entries is used to perform
>> i-cache invalidation on first execute.
>>
>> Provide trivial implementations of arm32 helpers to allow sharing of
>> code.
>>
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Russell King <linux@armlinux.org.uk>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> ---
>>   arch/arm/include/asm/kvm_mmu.h         |  6 ++++++
>>   arch/arm64/include/asm/kvm_mmu.h       |  5 +++++
>>   arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
>>   virt/kvm/arm/mmu.c                     | 10 +++++++++-
>>   4 files changed, 22 insertions(+), 1 deletion(-)
>>

[...]

>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index db04b18218c1..ccdea0edabb3 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1040,10 +1040,18 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>>     static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
>>   {
>> +	pud_t *pudp;
>>   	pmd_t *pmdp;
>>   	pte_t *ptep;
>>   -	pmdp = stage2_get_pmd(kvm, NULL, addr);
>> +	pudp = stage2_get_pud(kvm, NULL, addr);
>> +	if (!pudp || pud_none(*pudp) || !pud_present(*pudp))
>> +		return false;
>> +
>> +	if (pud_huge(*pudp))
>> +		return kvm_s2pud_exec(pudp);
>> +
>> +	pmdp = stage2_pmd_offset(pudp, addr);
>>   	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
>>   		return false;
>
> I am wondering if we need a slightly better way to deal with this
> kind of operation. We seem to duplicate the above operation (here and
> in the following patches), i.e., finding the "leaf entry" for a given
> address and following the checks one level at a time.

We definitely need a better way to walk the page tables - for stage 2
but also stage 1 and hugetlbfs. As things stand, there are a lot of
repetitive patterns with small differences at some levels (hugepage
and/or THP, p*d_none(), p*d_present(), ...).

> So instead of doing stage2_get_pud() and walking down everywhere this
> is needed, how about adding:
>
> /* Returns true if the leaf entry is found and updates the relevant pointer */
> found = stage2_get_leaf_entry(kvm, NULL, addr, &pudp, &pmdp, &ptep)
>
> which could set the appropriate entry and we could check the result
> here.

I prototyped the above approach but found that it could not be used in
all places due to the specific semantics of each walk. Also, we then end
up with the following pattern:

    if (pudp) {
            ...
    } else if (pmdp) {
            ...
    } else {
            ...
    }

At the end of the conversion, the resulting code is the same size as
well (see diff below for changes).

Another idea might be to build a page table walker passing in callbacks
- but this makes more sense if we have unified modifiers for the
levels. I think this is something we should explore but would like to do
outside the context of this series.
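
Purely to illustrate the idea being deferred, one possible shape for such a
callback-driven walker; every name below is hypothetical:

	struct stage2_walk_ops {
		int (*pud_entry)(struct kvm *kvm, phys_addr_t addr, pud_t *pudp);
		int (*pmd_entry)(struct kvm *kvm, phys_addr_t addr, pmd_t *pmdp);
		int (*pte_entry)(struct kvm *kvm, phys_addr_t addr, pte_t *ptep);
	};

	/*
	 * Walk [addr, end) and invoke the callback for the leaf entry found
	 * at each address, whichever level it lives at.
	 */
	int stage2_walk(struct kvm *kvm, phys_addr_t addr, phys_addr_t end,
			const struct stage2_walk_ops *ops);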

Hope that's OK.

Thanks for having a look,
Punit

-- >8 --
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index eddb74a7fac3..ea5c99f6dfab 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1077,31 +1077,56 @@ static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cac
 	return 0;
 }
 
-static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
+static bool stage2_get_leaf_entry(struct kvm *kvm, phys_addr_t addr, pud_t **pudp,
+				  pmd_t **pmdp, pte_t **ptep)
 {
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
+	pud_t *lpudp;
+	pmd_t *lpmdp;
+	pte_t *lptep;
 
-	pudp = stage2_get_pud(kvm, NULL, addr);
-	if (!pudp || pud_none(*pudp) || !pud_present(*pudp))
+	lpudp = stage2_get_pud(kvm, NULL, addr);
+	if (!lpudp || pud_none(*lpudp) || !pud_present(*lpudp))
 		return false;
 
-	if (pud_huge(*pudp))
-		return kvm_s2pud_exec(pudp);
+	if (pud_huge(*lpudp)) {
+		*pudp = lpudp;
+		return true;
+	}
 
-	pmdp = stage2_pmd_offset(pudp, addr);
-	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
+	lpmdp = stage2_pmd_offset(lpudp, addr);
+	if (!lpmdp || pmd_none(*lpmdp) || !pmd_present(*lpmdp))
 		return false;
 
-	if (pmd_thp_or_huge(*pmdp))
-		return kvm_s2pmd_exec(pmdp);
+	if (pmd_thp_or_huge(*lpmdp)) {
+		*pmdp = lpmdp;
+		return true;
+	}
 
-	ptep = pte_offset_kernel(pmdp, addr);
-	if (!ptep || pte_none(*ptep) || !pte_present(*ptep))
+	lptep = pte_offset_kernel(lpmdp, addr);
+	if (!lptep || pte_none(*lptep) || !pte_present(*lptep))
 		return false;
 
-	return kvm_s2pte_exec(ptep);
+	*ptep = lptep;
+	return true;
+}
+
+static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
+{
+	pud_t *pudp = NULL;
+	pmd_t *pmdp = NULL;
+	pte_t *ptep = NULL;
+	bool found;
+
+	found = stage2_get_leaf_entry(kvm, addr, &pudp, &pmdp, &ptep);
+	if (!found)
+		return false;
+
+	if (pudp)
+		return kvm_s2pud_exec(pudp);
+	else if (pmdp)
+		return kvm_s2pmd_exec(pmdp);
+	else
+		return kvm_s2pte_exec(ptep);
 }
 
 static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
@@ -1681,45 +1706,36 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
  */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
+	pud_t *pud = NULL;
+	pmd_t *pmd = NULL;
+	pte_t *pte = NULL;
 	kvm_pfn_t pfn;
-	bool pfn_valid = false;
+	bool found, pfn_valid = false;
 
 	trace_kvm_access_fault(fault_ipa);
 
 	spin_lock(&vcpu->kvm->mmu_lock);
 
-	pud = stage2_get_pud(vcpu->kvm, NULL, fault_ipa);
-	if (!pud || pud_none(*pud))
-		goto out;		/* Nothing there */
+	found = stage2_get_leaf_entry(vcpu->kvm, fault_ipa, &pud, &pmd, &pte);
+	if (!found)
+		goto out;
 
-	if (pud_huge(*pud)) {		/* HugeTLB */
+	if (pud) {		/* HugeTLB */
 		*pud = kvm_s2pud_mkyoung(*pud);
 		pfn = kvm_pud_pfn(*pud);
 		pfn_valid = true;
 		goto out;
-	}
-
-	pmd = stage2_pmd_offset(pud, fault_ipa);
-	if (!pmd || pmd_none(*pmd))	/* Nothing there */
-		goto out;
-
-	if (pmd_thp_or_huge(*pmd)) {	/* THP, HugeTLB */
+	} else if (pmd) {	/* THP, HugeTLB */
 		*pmd = pmd_mkyoung(*pmd);
 		pfn = pmd_pfn(*pmd);
 		pfn_valid = true;
 		goto out;
+	} else {
+		*pte = pte_mkyoung(*pte);	/* Just a page... */
+		pfn = pte_pfn(*pte);
+		pfn_valid = true;
 	}
 
-	pte = pte_offset_kernel(pmd, fault_ipa);
-	if (pte_none(*pte))		/* Nothing there either */
-		goto out;
-
-	*pte = pte_mkyoung(*pte);	/* Just a page... */
-	pfn = pte_pfn(*pte);
-	pfn_valid = true;
 out:
 	spin_unlock(&vcpu->kvm->mmu_lock);
 	if (pfn_valid)
@@ -1932,58 +1948,42 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 
 static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 {
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
+	pud_t *pud = NULL;
+	pmd_t *pmd = NULL;
+	pte_t *pte = NULL;
+	bool found;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
-	pud = stage2_get_pud(kvm, NULL, gpa);
-	if (!pud || pud_none(*pud))	/* Nothing there */
+	found = stage2_get_leaf_entry(kvm, gpa, &pud, &pmd, &pte);
+	if (!found)
 		return 0;
 
-	if (pud_huge(*pud))		/* HugeTLB */
+	if (pud)
 		return stage2_pudp_test_and_clear_young(pud);
-
-	pmd = stage2_pmd_offset(pud, gpa);
-	if (!pmd || pmd_none(*pmd))	/* Nothing there */
-		return 0;
-
-	if (pmd_thp_or_huge(*pmd))	/* THP, HugeTLB */
+	else if (pmd)
 		return stage2_pmdp_test_and_clear_young(pmd);
-
-	pte = pte_offset_kernel(pmd, gpa);
-	if (pte_none(*pte))
-		return 0;
-
-	return stage2_ptep_test_and_clear_young(pte);
+	else
+		return stage2_ptep_test_and_clear_young(pte);
 }
 
 static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 {
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
+	pud_t *pud = NULL;
+	pmd_t *pmd = NULL;
+	pte_t *pte = NULL;
+	bool found;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
-	pud = stage2_get_pud(kvm, NULL, gpa);
-	if (!pud || pud_none(*pud))	/* Nothing there */
+	found = stage2_get_leaf_entry(kvm, gpa, &pud, &pmd, &pte);
+	if (!found)
 		return 0;
 
-	if (pud_huge(*pud))		/* HugeTLB */
+	if (pud)
 		return kvm_s2pud_young(*pud);
-
-	pmd = stage2_pmd_offset(pud, gpa);
-	if (!pmd || pmd_none(*pmd))	/* Nothing there */
-		return 0;
-
-	if (pmd_thp_or_huge(*pmd))		/* THP, HugeTLB */
+	else if (pmd)
 		return pmd_young(*pmd);
-
-	pte = pte_offset_kernel(pmd, gpa);
-	if (!pte_none(*pte))		/* Just a page... */
+	else
 		return pte_young(*pte);
-
-	return 0;
 }
 
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)

^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2018-07-06 14:36 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-05 14:08 [PATCH v4 0/7] KVM: Support PUD hugepages at stage 2 Punit Agrawal
2018-07-05 14:08 ` [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort() Punit Agrawal
2018-07-05 16:24   ` Marc Zyngier
2018-07-05 16:57     ` Punit Agrawal
2018-07-05 14:08 ` [PATCH v4 2/7] KVM: arm/arm64: Introduce helpers to manupulate page table entries Punit Agrawal
2018-07-05 14:08 ` [PATCH v4 3/7] KVM: arm64: Support dirty page tracking for PUD hugepages Punit Agrawal
2018-07-05 14:08 ` [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec() Punit Agrawal
2018-07-05 16:48   ` Suzuki K Poulose
2018-07-06 14:36     ` Punit Agrawal
2018-07-05 14:08 ` [PATCH v4 5/7] KVM: arm64: Support handling access faults for PUD hugepages Punit Agrawal
2018-07-05 14:08 ` [PATCH v4 6/7] KVM: arm64: Update age handlers to support " Punit Agrawal
2018-07-05 14:08 ` [PATCH v4 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2 Punit Agrawal
2018-07-05 17:11   ` Suzuki K Poulose
2018-07-06 14:12     ` Punit Agrawal
2018-07-06 14:23       ` Suzuki K Poulose
