* [PATCH v2 0/4] KVM: Support PUD hugepages at stage 2
@ 2018-05-01 10:26 ` Punit Agrawal
  0 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-01 10:26 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose

Hi,

This patchset adds support for PUD hugepages at stage 2. The feature
is useful on cores whose TLBs support large block mappings (e.g., 1GB
blocks with a 4K granule). Previous postings can be found at [0][1].
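
For reference, the block sizes with a 4K granule work out as below (an
illustrative sketch of the arithmetic only, using the usual arm64
4-level shift values; it is not taken from the patches):

/* 4K granule: shifts and resulting stage 2 block sizes (illustrative) */
#define EX_PAGE_SHIFT	12				/* PTE:  4KB */
#define EX_PMD_SHIFT	21				/* PMD:  2MB block */
#define EX_PUD_SHIFT	30				/* PUD:  1GB block */

#define EX_PMD_SIZE	(1UL << EX_PMD_SHIFT)		/* 0x00200000 */
#define EX_PUD_SIZE	(1UL << EX_PUD_SHIFT)		/* 0x40000000 */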

Support is added to code that is shared between arm and arm64. Dummy
helpers for arm are provided as the port does not support PUD hugepage
sizes.

There is a small conflict with the series adding support for 52-bit
IPA [2]. The patches have been functionally tested on an A57-based
system. The patchset is based on v4.17-rc3 and incorporates feedback
received on the previous version.
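
To exercise the PUD path during testing, one option is to back the
guest RAM with 1GB hugetlb pages. A minimal sketch follows (illustrative
only, not part of this series; it assumes 1GB hugepages have already
been reserved on the host and provides a fallback MAP_HUGE_1GB
definition for older headers):

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << 26)	/* log2(1GB) << MAP_HUGE_SHIFT */
#endif

/*
 * Map anonymous memory backed by 1GB hugepages; size must be a
 * multiple of 1GB. A VMM would register the returned range with
 * KVM_SET_USER_MEMORY_REGION so that stage 2 faults on it can be
 * satisfied with PUD block mappings.
 */
static void *alloc_guest_ram_1g(size_t size)
{
	void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
			 -1, 0);

	return mem == MAP_FAILED ? NULL : mem;
}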

Thanks,
Punit

Changes in v2:
* Create helper to check if the page should have exec permission [1/4]
* Fix broken condition to detect THP hugepage [1/4]
* Fix incorrect hunk resulting from a rebase [4/4]

[0] https://www.spinics.net/lists/arm-kernel/msg628053.html
[1] https://lkml.org/lkml/2018/4/20/566
[2] https://lwn.net/Articles/750176/

Punit Agrawal (4):
  KVM: arm/arm64: Share common code in user_mem_abort()
  KVM: arm/arm64: Introduce helpers to manipulate page table entries
  KVM: arm64: Support dirty page tracking for PUD hugepages
  KVM: arm64: Add support for PUD hugepages at stage 2

 arch/arm/include/asm/kvm_mmu.h         |  40 +++++++++
 arch/arm64/include/asm/kvm_mmu.h       |  30 +++++++
 arch/arm64/include/asm/pgtable-hwdef.h |   4 +
 arch/arm64/include/asm/pgtable.h       |   2 +
 virt/kvm/arm/mmu.c                     | 120 +++++++++++++++++--------
 5 files changed, 161 insertions(+), 35 deletions(-)

-- 
2.17.0

* [PATCH v2 1/4] KVM: arm/arm64: Share common code in user_mem_abort()
  2018-05-01 10:26 ` Punit Agrawal
@ 2018-05-01 10:26   ` Punit Agrawal
  -1 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-01 10:26 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose

The code for operations such as marking the pfn as dirty and for
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of these operations when we introduce
PUD hugepages, let's share them across the different page sizes.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 66 +++++++++++++++++++++++++++-------------------
 1 file changed, 39 insertions(+), 27 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 7f6a944db23d..686fc6a4b866 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1396,6 +1396,21 @@ static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
 	__invalidate_icache_guest_page(pfn, size);
 }
 
+static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
+			       bool exec_fault, unsigned long fault_status)
+{
+	/*
+	 * If we took an execution fault we will have made the
+	 * icache/dcache coherent and should now let the s2 mapping be
+	 * executable.
+	 *
+	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
+	 * execute permissions, and we preserve whatever we have.
+	 */
+	return exec_fault ||
+		(fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
+}
+
 static void kvm_send_hwpoison_signal(unsigned long address,
 				     struct vm_area_struct *vma)
 {
@@ -1428,7 +1443,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long flags = 0;
+	unsigned long vma_pagesize, flags = 0;
 
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
@@ -1448,7 +1463,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
+	vma_pagesize = vma_kernel_pagesize(vma);
+	if (vma_pagesize == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
@@ -1517,28 +1533,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte)
+	if (!hugetlb && !force_pte) {
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+		/*
+		 * Only PMD_SIZE transparent hugepages(THP) are
+		 * currently supported. This code will need to be
+		 * updated to support other THP sizes.
+		 */
+		if (hugetlb)
+			vma_pagesize = PMD_SIZE;
+	}
+
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
+
+	if (fault_status != FSC_PERM)
+		clean_dcache_guest_page(pfn, vma_pagesize);
+
+	if (exec_fault)
+		invalidate_icache_guest_page(pfn, vma_pagesize);
 
 	if (hugetlb) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
-		if (writable) {
+		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
-			kvm_set_pfn_dirty(pfn);
-		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PMD_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-			invalidate_icache_guest_page(pfn, PMD_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pmd = kvm_s2pmd_mkexec(new_pmd);
-		}
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
@@ -1546,21 +1568,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PAGE_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pte = kvm_s2pte_mkexec(new_pte);
-			invalidate_icache_guest_page(pfn, PAGE_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pte = kvm_s2pte_mkexec(new_pte);
-		}
 
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}
-- 
2.17.0

* [PATCH v2 2/4] KVM: arm/arm64: Introduce helpers to manipulate page table entries
  2018-05-01 10:26 ` Punit Agrawal
@ 2018-05-01 10:26   ` Punit Agrawal
  -1 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-01 10:26 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

Introduce helpers to abstract architectural handling of the conversion
of pfn to page table entries and marking a PMD page table entry as a
block entry.

The helpers are introduced in preparation for supporting PUD hugepages
at stage 2 - which are supported on arm64 but do not exist on arm.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 5 +++++
 arch/arm64/include/asm/kvm_mmu.h | 5 +++++
 virt/kvm/arm/mmu.c               | 7 ++++---
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 707a1f06dc5d..5907a81ad5c1 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -75,6 +75,11 @@ phys_addr_t kvm_get_idmap_vector(void);
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);
 
+#define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
+#define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
+
+#define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 082110993647..d962508ce4b3 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -173,6 +173,11 @@ void kvm_clear_hyp_idmap(void);
 #define	kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
 #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
 
+#define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
+#define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
+
+#define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
+
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
 	pte_val(pte) |= PTE_S2_RDWR;
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 686fc6a4b866..74750236f445 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1554,8 +1554,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		invalidate_icache_guest_page(pfn, vma_pagesize);
 
 	if (hugetlb) {
-		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
-		new_pmd = pmd_mkhuge(new_pmd);
+		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
+
+		new_pmd = kvm_pmd_mkhuge(new_pmd);
 		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
 
@@ -1564,7 +1565,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
-		pte_t new_pte = pfn_pte(pfn, mem_type);
+		pte_t new_pte = kvm_pfn_pte(pfn, mem_type);
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-- 
2.17.0

* [PATCH v2 3/4] KVM: arm64: Support dirty page tracking for PUD hugepages
  2018-05-01 10:26 ` Punit Agrawal
@ 2018-05-01 10:26   ` Punit Agrawal
  -1 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-01 10:26 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest tables is used to track dirty pages when migrating
VMs.
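
For context, the dirty tracking this feeds is driven from userspace
roughly as sketched below (illustrative only, not part of this patch):
enabling KVM_MEM_LOG_DIRTY_PAGES makes KVM write protect the slot's
stage 2 entries - now including huge PUDs - so that guest writes fault
and get recorded in the bitmap returned by KVM_GET_DIRTY_LOG.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Illustrative sketch: turn on dirty logging for an existing memslot
 * and fetch its dirty bitmap. 'bitmap' must hold one bit per page of
 * the slot.
 */
static int log_dirty_pages(int vm_fd,
			   struct kvm_userspace_memory_region *slot,
			   void *bitmap)
{
	struct kvm_dirty_log log = {
		.slot = slot->slot,
		.dirty_bitmap = bitmap,
	};

	slot->flags |= KVM_MEM_LOG_DIRTY_PAGES;
	if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, slot) < 0)
		return -1;

	/* ... let the guest run for a while ... */

	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}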

Also, provide trivial implementations of required kvm_s2pud_* helpers
to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 16 ++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               |  9 ++++++---
 3 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 5907a81ad5c1..224c22c0a69c 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -80,6 +80,22 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index d962508ce4b3..f440cf216a23 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -240,6 +240,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 74750236f445..3afbf693e045 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1286,9 +1286,12 @@ static void  stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
 	do {
 		next = stage2_pud_addr_end(addr, end);
 		if (!stage2_pud_none(*pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(*pud));
-			stage2_wp_pmds(pud, addr, next);
+			if (stage2_pud_huge(*pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
-- 
2.17.0

* [PATCH v2 4/4] KVM: arm64: Add support for PUD hugepages at stage 2
  2018-05-01 10:26 ` Punit Agrawal
@ 2018-05-01 10:26   ` Punit Agrawal
  -1 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-01 10:26 UTC (permalink / raw)
  To: kvmarm
  Cc: Punit Agrawal, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, suzuki.poulose, Russell King, Catalin Marinas,
	Will Deacon

KVM currently supports PMD hugepages at stage 2. Extend the stage 2
fault handling to add support for PUD hugepages.

Adding PUD hugepage support enables additional hugepage sizes (e.g.,
1GB with a 4K granule), which can be useful on cores that support
mapping larger block sizes in their TLB entries.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h         | 19 ++++++++++++
 arch/arm64/include/asm/kvm_mmu.h       | 15 ++++++++++
 arch/arm64/include/asm/pgtable-hwdef.h |  4 +++
 arch/arm64/include/asm/pgtable.h       |  2 ++
 virt/kvm/arm/mmu.c                     | 40 ++++++++++++++++++++++++--
 5 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 224c22c0a69c..155916dbdd7e 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -77,8 +77,11 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
+#define kvm_pfn_pud(pfn, prot)	(__pud(0))
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
+/* No support for pud hugepages */
+#define kvm_pud_mkhuge(pud)	(pud)
 
 /*
  * The following kvm_*pud*() functions are provided strictly to allow
@@ -95,6 +98,22 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
 	return false;
 }
 
+static inline void kvm_set_pud(pud_t *pud, pud_t new_pud)
+{
+	BUG();
+}
+
+static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
+{
+	BUG();
+	return pud;
+}
+
+static inline pud_t kvm_s2pud_mkexec(pud_t pud)
+{
+	BUG();
+	return pud;
+}
 
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index f440cf216a23..f49a68fcbf26 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -172,11 +172,14 @@ void kvm_clear_hyp_idmap(void);
 
 #define	kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
 #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
+#define kvm_set_pud(pudp, pud)		set_pud(pudp, pud)
 
 #define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
+#define kvm_pfn_pud(pfn, prot)		pfn_pud(pfn, prot)
 
 #define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
+#define kvm_pud_mkhuge(pud)		pud_mkhuge(pud)
 
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
@@ -190,6 +193,12 @@ static inline pmd_t kvm_s2pmd_mkwrite(pmd_t pmd)
 	return pmd;
 }
 
+static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
+{
+	pud_val(pud) |= PUD_S2_RDWR;
+	return pud;
+}
+
 static inline pte_t kvm_s2pte_mkexec(pte_t pte)
 {
 	pte_val(pte) &= ~PTE_S2_XN;
@@ -202,6 +211,12 @@ static inline pmd_t kvm_s2pmd_mkexec(pmd_t pmd)
 	return pmd;
 }
 
+static inline pud_t kvm_s2pud_mkexec(pud_t pud)
+{
+	pud_val(pud) &= ~PUD_S2_XN;
+	return pud;
+}
+
 static inline void kvm_set_s2pte_readonly(pte_t *ptep)
 {
 	pteval_t old_pteval, pteval;
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index fd208eac9f2a..e327665e94d1 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -193,6 +193,10 @@
 #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
 
+#define PUD_S2_RDONLY		(_AT(pudval_t, 1) << 6)   /* HAP[2:1] */
+#define PUD_S2_RDWR		(_AT(pudval_t, 3) << 6)   /* HAP[2:1] */
+#define PUD_S2_XN		(_AT(pudval_t, 2) << 53)  /* XN[1:0] */
+
 /*
  * Memory Attribute override for Stage-2 (MemAttr[3:0])
  */
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7c4c8f318ba9..31ea9fda07e3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -386,6 +386,8 @@ static inline int pmd_protnone(pmd_t pmd)
 
 #define pud_write(pud)		pte_write(pud_pte(pud))
 
+#define pud_mkhuge(pud)		(__pud(pud_val(pud) & ~PUD_TABLE_BIT))
+
 #define __pud_to_phys(pud)	__pte_to_phys(pud_pte(pud))
 #define __phys_to_pud_val(phys)	__phys_to_pte_val(phys)
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 3afbf693e045..1fb108d47dbd 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1036,6 +1036,26 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	return 0;
 }
 
+static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+			       phys_addr_t addr, const pud_t *new_pud)
+{
+	pud_t *pud, old_pud;
+
+	pud = stage2_get_pud(kvm, cache, addr);
+	VM_BUG_ON(!pud);
+
+	old_pud = *pud;
+	if (pud_present(old_pud)) {
+		pud_clear(pud);
+		kvm_tlb_flush_vmid_ipa(kvm, addr);
+	} else {
+		get_page(virt_to_page(pud));
+	}
+
+	kvm_set_pud(pud, *new_pud);
+	return 0;
+}
+
 static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
 {
 	pmd_t *pmdp;
@@ -1467,9 +1487,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	vma_pagesize = vma_kernel_pagesize(vma);
-	if (vma_pagesize == PMD_SIZE && !logging_active) {
+	if ((vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) &&
+	    !logging_active) {
+		struct hstate *h = hstate_vma(vma);
+
 		hugetlb = true;
-		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
+		gfn = (fault_ipa & huge_page_mask(h)) >> PAGE_SHIFT;
 	} else {
 		/*
 		 * Pages belonging to memslots that don't have the same
@@ -1556,7 +1579,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault)
 		invalidate_icache_guest_page(pfn, vma_pagesize);
 
-	if (hugetlb) {
+	if (vma_pagesize == PUD_SIZE) {
+		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
+
+		new_pud = kvm_pud_mkhuge(new_pud);
+		if (writable)
+			new_pud = kvm_s2pud_mkwrite(new_pud);
+
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
+			new_pud = kvm_s2pud_mkexec(new_pud);
+
+		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
+	} else if (vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
 
 		new_pmd = kvm_pmd_mkhuge(new_pmd);
-- 
2.17.0

* Re: [PATCH v2 2/4] KVM: arm/arm64: Introduce helpers to manipulate page table entries
  2018-05-01 10:26   ` Punit Agrawal
@ 2018-05-01 10:36     ` Suzuki K Poulose
  -1 siblings, 0 replies; 28+ messages in thread
From: Suzuki K Poulose @ 2018-05-01 10:36 UTC (permalink / raw)
  To: Punit Agrawal, kvmarm
  Cc: linux-arm-kernel, marc.zyngier, christoffer.dall, linux-kernel,
	Russell King, Catalin Marinas, Will Deacon

On 01/05/18 11:26, Punit Agrawal wrote:
> Introduce helpers to abstract architectural handling of the conversion
> of pfn to page table entries and marking a PMD page table entry as a
> block entry.
> 
> The helpers are introduced in preparation for supporting PUD hugepages
> at stage 2 - which are supported on arm64 but do not exist on arm.

Punit,

The changes are fine by me. However, we usually do not define kvm_*
accessors for something which we know matches the host variant,
i.e. the PMD and PTE helpers, which are always present and which we
use directly (see unmap_stage2_pmds, for example).

Cheers
Suzuki

> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>   arch/arm/include/asm/kvm_mmu.h   | 5 +++++
>   arch/arm64/include/asm/kvm_mmu.h | 5 +++++
>   virt/kvm/arm/mmu.c               | 7 ++++---
>   3 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 707a1f06dc5d..5907a81ad5c1 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -75,6 +75,11 @@ phys_addr_t kvm_get_idmap_vector(void);
>   int kvm_mmu_init(void);
>   void kvm_clear_hyp_idmap(void);
>   
> +#define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
> +#define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
> +
> +#define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
> +
>   static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
>   {
>   	*pmd = new_pmd;
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 082110993647..d962508ce4b3 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -173,6 +173,11 @@ void kvm_clear_hyp_idmap(void);
>   #define	kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
>   #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
>   
> +#define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
> +#define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
> +
> +#define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
> +
>   static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
>   {
>   	pte_val(pte) |= PTE_S2_RDWR;
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 686fc6a4b866..74750236f445 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1554,8 +1554,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   		invalidate_icache_guest_page(pfn, vma_pagesize);
>   
>   	if (hugetlb) {
> -		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
> -		new_pmd = pmd_mkhuge(new_pmd);
> +		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
> +
> +		new_pmd = kvm_pmd_mkhuge(new_pmd);
>   		if (writable)
>   			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
>   
> @@ -1564,7 +1565,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   
>   		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
>   	} else {
> -		pte_t new_pte = pfn_pte(pfn, mem_type);
> +		pte_t new_pte = kvm_pfn_pte(pfn, mem_type);
>   
>   		if (writable) {
>   			new_pte = kvm_s2pte_mkwrite(new_pte);
> 

* Re: [PATCH v2 2/4] KVM: arm/arm64: Introduce helpers to manipulate page table entries
  2018-05-01 10:36     ` Suzuki K Poulose
  (?)
@ 2018-05-01 13:00       ` Punit Agrawal
  -1 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-01 13:00 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: kvmarm, linux-arm-kernel, marc.zyngier, christoffer.dall,
	linux-kernel, Russell King, Catalin Marinas, Will Deacon

Hi Suzuki,

Thanks for having a look.

Suzuki K Poulose <Suzuki.Poulose@arm.com> writes:

> On 01/05/18 11:26, Punit Agrawal wrote:
>> Introduce helpers to abstract architectural handling of the conversion
>> of pfn to page table entries and marking a PMD page table entry as a
>> block entry.
>>
>> The helpers are introduced in preparation for supporting PUD hugepages
>> at stage 2 - which are supported on arm64 but do not exist on arm.
>
> Punit,
>
> The changes are fine by me. However, we usually do not define kvm_*
> accessors for something which we know matches the host variant,
> i.e. the PMD and PTE helpers, which are always present and which we
> use directly (see unmap_stage2_pmds, for example).

In general, I agree - it makes sense to avoid duplication.

Having said that, the helpers here allow following a common pattern for
handling the various page sizes - pte, pmd and pud - during stage 2
fault handling (see patch 4).
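
(For reference, the shape of that pattern in patch 4 is roughly the
following - condensed from the patch rather than new code:)

	if (vma_pagesize == PUD_SIZE) {
		pud_t new_pud = kvm_pud_mkhuge(kvm_pfn_pud(pfn, mem_type));

		if (writable)
			new_pud = kvm_s2pud_mkwrite(new_pud);
		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
			new_pud = kvm_s2pud_mkexec(new_pud);
		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
	} else if (vma_pagesize == PMD_SIZE) {
		/* the same sequence using the kvm_*pmd*() helpers */
	} else {
		/* ... and the kvm_*pte*() helpers for PAGE_SIZE pages */
	}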

As you've said you're OK with this change, I'd prefer to keep this patch,
but will drop it if other reviewers are also concerned about the
duplication.

Thanks,
Punit

>
> Cheers
> Suzuki
>
>>
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Russell King <linux@armlinux.org.uk>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> ---
>>   arch/arm/include/asm/kvm_mmu.h   | 5 +++++
>>   arch/arm64/include/asm/kvm_mmu.h | 5 +++++
>>   virt/kvm/arm/mmu.c               | 7 ++++---
>>   3 files changed, 14 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
>> index 707a1f06dc5d..5907a81ad5c1 100644
>> --- a/arch/arm/include/asm/kvm_mmu.h
>> +++ b/arch/arm/include/asm/kvm_mmu.h
>> @@ -75,6 +75,11 @@ phys_addr_t kvm_get_idmap_vector(void);
>>   int kvm_mmu_init(void);
>>   void kvm_clear_hyp_idmap(void);
>>   +#define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
>> +#define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
>> +
>> +#define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
>> +
>>   static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
>>   {
>>   	*pmd = new_pmd;
>> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
>> index 082110993647..d962508ce4b3 100644
>> --- a/arch/arm64/include/asm/kvm_mmu.h
>> +++ b/arch/arm64/include/asm/kvm_mmu.h
>> @@ -173,6 +173,11 @@ void kvm_clear_hyp_idmap(void);
>>   #define	kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
>>   #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
>>   +#define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
>> +#define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
>> +
>> +#define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
>> +
>>   static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
>>   {
>>   	pte_val(pte) |= PTE_S2_RDWR;
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 686fc6a4b866..74750236f445 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1554,8 +1554,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   		invalidate_icache_guest_page(pfn, vma_pagesize);
>>     	if (hugetlb) {
>> -		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
>> -		new_pmd = pmd_mkhuge(new_pmd);
>> +		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
>> +
>> +		new_pmd = kvm_pmd_mkhuge(new_pmd);
>>   		if (writable)
>>   			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
>>   @@ -1564,7 +1565,7 @@ static int user_mem_abort(struct kvm_vcpu
>> *vcpu, phys_addr_t fault_ipa,
>>     		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa,
>> &new_pmd);
>>   	} else {
>> -		pte_t new_pte = pfn_pte(pfn, mem_type);
>> +		pte_t new_pte = kvm_pfn_pte(pfn, mem_type);
>>     		if (writable) {
>>   			new_pte = kvm_s2pte_mkwrite(new_pte);
>>

* Re: [PATCH v2 1/4] KVM: arm/arm64: Share common code in user_mem_abort()
  2018-05-01 10:26   ` Punit Agrawal
@ 2018-05-04 11:38     ` Christoffer Dall
  -1 siblings, 0 replies; 28+ messages in thread
From: Christoffer Dall @ 2018-05-04 11:38 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: kvmarm, linux-arm-kernel, marc.zyngier, linux-kernel, suzuki.poulose

On Tue, May 01, 2018 at 11:26:56AM +0100, Punit Agrawal wrote:
> The code for operations such as marking the pfn as dirty, and
> dcache/icache maintenance during stage 2 fault handling is duplicated
> between normal pages and PMD hugepages.
> 
> Instead of creating another copy of the operations when we introduce
> PUD hugepages, let's share them across the different pagesizes.
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  virt/kvm/arm/mmu.c | 66 +++++++++++++++++++++++++++-------------------
>  1 file changed, 39 insertions(+), 27 deletions(-)
> 
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 7f6a944db23d..686fc6a4b866 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1396,6 +1396,21 @@ static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
>  	__invalidate_icache_guest_page(pfn, size);
>  }
>  
> +static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
> +			       bool exec_fault, unsigned long fault_status)
> +{
> +	/*
> +	 * If we took an execution fault we will have made the
> +	 * icache/dcache coherent and should now let the s2 mapping be
> +	 * executable.
> +	 *
> +	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
> +	 * execute permissions, and we preserve whatever we have.
> +	 */
> +	return exec_fault ||
> +		(fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
> +}
> +
>  static void kvm_send_hwpoison_signal(unsigned long address,
>  				     struct vm_area_struct *vma)
>  {
> @@ -1428,7 +1443,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	kvm_pfn_t pfn;
>  	pgprot_t mem_type = PAGE_S2;
>  	bool logging_active = memslot_is_logging(memslot);
> -	unsigned long flags = 0;
> +	unsigned long vma_pagesize, flags = 0;
>  
>  	write_fault = kvm_is_write_fault(vcpu);
>  	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
> @@ -1448,7 +1463,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>  
> -	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
> +	vma_pagesize = vma_kernel_pagesize(vma);
> +	if (vma_pagesize == PMD_SIZE && !logging_active) {
>  		hugetlb = true;
>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>  	} else {
> @@ -1517,28 +1533,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (mmu_notifier_retry(kvm, mmu_seq))
>  		goto out_unlock;
>  
> -	if (!hugetlb && !force_pte)
> +	if (!hugetlb && !force_pte) {
>  		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
> +		/*
> +		 * Only PMD_SIZE transparent hugepages(THP) are
> +		 * currently supported. This code will need to be
> +		 * updated to support other THP sizes.
> +		 */
> +		if (hugetlb)
> +			vma_pagesize = PMD_SIZE;

nit: this is a bit of a trap waiting to happen, as the suggested
semantics of hugetlb is now hugetlbfs and not THP.

It may be slightly nicer to do:

		if (transparent_hugepage_adjust(&pfn, &fault_ipa))
			vma_pagesize = PMD_SIZE;
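
Folded back into the hunk, the whole block would then read roughly as
below (a sketch of the suggestion, not code from the posted series; the
later hugetlb check would have to test vma_pagesize == PMD_SIZE
instead):

	if (!hugetlb && !force_pte) {
		/*
		 * Only PMD_SIZE transparent hugepages (THP) are
		 * currently supported; record the adjustment in
		 * vma_pagesize rather than in hugetlb.
		 */
		if (transparent_hugepage_adjust(&pfn, &fault_ipa))
			vma_pagesize = PMD_SIZE;
	}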

> +	}
> +
> +	if (writable)
> +		kvm_set_pfn_dirty(pfn);
> +
> +	if (fault_status != FSC_PERM)
> +		clean_dcache_guest_page(pfn, vma_pagesize);
> +
> +	if (exec_fault)
> +		invalidate_icache_guest_page(pfn, vma_pagesize);
>  
>  	if (hugetlb) {
>  		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
>  		new_pmd = pmd_mkhuge(new_pmd);
> -		if (writable) {
> +		if (writable)
>  			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
> -			kvm_set_pfn_dirty(pfn);
> -		}
>  
> -		if (fault_status != FSC_PERM)
> -			clean_dcache_guest_page(pfn, PMD_SIZE);
> -
> -		if (exec_fault) {
> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>  			new_pmd = kvm_s2pmd_mkexec(new_pmd);
> -			invalidate_icache_guest_page(pfn, PMD_SIZE);
> -		} else if (fault_status == FSC_PERM) {
> -			/* Preserve execute if XN was already cleared */
> -			if (stage2_is_exec(kvm, fault_ipa))
> -				new_pmd = kvm_s2pmd_mkexec(new_pmd);
> -		}
>  
>  		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
>  	} else {
> @@ -1546,21 +1568,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  
>  		if (writable) {
>  			new_pte = kvm_s2pte_mkwrite(new_pte);
> -			kvm_set_pfn_dirty(pfn);
>  			mark_page_dirty(kvm, gfn);
>  		}
>  
> -		if (fault_status != FSC_PERM)
> -			clean_dcache_guest_page(pfn, PAGE_SIZE);
> -
> -		if (exec_fault) {
> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>  			new_pte = kvm_s2pte_mkexec(new_pte);
> -			invalidate_icache_guest_page(pfn, PAGE_SIZE);
> -		} else if (fault_status == FSC_PERM) {
> -			/* Preserve execute if XN was already cleared */
> -			if (stage2_is_exec(kvm, fault_ipa))
> -				new_pte = kvm_s2pte_mkexec(new_pte);
> -		}
>  
>  		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
>  	}
> -- 
> 2.17.0
> 

Otherwise looks good.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/4] KVM: arm64: Add support for PUD hugepages at stage 2
  2018-05-01 10:26   ` Punit Agrawal
@ 2018-05-04 11:39     ` Christoffer Dall
  -1 siblings, 0 replies; 28+ messages in thread
From: Christoffer Dall @ 2018-05-04 11:39 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: kvmarm, linux-arm-kernel, marc.zyngier, linux-kernel,
	suzuki.poulose, Russell King, Catalin Marinas, Will Deacon

On Tue, May 01, 2018 at 11:26:59AM +0100, Punit Agrawal wrote:
> KVM currently supports PMD hugepages at stage 2. Extend the stage 2
> fault handling to add support for PUD hugepages.
> 
> Addition of pud hugepage support enables additional hugepage
> sizes (e.g., 1G with 4K granule) which can be useful on cores that
> support mapping larger block sizes in the TLB entries.
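
For reference, a rough sketch of where the 1G figure comes from,
assuming a 4K granule with four levels of translation (numbers are not
taken from the patch itself):

	/*
	 * PAGE_SHIFT = 12 and each table level resolves 9 bits, so:
	 *   PMD_SHIFT = 12 + 9  = 21  ->  a PMD block maps 2MB
	 *   PUD_SHIFT = 12 + 18 = 30  ->  a PUD block maps 1GB
	 */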
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>

> ---
>  arch/arm/include/asm/kvm_mmu.h         | 19 ++++++++++++
>  arch/arm64/include/asm/kvm_mmu.h       | 15 ++++++++++
>  arch/arm64/include/asm/pgtable-hwdef.h |  4 +++
>  arch/arm64/include/asm/pgtable.h       |  2 ++
>  virt/kvm/arm/mmu.c                     | 40 ++++++++++++++++++++++++--
>  5 files changed, 77 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 224c22c0a69c..155916dbdd7e 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -77,8 +77,11 @@ void kvm_clear_hyp_idmap(void);
>  
>  #define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
>  #define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
> +#define kvm_pfn_pud(pfn, prot)	(__pud(0))
>  
>  #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
> +/* No support for pud hugepages */
> +#define kvm_pud_mkhuge(pud)	(pud)
>  
>  /*
>   * The following kvm_*pud*() functionas are provided strictly to allow
> @@ -95,6 +98,22 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
>  	return false;
>  }
>  
> +static inline void kvm_set_pud(pud_t *pud, pud_t new_pud)
> +{
> +	BUG();
> +}
> +
> +static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
> +{
> +	BUG();
> +	return pud;
> +}
> +
> +static inline pud_t kvm_s2pud_mkexec(pud_t pud)
> +{
> +	BUG();
> +	return pud;
> +}
>  
>  static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
>  {
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index f440cf216a23..f49a68fcbf26 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -172,11 +172,14 @@ void kvm_clear_hyp_idmap(void);
>  
>  #define	kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
>  #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
> +#define kvm_set_pud(pudp, pud)		set_pud(pudp, pud)
>  
>  #define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
>  #define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
> +#define kvm_pfn_pud(pfn, prot)		pfn_pud(pfn, prot)
>  
>  #define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
> +#define kvm_pud_mkhuge(pud)		pud_mkhuge(pud)
>  
>  static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
>  {
> @@ -190,6 +193,12 @@ static inline pmd_t kvm_s2pmd_mkwrite(pmd_t pmd)
>  	return pmd;
>  }
>  
> +static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
> +{
> +	pud_val(pud) |= PUD_S2_RDWR;
> +	return pud;
> +}
> +
>  static inline pte_t kvm_s2pte_mkexec(pte_t pte)
>  {
>  	pte_val(pte) &= ~PTE_S2_XN;
> @@ -202,6 +211,12 @@ static inline pmd_t kvm_s2pmd_mkexec(pmd_t pmd)
>  	return pmd;
>  }
>  
> +static inline pud_t kvm_s2pud_mkexec(pud_t pud)
> +{
> +	pud_val(pud) &= ~PUD_S2_XN;
> +	return pud;
> +}
> +
>  static inline void kvm_set_s2pte_readonly(pte_t *ptep)
>  {
>  	pteval_t old_pteval, pteval;
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index fd208eac9f2a..e327665e94d1 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -193,6 +193,10 @@
>  #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
>  #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
>  
> +#define PUD_S2_RDONLY		(_AT(pudval_t, 1) << 6)   /* HAP[2:1] */
> +#define PUD_S2_RDWR		(_AT(pudval_t, 3) << 6)   /* HAP[2:1] */
> +#define PUD_S2_XN		(_AT(pudval_t, 2) << 53)  /* XN[1:0] */
> +
>  /*
>   * Memory Attribute override for Stage-2 (MemAttr[3:0])
>   */
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 7c4c8f318ba9..31ea9fda07e3 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -386,6 +386,8 @@ static inline int pmd_protnone(pmd_t pmd)
>  
>  #define pud_write(pud)		pte_write(pud_pte(pud))
>  
> +#define pud_mkhuge(pud)		(__pud(pud_val(pud) & ~PUD_TABLE_BIT))
> +
>  #define __pud_to_phys(pud)	__pte_to_phys(pud_pte(pud))
>  #define __phys_to_pud_val(phys)	__phys_to_pte_val(phys)
>  #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 3afbf693e045..1fb108d47dbd 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1036,6 +1036,26 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>  	return 0;
>  }
>  
> +static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> +			       phys_addr_t addr, const pud_t *new_pud)
> +{
> +	pud_t *pud, old_pud;
> +
> +	pud = stage2_get_pud(kvm, cache, addr);
> +	VM_BUG_ON(!pud);
> +
> +	old_pud = *pud;
> +	if (pud_present(old_pud)) {
> +		pud_clear(pud);
> +		kvm_tlb_flush_vmid_ipa(kvm, addr);
> +	} else {
> +		get_page(virt_to_page(pud));
> +	}
> +
> +	kvm_set_pud(pud, *new_pud);
> +	return 0;
> +}
> +
>  static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
>  {
>  	pmd_t *pmdp;
> @@ -1467,9 +1487,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>  
>  	vma_pagesize = vma_kernel_pagesize(vma);
> -	if (vma_pagesize == PMD_SIZE && !logging_active) {
> +	if ((vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) &&
> +	    !logging_active) {
> +		struct hstate *h = hstate_vma(vma);
> +
>  		hugetlb = true;
> -		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
> +		gfn = (fault_ipa & huge_page_mask(h)) >> PAGE_SHIFT;
>  	} else {
>  		/*
>  		 * Pages belonging to memslots that don't have the same
> @@ -1556,7 +1579,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (exec_fault)
>  		invalidate_icache_guest_page(pfn, vma_pagesize);
>  
> -	if (hugetlb) {
> +	if (vma_pagesize == PUD_SIZE) {
> +		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
> +
> +		new_pud = kvm_pud_mkhuge(new_pud);
> +		if (writable)
> +			new_pud = kvm_s2pud_mkwrite(new_pud);
> +
> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
> +			new_pud = kvm_s2pud_mkexec(new_pud);
> +
> +		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
> +	} else if (vma_pagesize == PMD_SIZE) {
>  		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
>  
>  		new_pmd = kvm_pmd_mkhuge(new_pmd);
> -- 
> 2.17.0
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 2/4] KVM: arm/arm64: Introduce helpers to manupulate page table entries
  2018-05-01 13:00       ` Punit Agrawal
@ 2018-05-04 11:40         ` Christoffer Dall
  -1 siblings, 0 replies; 28+ messages in thread
From: Christoffer Dall @ 2018-05-04 11:40 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: Suzuki K Poulose, kvmarm, linux-arm-kernel, marc.zyngier,
	linux-kernel, Russell King, Catalin Marinas, Will Deacon

On Tue, May 01, 2018 at 02:00:43PM +0100, Punit Agrawal wrote:
> Hi Suzuki,
> 
> Thanks for having a look.
> 
> Suzuki K Poulose <Suzuki.Poulose@arm.com> writes:
> 
> > On 01/05/18 11:26, Punit Agrawal wrote:
> >> Introduce helpers to abstract architectural handling of the conversion
> >> of pfn to page table entries and marking a PMD page table entry as a
> >> block entry.
> >>
> >> The helpers are introduced in preparation for supporting PUD hugepages
> >> at stage 2 - which are supported on arm64 but do not exist on arm.
> >
> > Punit,
> >
> > The changes are fine by me. However, we usually do not define kvm_*
> > accessors for something which we know matches the host variant,
> > i.e., PMD and PTE helpers, which are always present and which we use
> > directly (see unmap_stage2_pmds, for example).
> 
> In general, I agree - it makes sense to avoid duplication.
> 
> Having said that, the helpers here allow following a common pattern for
> handling the various page sizes - pte, pmd and pud - during stage 2
> fault handling (see patch 4).
> 
> As you've said you're OK with this change, I'd prefer to keep this patch
> but will drop it if any other reviewers are concerned about the
> duplication as well.

There are arguments for both keeping the kvm_ wrappers and not having
them.  I see no big harm or increase in complexity by keeping them
though.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 1/4] KVM: arm/arm64: Share common code in user_mem_abort()
  2018-05-04 11:38     ` Christoffer Dall
@ 2018-05-04 16:22       ` Punit Agrawal
  -1 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-04 16:22 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, kvmarm, linux-arm-kernel, linux-kernel

Christoffer Dall <christoffer.dall@arm.com> writes:

> On Tue, May 01, 2018 at 11:26:56AM +0100, Punit Agrawal wrote:
>> The code for operations such as marking the pfn as dirty, and
>> dcache/icache maintenance during stage 2 fault handling is duplicated
>> between normal pages and PMD hugepages.
>> 
>> Instead of creating another copy of the operations when we introduce
>> PUD hugepages, let's share them across the different pagesizes.
>> 
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  virt/kvm/arm/mmu.c | 66 +++++++++++++++++++++++++++-------------------
>>  1 file changed, 39 insertions(+), 27 deletions(-)
>> 
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 7f6a944db23d..686fc6a4b866 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c

[...]

>> @@ -1517,28 +1533,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	if (mmu_notifier_retry(kvm, mmu_seq))
>>  		goto out_unlock;
>>  
>> -	if (!hugetlb && !force_pte)
>> +	if (!hugetlb && !force_pte) {
>>  		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
>> +		/*
>> +		 * Only PMD_SIZE transparent hugepages(THP) are
>> +		 * currently supported. This code will need to be
>> +		 * updated to support other THP sizes.
>> +		 */
>> +		if (hugetlb)
>> +			vma_pagesize = PMD_SIZE;
>
> nit: this is a bit of a trap waiting to happen, as the suggested
> semantics of hugetlb is now hugetlbfs and not THP.
>
> It may be slightly nicer to do:
>
> 		if (transparent_hugepage_adjust(&pfn, &fault_ipa))
> 			vma_pagesize = PMD_SIZE;

I should've noticed this.

I'll incorporate your suggestion and update the condition below that
uses hugetlb so that it relies on vma_pagesize instead.
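
Concretely, something along these lines (a sketch of the planned
change, not the final code):

	if (!hugetlb && !force_pte && transparent_hugepage_adjust(&pfn, &fault_ipa))
		vma_pagesize = PMD_SIZE;

	/* ... */

	if (vma_pagesize == PMD_SIZE) {
		/* build and install the huge PMD, as before */
	} else {
		/* build and install the PTE, as before */
	}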

Thanks,
Punit

>
>> +	}
>> +
>> +	if (writable)
>> +		kvm_set_pfn_dirty(pfn);
>> +
>> +	if (fault_status != FSC_PERM)
>> +		clean_dcache_guest_page(pfn, vma_pagesize);
>> +
>> +	if (exec_fault)
>> +		invalidate_icache_guest_page(pfn, vma_pagesize);
>>  
>>  	if (hugetlb) {
>>  		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
>>  		new_pmd = pmd_mkhuge(new_pmd);
>> -		if (writable) {
>> +		if (writable)
>>  			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
>> -			kvm_set_pfn_dirty(pfn);
>> -		}
>>  
>> -		if (fault_status != FSC_PERM)
>> -			clean_dcache_guest_page(pfn, PMD_SIZE);
>> -
>> -		if (exec_fault) {
>> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>>  			new_pmd = kvm_s2pmd_mkexec(new_pmd);
>> -			invalidate_icache_guest_page(pfn, PMD_SIZE);
>> -		} else if (fault_status == FSC_PERM) {
>> -			/* Preserve execute if XN was already cleared */
>> -			if (stage2_is_exec(kvm, fault_ipa))
>> -				new_pmd = kvm_s2pmd_mkexec(new_pmd);
>> -		}
>>  
>>  		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
>>  	} else {
>> @@ -1546,21 +1568,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  
>>  		if (writable) {
>>  			new_pte = kvm_s2pte_mkwrite(new_pte);
>> -			kvm_set_pfn_dirty(pfn);
>>  			mark_page_dirty(kvm, gfn);
>>  		}
>>  
>> -		if (fault_status != FSC_PERM)
>> -			clean_dcache_guest_page(pfn, PAGE_SIZE);
>> -
>> -		if (exec_fault) {
>> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
>>  			new_pte = kvm_s2pte_mkexec(new_pte);
>> -			invalidate_icache_guest_page(pfn, PAGE_SIZE);
>> -		} else if (fault_status == FSC_PERM) {
>> -			/* Preserve execute if XN was already cleared */
>> -			if (stage2_is_exec(kvm, fault_ipa))
>> -				new_pte = kvm_s2pte_mkexec(new_pte);
>> -		}
>>  
>>  		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
>>  	}
>> -- 
>> 2.17.0
>> 
>
> Otherwise looks good.
>
> Thanks,
> -Christoffer
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/4] KVM: arm64: Add support for PUD hugepages at stage 2
  2018-05-01 10:26   ` Punit Agrawal
@ 2018-05-15 16:56     ` Catalin Marinas
  -1 siblings, 0 replies; 28+ messages in thread
From: Catalin Marinas @ 2018-05-15 16:56 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: kvmarm, suzuki.poulose, marc.zyngier, Will Deacon,
	christoffer.dall, linux-kernel, Russell King, linux-arm-kernel

On Tue, May 01, 2018 at 11:26:59AM +0100, Punit Agrawal wrote:
> KVM currently supports PMD hugepages at stage 2. Extend the stage 2
> fault handling to add support for PUD hugepages.
> 
> Addition of pud hugepage support enables additional hugepage
> sizes (e.g., 1G with 4K granule) which can be useful on cores that
> support mapping larger block sizes in the TLB entries.
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm/include/asm/kvm_mmu.h         | 19 ++++++++++++
>  arch/arm64/include/asm/kvm_mmu.h       | 15 ++++++++++
>  arch/arm64/include/asm/pgtable-hwdef.h |  4 +++
>  arch/arm64/include/asm/pgtable.h       |  2 ++
>  virt/kvm/arm/mmu.c                     | 40 ++++++++++++++++++++++++--
>  5 files changed, 77 insertions(+), 3 deletions(-)

Since this patch touches a couple of core arm64 files:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/4] KVM: arm64: Add support for PUD hugepages at stage 2
  2018-05-15 16:56     ` Catalin Marinas
  (?)
@ 2018-05-15 17:12       ` Punit Agrawal
  -1 siblings, 0 replies; 28+ messages in thread
From: Punit Agrawal @ 2018-05-15 17:12 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: kvmarm, suzuki.poulose, marc.zyngier, Will Deacon,
	christoffer.dall, linux-kernel, Russell King, linux-arm-kernel

Catalin Marinas <catalin.marinas@arm.com> writes:

> On Tue, May 01, 2018 at 11:26:59AM +0100, Punit Agrawal wrote:
>> KVM currently supports PMD hugepages at stage 2. Extend the stage 2
>> fault handling to add support for PUD hugepages.
>> 
>> Addition of pud hugepage support enables additional hugepage
>> sizes (e.g., 1G with 4K granule) which can be useful on cores that
>> support mapping larger block sizes in the TLB entries.
>> 
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Russell King <linux@armlinux.org.uk>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> ---
>>  arch/arm/include/asm/kvm_mmu.h         | 19 ++++++++++++
>>  arch/arm64/include/asm/kvm_mmu.h       | 15 ++++++++++
>>  arch/arm64/include/asm/pgtable-hwdef.h |  4 +++
>>  arch/arm64/include/asm/pgtable.h       |  2 ++
>>  virt/kvm/arm/mmu.c                     | 40 ++++++++++++++++++++++++--
>>  5 files changed, 77 insertions(+), 3 deletions(-)
>
> Since this patch touches a couple of core arm64 files:
>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks Catalin.

I posted a v3 with minor changes yesterday[0]. Can you comment there?
Or maybe Marc can apply the tag while merging the patches.

[0] https://lkml.org/lkml/2018/5/14/912

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2018-05-15 17:13 UTC | newest]

Thread overview: 28+ messages
2018-05-01 10:26 [PATCH v2 0/4] KVM: Support PUD hugepages at stage 2 Punit Agrawal
2018-05-01 10:26 ` Punit Agrawal
2018-05-01 10:26 ` [PATCH v2 1/4] KVM: arm/arm64: Share common code in user_mem_abort() Punit Agrawal
2018-05-01 10:26   ` Punit Agrawal
2018-05-04 11:38   ` Christoffer Dall
2018-05-04 11:38     ` Christoffer Dall
2018-05-04 16:22     ` Punit Agrawal
2018-05-04 16:22       ` Punit Agrawal
2018-05-01 10:26 ` [PATCH v2 2/4] KVM: arm/arm64: Introduce helpers to manupulate page table entries Punit Agrawal
2018-05-01 10:26   ` Punit Agrawal
2018-05-01 10:36   ` Suzuki K Poulose
2018-05-01 10:36     ` Suzuki K Poulose
2018-05-01 13:00     ` Punit Agrawal
2018-05-01 13:00       ` Punit Agrawal
2018-05-01 13:00       ` Punit Agrawal
2018-05-04 11:40       ` Christoffer Dall
2018-05-04 11:40         ` Christoffer Dall
2018-05-01 10:26 ` [PATCH v2 3/4] KVM: arm64: Support dirty page tracking for PUD hugepages Punit Agrawal
2018-05-01 10:26   ` Punit Agrawal
2018-05-01 10:26 ` [PATCH v2 4/4] KVM: arm64: Add support for PUD hugepages at stage 2 Punit Agrawal
2018-05-01 10:26   ` Punit Agrawal
2018-05-04 11:39   ` Christoffer Dall
2018-05-04 11:39     ` Christoffer Dall
2018-05-15 16:56   ` Catalin Marinas
2018-05-15 16:56     ` Catalin Marinas
2018-05-15 17:12     ` Punit Agrawal
2018-05-15 17:12       ` Punit Agrawal
2018-05-15 17:12       ` Punit Agrawal
