* [RFC PATCH 00/12] kvm-arm: Add stage2 page table walker
From: Suzuki K Poulose @ 2016-03-14 16:52 UTC
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

This series adds support for stage2 page table helpers and makes
the core kvm-arm MMU code use them. At the moment we assume that
the host/hyp and the stage2 page tables have the same number of
levels, and hence use the host level accessors (except for a few
hooks, e.g. kvm_p.d_addr_end).

On arm32, the only change w.r.t. the page tables is dealing
with > 32bit physical addresses.

However, on arm64 the hardware supports concatenation of up to 16
tables at the entry level, which could affect:
 1) the number of entries in the PGD table (up to 16 * PTRS_PER_PTE)
 2) the number of page table levels (which may be reduced).

Also, depending on the VA_BITS of the host kernel, the number of page table
levels for the host and stage2 (40bit IPA) could differ. At present, we insert
(up to) one fake software page table level (the hardware is not aware of it;
it is only used by the OS to walk the table) to bring the number of levels up
to that of the host/hyp table. However, with 16K pages + 48bit VA and a 40bit
IPA, we could end up with 2 fake levels, which complicates the code.
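
As a rough worked example of the level arithmetic (derived here for
illustration, not quoted from the patches): with 16K pages the page offset
is 14 bits and each table level resolves PAGE_SHIFT - 3 = 11 bits. A 48bit
host VA therefore needs 4 levels, while a 40bit IPA leaves only 26 bits to
translate; concatenating 16 tables at the entry level resolves 11 + 4 = 15
bits there, so stage2 can be walked with just 2 hardware levels: hence the
2 fake levels needed to match the host's 4.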

This series adds support for stage2 translation helpers and plugs them
into the core KVM MMU code, switching between the hyp and stage2 tables
based on the 'kvm' parameter (i.e., kvm ? stage2_xxx : host_xxx).
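
A minimal sketch of that dispatch pattern (illustrative only; the real
helpers and their exact shape land in patches 6-9):

	static inline phys_addr_t
	kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
	{
		/* 'kvm' selects the table: stage2 when set, host/hyp otherwise */
		return kvm ? stage2_pmd_addr_end(addr, end)
			   : pmd_addr_end(addr, end);
	}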

We then define the stage2 helpers based on the actual number of
hardware page table levels on arm64 (see patch 10 for details).

Finally, we enable KVM with 16K pages, which was waiting on this series.

Patch 1: Moves the fake page table level handling code into arch
	 specific files.

Patches 2-4: Contain a fix and some cleanups.

Patches 5,6: Prepare the existing kvm_p{g,u,m}d_ wrappers for choosing the
	 appropriate tables.

Patches 7,8: Add kvm_ wrappers for the other page table helpers which differ
	 between hyp and stage2, so that they can later be switched to the
	 correct table on arm and arm64.

Patch 9: Switches the kvm-arm MMU code to the kvm_ wrappers for all page
	 table walks, and to explicit accessors wherever applicable.

Patch 10: Defines the real stage2 helpers based on the hardware page table
	 and gets rid of the fake page table levels.

Patch 11: Removes the fake pgd handling helpers from core code.

Patch 12: Enables KVM with 16K pages, now that we can support any configuration.

Applies on top of v4.5-rc5.

Tested this series with LTP/Trinity/hackbench on VMs + the host:

arm64: all possible PAGE_SIZE + VA_BITS combinations for the host, on real
hardware and FPGAs.
arm32: TC2

Suzuki K Poulose (12):
  kvm arm: Move fake PGD handling to arch specific files
  arm64: kvm: Fix {V}TCR_EL2_TG0 mask
  arm64: kvm: Cleanup VTCR_EL2/VTTBR computation
  kvm-arm: Rename kvm_pmd_huge to huge_pmd
  kvm-arm: Move kvm_pud_huge to arch specific headers
  kvm-arm: Pass kvm parameter for pagetable helpers
  kvm: arm: Introduce stage2 page table helpers
  kvm: arm64: Introduce stage2 page table helpers
  kvm-arm: Switch to kvm pagetable helpers
  kvm: arm64: Get rid of fake page table levels
  kvm-arm: Cleanup stage2 pgd handling
  arm64: kvm: Add support for 16K pages

 arch/arm/include/asm/kvm_mmu.h                |  121 ++++++++++++++---
 arch/arm/include/asm/stage2_pgtable.h         |   55 ++++++++
 arch/arm/kvm/arm.c                            |    2 +-
 arch/arm/kvm/mmu.c                            |  159 ++++++++--------------
 arch/arm64/include/asm/kvm_arm.h              |   37 +++--
 arch/arm64/include/asm/kvm_mmu.h              |  181 ++++++++++++++++---------
 arch/arm64/include/asm/stage2_pgtable-nopmd.h |   26 ++++
 arch/arm64/include/asm/stage2_pgtable-nopud.h |   23 ++++
 arch/arm64/include/asm/stage2_pgtable.h       |  134 ++++++++++++++++++
 arch/arm64/kvm/Kconfig                        |    1 -
 10 files changed, 538 insertions(+), 201 deletions(-)
 create mode 100644 arch/arm/include/asm/stage2_pgtable.h
 create mode 100644 arch/arm64/include/asm/stage2_pgtable-nopmd.h
 create mode 100644 arch/arm64/include/asm/stage2_pgtable-nopud.h
 create mode 100644 arch/arm64/include/asm/stage2_pgtable.h

-- 
1.7.9.5

* [RFC PATCH 01/12] kvm arm: Move fake PGD handling to arch specific files
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

Rearrange the code for fake pgd handling, which is applicable
only to arm64. This will be removed later, once we introduce
the stage2 page table walker macros.
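
For context, a rough picture of the arrangement being moved (a sketch,
not taken from the patch itself):

	/*
	 * kvm->arch.pgd --> fake PGD: kmalloc'ed, software-only, exists
	 *                   so the host page table macros can walk it.
	 *       |
	 *       +--> hwpgd: the real first-level stage2 table; this is
	 *            what the VTTBR actually points to.
	 */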

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |   11 +++++++--
 arch/arm/kvm/mmu.c               |   47 ++++++--------------------------------
 arch/arm64/include/asm/kvm_mmu.h |   43 ++++++++++++++++++++++++++++++++++
 3 files changed, 59 insertions(+), 42 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index a520b79..e2b2a5a 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -161,8 +161,6 @@ static inline bool kvm_page_empty(void *ptr)
 #define kvm_pmd_table_empty(kvm, pmdp) kvm_page_empty(pmdp)
 #define kvm_pud_table_empty(kvm, pudp) (0)
 
-#define KVM_PREALLOC_LEVEL	0
-
 static inline void *kvm_get_hwpgd(struct kvm *kvm)
 {
 	return kvm->arch.pgd;
@@ -173,6 +171,15 @@ static inline unsigned int kvm_get_hwpgd_size(void)
 	return PTRS_PER_S2_PGD * sizeof(pgd_t);
 }
 
+static inline pgd_t *kvm_setup_fake_pgd(pgd_t *hwpgd)
+{
+	return hwpgd;
+}
+
+static inline void kvm_free_fake_pgd(pgd_t *pgd)
+{
+}
+
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)	__cpuc_flush_dcache_area((a), (l))
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index aba61fd..a16631c 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -677,47 +677,16 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
 	if (!hwpgd)
 		return -ENOMEM;
 
-	/* When the kernel uses more levels of page tables than the
+	/*
+	 * When the kernel uses more levels of page tables than the
 	 * guest, we allocate a fake PGD and pre-populate it to point
 	 * to the next-level page table, which will be the real
 	 * initial page table pointed to by the VTTBR.
-	 *
-	 * When KVM_PREALLOC_LEVEL==2, we allocate a single page for
-	 * the PMD and the kernel will use folded pud.
-	 * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD
-	 * pages.
 	 */
-	if (KVM_PREALLOC_LEVEL > 0) {
-		int i;
-
-		/*
-		 * Allocate fake pgd for the page table manipulation macros to
-		 * work.  This is not used by the hardware and we have no
-		 * alignment requirement for this allocation.
-		 */
-		pgd = kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t),
-				GFP_KERNEL | __GFP_ZERO);
-
-		if (!pgd) {
-			kvm_free_hwpgd(hwpgd);
-			return -ENOMEM;
-		}
-
-		/* Plug the HW PGD into the fake one. */
-		for (i = 0; i < PTRS_PER_S2_PGD; i++) {
-			if (KVM_PREALLOC_LEVEL == 1)
-				pgd_populate(NULL, pgd + i,
-					     (pud_t *)hwpgd + i * PTRS_PER_PUD);
-			else if (KVM_PREALLOC_LEVEL == 2)
-				pud_populate(NULL, pud_offset(pgd, 0) + i,
-					     (pmd_t *)hwpgd + i * PTRS_PER_PMD);
-		}
-	} else {
-		/*
-		 * Allocate actual first-level Stage-2 page table used by the
-		 * hardware for Stage-2 page table walks.
-		 */
-		pgd = (pgd_t *)hwpgd;
+	pgd = kvm_setup_fake_pgd(hwpgd);
+	if (IS_ERR(pgd)) {
+		kvm_free_hwpgd(hwpgd);
+		return PTR_ERR(pgd);
 	}
 
 	kvm_clean_pgd(pgd);
@@ -824,9 +793,7 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
 	kvm_free_hwpgd(kvm_get_hwpgd(kvm));
-	if (KVM_PREALLOC_LEVEL > 0)
-		kfree(kvm->arch.pgd);
-
+	kvm_free_fake_pgd(kvm->arch.pgd);
 	kvm->arch.pgd = NULL;
 }
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 7364339..07a09b2 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -198,6 +198,49 @@ static inline unsigned int kvm_get_hwpgd_size(void)
 	return PTRS_PER_S2_PGD * sizeof(pgd_t);
 }
 
+/*
+ * Allocate fake pgd for the host kernel page table macros to work.
+ * This is not used by the hardware and we have no alignment
+ * requirement for this allocation.
+ */
+static inline pgd_t *kvm_setup_fake_pgd(pgd_t *hwpgd)
+{
+	int i;
+	pgd_t *pgd;
+
+	if (!KVM_PREALLOC_LEVEL)
+		return hwpgd;
+
+	/*
+	 * When KVM_PREALLOC_LEVEL==2, we allocate a single page for
+	 * the PMD and the kernel will use folded pud.
+	 * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD
+	 * pages.
+	 */
+
+	pgd = kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t),
+			GFP_KERNEL | __GFP_ZERO);
+	if (!pgd)
+		return ERR_PTR(-ENOMEM);
+
+	/* Plug the HW PGD into the fake one. */
+	for (i = 0; i < PTRS_PER_S2_PGD; i++) {
+		if (KVM_PREALLOC_LEVEL == 1)
+			pgd_populate(NULL, pgd + i,
+				     (pud_t *)hwpgd + i * PTRS_PER_PUD);
+		else if (KVM_PREALLOC_LEVEL == 2)
+			pud_populate(NULL, pud_offset(pgd, 0) + i,
+				     (pmd_t *)hwpgd + i * PTRS_PER_PMD);
+	}
+
+	return pgd;
+}
+
+static inline void kvm_free_fake_pgd(pgd_t *pgd)
+{
+	if (KVM_PREALLOC_LEVEL > 0)
+		kfree(pgd);
+}
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
-- 
1.7.9.5

* [RFC PATCH 02/12] arm64: kvm: Fix {V}TCR_EL2_TG0 mask
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

{V}TCR_EL2_TG0 is a 2-bit wide field, where:

 00 - 4K
 01 - 64K
 10 - 16K

But we mask only 1 bit, which has worked so far since we never
cared about 16K. Fix the masks for 16K support.
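
To see why the 1-bit mask breaks for 16K (worked out here from the
encodings above, for illustration):

	/* TG0 occupies bits [15:14] */
	(2 << 14) & (1 << 14) == 0		/* old mask: 16K (10) misread as 4K (00) */
	(2 << 14) & (3 << 14) == (2 << 14)	/* new mask: 16K extracted intact */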

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: kvmarm@lists.cs.columbia.edu
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/kvm_arm.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index d201d4b..b7d61e4 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -99,7 +99,7 @@
 #define TCR_EL2_TBI	(1 << 20)
 #define TCR_EL2_PS	(7 << 16)
 #define TCR_EL2_PS_40B	(2 << 16)
-#define TCR_EL2_TG0	(1 << 14)
+#define TCR_EL2_TG0	(3 << 14)
 #define TCR_EL2_SH0	(3 << 12)
 #define TCR_EL2_ORGN0	(3 << 10)
 #define TCR_EL2_IRGN0	(3 << 8)
@@ -110,7 +110,7 @@
 /* VTCR_EL2 Registers bits */
 #define VTCR_EL2_RES1		(1 << 31)
 #define VTCR_EL2_PS_MASK	(7 << 16)
-#define VTCR_EL2_TG0_MASK	(1 << 14)
+#define VTCR_EL2_TG0_MASK	(3 << 14)
 #define VTCR_EL2_TG0_4K		(0 << 14)
 #define VTCR_EL2_TG0_64K	(1 << 14)
 #define VTCR_EL2_SH0_MASK	(3 << 12)
-- 
1.7.9.5

* [RFC PATCH 03/12] arm64: kvm: Cleanup VTCR_EL2/VTTBR computation
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

No functional changes. Group the common bits of the VTCR_EL2
initialisation for better readability. The granule size and the
entry level are controlled by the page size.
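
For reference, with a 40bit IPA (T0SZ = 24) the definitions in the diff
below work out to VTTBR_X = 37 - 24 = 13 for 4K and 38 - 24 = 14 for 64K,
i.e. the alignment required of the table base installed in VTTBR (my
reading of tables D4-23/D4-25 in ARM DDI 0487A.b, stated here purely for
illustration).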

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/kvm_arm.h |   22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index b7d61e4..d49dd50 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -139,32 +139,30 @@
  * The magic numbers used for VTTBR_X in this patch can be found in Tables
  * D4-23 and D4-25 in ARM DDI 0487A.b.
  */
+#define VTCR_EL2_COMMON_BITS	(VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
+				 VTCR_EL2_IRGN0_WBWA | VTCR_EL2_SL0_LVL1 | \
+				 VTCR_EL2_RES1 | VTCR_EL2_T0SZ_40B)
 #ifdef CONFIG_ARM64_64K_PAGES
 /*
  * Stage2 translation configuration:
- * 40bits input  (T0SZ = 24)
  * 64kB pages (TG0 = 1)
  * 2 level page tables (SL = 1)
  */
-#define VTCR_EL2_FLAGS		(VTCR_EL2_TG0_64K | VTCR_EL2_SH0_INNER | \
-				 VTCR_EL2_ORGN0_WBWA | VTCR_EL2_IRGN0_WBWA | \
-				 VTCR_EL2_SL0_LVL1 | VTCR_EL2_T0SZ_40B | \
-				 VTCR_EL2_RES1)
-#define VTTBR_X		(38 - VTCR_EL2_T0SZ_40B)
+#define VTCR_EL2_TGRAN_FLAGS	(VTCR_EL2_TG0_64K | VTCR_EL2_SL0_LVL1)
+#define VTTBR_X_TGRAN_MAGIC		38
 #else
 /*
  * Stage2 translation configuration:
- * 40bits input  (T0SZ = 24)
  * 4kB pages (TG0 = 0)
  * 3 level page tables (SL = 1)
  */
-#define VTCR_EL2_FLAGS		(VTCR_EL2_TG0_4K | VTCR_EL2_SH0_INNER | \
-				 VTCR_EL2_ORGN0_WBWA | VTCR_EL2_IRGN0_WBWA | \
-				 VTCR_EL2_SL0_LVL1 | VTCR_EL2_T0SZ_40B | \
-				 VTCR_EL2_RES1)
-#define VTTBR_X		(37 - VTCR_EL2_T0SZ_40B)
+#define VTCR_EL2_TGRAN_FLAGS		(VTCR_EL2_TG0_4K | VTCR_EL2_SL0_LVL1)
+#define VTTBR_X_TGRAN_MAGIC		37
 #endif
 
+#define VTCR_EL2_FLAGS		(VTCR_EL2_TGRAN_FLAGS | VTCR_EL2_COMMON_BITS)
+#define VTTBR_X			((VTTBR_X_TGRAN_MAGIC) - VTCR_EL2_T0SZ_40B)
+
 #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
 #define VTTBR_BADDR_MASK  (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
 #define VTTBR_VMID_SHIFT  (UL(48))
-- 
1.7.9.5

* [RFC PATCH 04/12] kvm-arm: Rename kvm_pmd_huge to huge_pmd
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

kvm_pmd_huge doesn't have any dependency on the page table
where the pmd lives (i.e., hyp vs. stage2). So, rename it to
huge_pmd() to make that explicit.

The kvm_p.d_* wrappers will be reserved for helpers which differ
between hyp and stage2.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/kvm/mmu.c |   18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index a16631c..3b038bb 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -44,7 +44,7 @@ static phys_addr_t hyp_idmap_vector;
 
 #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
 
-#define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
+#define huge_pmd(_x)		(pmd_huge(_x) || pmd_trans_huge(_x))
 #define kvm_pud_huge(_x)	pud_huge(_x)
 
 #define KVM_S2PTE_FLAG_IS_IOMAP		(1UL << 0)
@@ -114,7 +114,7 @@ static bool kvm_is_device_pfn(unsigned long pfn)
  */
 static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
 {
-	if (!kvm_pmd_huge(*pmd))
+	if (!huge_pmd(*pmd))
 		return;
 
 	pmd_clear(pmd);
@@ -176,7 +176,7 @@ static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
 static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
 {
 	pte_t *pte_table = pte_offset_kernel(pmd, 0);
-	VM_BUG_ON(kvm_pmd_huge(*pmd));
+	VM_BUG_ON(huge_pmd(*pmd));
 	pmd_clear(pmd);
 	kvm_tlb_flush_vmid_ipa(kvm, addr);
 	pte_free_kernel(NULL, pte_table);
@@ -239,7 +239,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
 	do {
 		next = kvm_pmd_addr_end(addr, end);
 		if (!pmd_none(*pmd)) {
-			if (kvm_pmd_huge(*pmd)) {
+			if (huge_pmd(*pmd)) {
 				pmd_t old_pmd = *pmd;
 
 				pmd_clear(pmd);
@@ -325,7 +325,7 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
 	do {
 		next = kvm_pmd_addr_end(addr, end);
 		if (!pmd_none(*pmd)) {
-			if (kvm_pmd_huge(*pmd))
+			if (huge_pmd(*pmd))
 				kvm_flush_dcache_pmd(*pmd);
 			else
 				stage2_flush_ptes(kvm, pmd, addr, next);
@@ -1043,7 +1043,7 @@ static void stage2_wp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
 	do {
 		next = kvm_pmd_addr_end(addr, end);
 		if (!pmd_none(*pmd)) {
-			if (kvm_pmd_huge(*pmd)) {
+			if (huge_pmd(*pmd)) {
 				if (!kvm_s2pmd_readonly(pmd))
 					kvm_set_s2pmd_readonly(pmd);
 			} else {
@@ -1324,7 +1324,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	if (!pmd || pmd_none(*pmd))	/* Nothing there */
 		goto out;
 
-	if (kvm_pmd_huge(*pmd)) {	/* THP, HugeTLB */
+	if (huge_pmd(*pmd)) {	/* THP, HugeTLB */
 		*pmd = pmd_mkyoung(*pmd);
 		pfn = pmd_pfn(*pmd);
 		pfn_valid = true;
@@ -1532,7 +1532,7 @@ static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
 	if (!pmd || pmd_none(*pmd))	/* Nothing there */
 		return 0;
 
-	if (kvm_pmd_huge(*pmd)) {	/* THP, HugeTLB */
+	if (huge_pmd(*pmd)) {	/* THP, HugeTLB */
 		if (pmd_young(*pmd)) {
 			*pmd = pmd_mkold(*pmd);
 			return 1;
@@ -1562,7 +1562,7 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
 	if (!pmd || pmd_none(*pmd))	/* Nothing there */
 		return 0;
 
-	if (kvm_pmd_huge(*pmd))		/* THP, HugeTLB */
+	if (huge_pmd(*pmd))		/* THP, HugeTLB */
 		return pmd_young(*pmd);
 
 	pte = pte_offset_kernel(pmd, gpa);
-- 
1.7.9.5

* [RFC PATCH 05/12] kvm-arm: Move kvm_pud_huge to arch specific headers
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

Move kvm_pud_huge to asm/kvm_mmu.h since, on arm64, it really
depends on the number of page table levels in the table it deals
with (hyp vs. stage2).

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |    1 +
 arch/arm/kvm/mmu.c               |    1 -
 arch/arm64/include/asm/kvm_mmu.h |    1 +
 3 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e2b2a5a..4448e77 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -135,6 +135,7 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
 }
 
+#define kvm_pud_huge(_x)	pud_huge(_x)
 
 /* Open coded p*d_addr_end that can deal with 64bit addresses */
 #define kvm_pgd_addr_end(addr, end)					\
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3b038bb..d1e9a71 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -45,7 +45,6 @@ static phys_addr_t hyp_idmap_vector;
 #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
 
 #define huge_pmd(_x)		(pmd_huge(_x) || pmd_trans_huge(_x))
-#define kvm_pud_huge(_x)	pud_huge(_x)
 
 #define KVM_S2PTE_FLAG_IS_IOMAP		(1UL << 0)
 #define KVM_S2_FLAG_LOGGING_ACTIVE	(1UL << 1)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 07a09b2..a01d87d 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -141,6 +141,7 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
 }
 
+#define kvm_pud_huge(_x)	pud_huge(_x)
 
 #define kvm_pgd_addr_end(addr, end)	pgd_addr_end(addr, end)
 #define kvm_pud_addr_end(addr, end)	pud_addr_end(addr, end)
-- 
1.7.9.5

* [RFC PATCH 06/12] kvm-arm: Pass kvm parameter for pagetable helpers
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC
  To: christoffer.dall, marc.zyngier
  Cc: kvmarm, linux-arm-kernel, mark.rutland, kvm, will.deacon,
	catalin.marinas, Suzuki K Poulose

Pass 'kvm' to the existing kvm_p.d_* page table wrappers to prepare
them for choosing between the hyp and stage2 page tables. No
functional changes yet. While at it, convert them to static inline
functions.
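
The resulting call-site shape (illustrative):

	next = kvm_pmd_addr_end(kvm, addr, end);   /* 'kvm' threaded through where available */
	next = kvm_pmd_addr_end(NULL, addr, end);  /* NULL where no struct kvm is at hand;
						      the helpers ignore it for now */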

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |   38 +++++++++++++++++++++++++++-----------
 arch/arm/kvm/mmu.c               |   34 +++++++++++++++++-----------------
 arch/arm64/include/asm/kvm_mmu.h |   31 ++++++++++++++++++++++++++-----
 3 files changed, 70 insertions(+), 33 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 4448e77..17c6781 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -45,6 +45,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/highmem.h>
+#include <linux/hugetlb.h>
 #include <asm/cacheflush.h>
 #include <asm/pgalloc.h>
 
@@ -135,22 +136,37 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
 }
 
-#define kvm_pud_huge(_x)	pud_huge(_x)
+static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
+{
+	return pud_huge(pud);
+}
+
 
 /* Open coded p*d_addr_end that can deal with 64bit addresses */
-#define kvm_pgd_addr_end(addr, end)					\
-({	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;		\
-	(__boundary - 1 < (end) - 1)? __boundary: (end);		\
-})
+static inline phys_addr_t
+kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;
+	return (boundary - 1 < end - 1) ? boundary : end;
+}
 
-#define kvm_pud_addr_end(addr,end)		(end)
+static inline phys_addr_t
+kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return end;
+}
 
-#define kvm_pmd_addr_end(addr, end)					\
-({	u64 __boundary = ((addr) + PMD_SIZE) & PMD_MASK;		\
-	(__boundary - 1 < (end) - 1)? __boundary: (end);		\
-})
+static inline phys_addr_t
+kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t boundary = (addr + PMD_SIZE) & PMD_MASK;
+	return (boundary - 1 < end - 1) ? boundary : end;
+}
 
-#define kvm_pgd_index(addr)			pgd_index(addr)
+static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
+{
+	return pgd_index(addr);
+}
 
 static inline bool kvm_page_empty(void *ptr)
 {
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index d1e9a71..22b4c99 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -165,7 +165,7 @@ static void clear_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
 static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
 {
 	pmd_t *pmd_table = pmd_offset(pud, 0);
-	VM_BUG_ON(pud_huge(*pud));
+	VM_BUG_ON(kvm_pud_huge(kvm, *pud));
 	pud_clear(pud);
 	kvm_tlb_flush_vmid_ipa(kvm, addr);
 	pmd_free(NULL, pmd_table);
@@ -236,7 +236,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
 
 	start_pmd = pmd = pmd_offset(pud, addr);
 	do {
-		next = kvm_pmd_addr_end(addr, end);
+		next = kvm_pmd_addr_end(kvm, addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd)) {
 				pmd_t old_pmd = *pmd;
@@ -265,9 +265,9 @@ static void unmap_puds(struct kvm *kvm, pgd_t *pgd,
 
 	start_pud = pud = pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(addr, end);
+		next = kvm_pud_addr_end(kvm, addr, end);
 		if (!pud_none(*pud)) {
-			if (pud_huge(*pud)) {
+			if (kvm_pud_huge(kvm, *pud)) {
 				pud_t old_pud = *pud;
 
 				pud_clear(pud);
@@ -294,9 +294,9 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	phys_addr_t addr = start, end = start + size;
 	phys_addr_t next;
 
-	pgd = pgdp + kvm_pgd_index(addr);
+	pgd = pgdp + kvm_pgd_index(kvm, addr);
 	do {
-		next = kvm_pgd_addr_end(addr, end);
+		next = kvm_pgd_addr_end(kvm, addr, end);
 		if (!pgd_none(*pgd))
 			unmap_puds(kvm, pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
@@ -322,7 +322,7 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
 
 	pmd = pmd_offset(pud, addr);
 	do {
-		next = kvm_pmd_addr_end(addr, end);
+		next = kvm_pmd_addr_end(kvm, addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd))
 				kvm_flush_dcache_pmd(*pmd);
@@ -340,9 +340,9 @@ static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
 
 	pud = pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(addr, end);
+		next = kvm_pud_addr_end(kvm, addr, end);
 		if (!pud_none(*pud)) {
-			if (pud_huge(*pud))
+			if (kvm_pud_huge(kvm, *pud))
 				kvm_flush_dcache_pud(*pud);
 			else
 				stage2_flush_pmds(kvm, pud, addr, next);
@@ -358,9 +358,9 @@ static void stage2_flush_memslot(struct kvm *kvm,
 	phys_addr_t next;
 	pgd_t *pgd;
 
-	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
 	do {
-		next = kvm_pgd_addr_end(addr, end);
+		next = kvm_pgd_addr_end(kvm, addr, end);
 		stage2_flush_puds(kvm, pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
 }
@@ -802,7 +802,7 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	pgd_t *pgd;
 	pud_t *pud;
 
-	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
 	if (WARN_ON(pgd_none(*pgd))) {
 		if (!cache)
 			return NULL;
@@ -1040,7 +1040,7 @@ static void stage2_wp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
 	pmd = pmd_offset(pud, addr);
 
 	do {
-		next = kvm_pmd_addr_end(addr, end);
+		next = kvm_pmd_addr_end(NULL, addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd)) {
 				if (!kvm_s2pmd_readonly(pmd))
@@ -1067,10 +1067,10 @@ static void  stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
 
 	pud = pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(addr, end);
+		next = kvm_pud_addr_end(NULL, addr, end);
 		if (!pud_none(*pud)) {
 			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(kvm_pud_huge(*pud));
+			BUG_ON(kvm_pud_huge(NULL, *pud));
 			stage2_wp_pmds(pud, addr, next);
 		}
 	} while (pud++, addr = next, addr != end);
@@ -1087,7 +1087,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 	pgd_t *pgd;
 	phys_addr_t next;
 
-	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
 	do {
 		/*
 		 * Release kvm_mmu_lock periodically if the memory region is
@@ -1099,7 +1099,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
 			cond_resched_lock(&kvm->mmu_lock);
 
-		next = kvm_pgd_addr_end(addr, end);
+		next = kvm_pgd_addr_end(kvm, addr, end);
 		if (pgd_present(*pgd))
 			stage2_wp_puds(pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a01d87d..416ca23 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -71,6 +71,7 @@
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
+#include <linux/hugetlb.h>
 
 #define KERN_TO_HYP(kva)	((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
 
@@ -141,11 +142,28 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
 }
 
-#define kvm_pud_huge(_x)	pud_huge(_x)
+static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
+{
+	return pud_huge(pud);
+}
+
+static inline phys_addr_t
+kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return	pgd_addr_end(addr, end);
+}
+
+static inline phys_addr_t
+kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return	pud_addr_end(addr, end);
+}
 
-#define kvm_pgd_addr_end(addr, end)	pgd_addr_end(addr, end)
-#define kvm_pud_addr_end(addr, end)	pud_addr_end(addr, end)
-#define kvm_pmd_addr_end(addr, end)	pmd_addr_end(addr, end)
+static inline phys_addr_t
+kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return	pmd_addr_end(addr, end);
+}
 
 /*
  * In the case where PGDIR_SHIFT is larger than KVM_PHYS_SHIFT, we can address
@@ -161,7 +179,10 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 #endif
 #define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)
 
-#define kvm_pgd_index(addr)	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
+{
+	return (addr >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1);
+}
 
 /*
  * If we are concatenating first level stage-2 page tables, we would have less
-- 
1.7.9.5


* [RFC PATCH 07/12] kvm: arm: Introduce stage2 page table helpers
  2016-03-14 16:52 ` Suzuki K Poulose
@ 2016-03-14 16:53   ` Suzuki K Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC (permalink / raw)
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

So far we have used the host page table helpers for most of the
table walks, except for a few kvm_ wrappers. This patch introduces
wrappers for all the helpers which could potentially differ for the
underlying page table. It also introduces the corresponding stage2
helpers for arm32.

These will be plugged into the core hypervisor code once we have
the arm64 counterparts implemented. On 32bit ARM, we continue to
use the host helpers under the hood, except for kvm_p.d_addr_end,
which need to handle 64bit addresses. It also converts
kvm_p.d_table_empty() to static inline for consistency with the
arm64 versions.
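
For reference, the 64bit requirement boils down to the boundary
arithmetic. A minimal, self-contained sketch of the pattern (the
S2_PMD_* constants and s2_pmd_addr_end() are illustrative names,
not part of this patch):

typedef unsigned long long phys_addr_t;	/* 64bit even on 32bit ARM */

#define S2_PMD_SHIFT	21			/* assuming 2MB sections */
#define S2_PMD_SIZE	(1ULL << S2_PMD_SHIFT)
#define S2_PMD_MASK	(~(S2_PMD_SIZE - 1))

/*
 * A 32bit 'unsigned long' would truncate a 40bit IPA, so the
 * arguments and the boundary arithmetic are 64bit. Comparing
 * 'boundary - 1 < end - 1' keeps the result correct even when
 * 'boundary' wraps to 0 at the very top of the address space.
 */
static inline phys_addr_t s2_pmd_addr_end(phys_addr_t addr, phys_addr_t end)
{
	phys_addr_t boundary = (addr + S2_PMD_SIZE) & S2_PMD_MASK;

	return (boundary - 1 < end - 1) ? boundary : end;
}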

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h        |   95 +++++++++++++++++++++++++++++----
 arch/arm/include/asm/stage2_pgtable.h |   55 +++++++++++++++++++
 2 files changed, 141 insertions(+), 9 deletions(-)
 create mode 100644 arch/arm/include/asm/stage2_pgtable.h

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 17c6781..e670afa 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -48,6 +48,7 @@
 #include <linux/hugetlb.h>
 #include <asm/cacheflush.h>
 #include <asm/pgalloc.h>
+#include <asm/stage2_pgtable.h>
 
 int create_hyp_mappings(void *from, void *to);
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
@@ -141,26 +142,90 @@ static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
 	return pud_huge(pud);
 }
 
+static inline int kvm_pgd_none(struct kvm *kvm, pgd_t pgd)
+{
+	return pgd_none(pgd);
+}
+
+static inline void kvm_pgd_clear(struct kvm *kvm, pgd_t *pgdp)
+{
+	pgd_clear(pgdp);
+}
+
+static inline int kvm_pgd_present(struct kvm *kvm, pgd_t pgd)
+{
+	return pgd_present(pgd);
+}
+
+static inline void
+kvm_pgd_populate(struct kvm *kvm, struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
+{
+	pgd_populate(mm, pgd, pud);
+}
+
+static inline pud_t *
+kvm_pud_offset(struct kvm *kvm, pgd_t *pgd, phys_addr_t address)
+{
+	return pud_offset(pgd, address);
+}
+
+static inline int kvm_pud_none(struct kvm *kvm, pud_t pud)
+{
+	return pud_none(pud);
+}
+
+static inline void kvm_pud_clear(struct kvm *kvm, pud_t *pudp)
+{
+	pud_clear(pudp);
+}
+
+static inline int kvm_pud_present(struct kvm *kvm, pud_t pud)
+{
+	return pud_present(pud);
+}
 
-/* Open coded p*d_addr_end that can deal with 64bit addresses */
+static inline void kvm_pud_free(struct kvm *kvm, struct mm_struct *mm, pud_t *pudp)
+{
+	pud_free(mm, pudp);
+}
+
+static inline void
+kvm_pud_populate(struct kvm *kvm, struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	pud_populate(mm, pud, pmd);
+}
+
+static inline pmd_t *
+kvm_pmd_offset(struct kvm *kvm, pud_t *pud, phys_addr_t address)
+{
+	return pmd_offset(pud, address);
+}
+
+static inline void kvm_pmd_free(struct kvm *kvm, struct mm_struct *mm, pmd_t *pmd)
+{
+	pmd_free(mm, pmd);
+}
+
+/*
+ * stage2_p.d_addr_end can handle 64bit address, use them explicitly for
+ * hyp and stage2.
+ */
 static inline phys_addr_t
 kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 {
-	phys_addr_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;
-	return (boundary - 1 < end - 1) ? boundary : end;
+	return stage2_pgd_addr_end(addr, end);
 }
 
 static inline phys_addr_t
 kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 {
-	return end;
+	return stage2_pud_addr_end(addr, end);
 }
 
 static inline phys_addr_t
 kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 {
-	phys_addr_t boundary = (addr + PMD_SIZE) & PMD_MASK;
-	return (boundary - 1 < end - 1) ? boundary : end;
+	return stage2_pmd_addr_end(addr, end);
 }
 
 static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
@@ -174,9 +239,21 @@ static inline bool kvm_page_empty(void *ptr)
 	return page_count(ptr_page) == 1;
 }
 
-#define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep)
-#define kvm_pmd_table_empty(kvm, pmdp) kvm_page_empty(pmdp)
-#define kvm_pud_table_empty(kvm, pudp) (0)
+static inline bool kvm_pte_table_empty(struct kvm *kvm, pte_t *ptep)
+{
+	return kvm_page_empty(ptep);
+}
+
+static inline bool kvm_pmd_table_empty(struct kvm *kvm, pmd_t *pmdp)
+{
+	return kvm_page_empty(pmdp);
+}
+
+static inline bool kvm_pud_table_empty(struct kvm *kvm, pud_t *pudp)
+{
+	return 0;
+}
+
 
 static inline void *kvm_get_hwpgd(struct kvm *kvm)
 {
diff --git a/arch/arm/include/asm/stage2_pgtable.h b/arch/arm/include/asm/stage2_pgtable.h
new file mode 100644
index 0000000..91c1a63
--- /dev/null
+++ b/arch/arm/include/asm/stage2_pgtable.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ *
+ * stage2 page table helpers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM_S2_PGTABLE_H_
+#define __ARM_S2_PGTABLE_H_
+
+#define stage2_pgd_none(pgd)				pgd_none(pgd)
+#define stage2_pgd_clear(pgd)				pgd_clear(pgd)
+#define stage2_pgd_present(pgd)				pgd_present(pgd)
+#define stage2_pgd_populate(mm, pgd, pud)		pgd_populate(mm, pgd, pud)
+#define stage2_pud_offset(pgd, address)			pud_offset(pgd, address)
+#define stage2_pud_free(mm, pud)			pud_free(mm, pud)
+
+#define stage2_pud_none(pud)				pud_none(pud)
+#define stage2_pud_clear(pud)				pud_clear(pud)
+#define stage2_pud_present(pud)				pud_present(pud)
+#define stage2_pud_populate(mm, pud, pmd)		pud_populate(mm, pud, pmd)
+#define stage2_pmd_offset(pud, address)			pmd_offset(pud, address)
+#define stage2_pmd_free(mm, pmd)			pmd_free(mm, pmd)
+
+#define stage2_pud_huge(pud)				pud_huge(pud)
+
+/* Open coded p*d_addr_end that can deal with 64bit addresses */
+static inline phys_addr_t stage2_pgd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;
+	return (boundary - 1 < end - 1) ? boundary : end;
+}
+
+#define stage2_pud_addr_end(addr, end)		(end)
+
+static inline phys_addr_t stage2_pmd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t boundary = (addr + PMD_SIZE) & PMD_MASK;
+	return (boundary - 1 < end - 1) ? boundary : end;
+}
+
+#define stage2_pgd_index(addr)				pgd_index(addr)
+
+#endif	/* __ARM_S2_PGTABLE_H_ */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 08/12] kvm: arm64: Introduce stage2 page table helpers
  2016-03-14 16:52 ` Suzuki K Poulose
@ 2016-03-14 16:53   ` Suzuki K Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC (permalink / raw)
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

Introduce arm64 kvm wrappers for the page table walkers and the
corresponding stage2 table helpers. On arm64 we could have a
different number of page table levels for the hyp and stage2
translations. Hence, the wrappers switch between the hyp and
stage2 tables depending on the 'kvm' instance passed to them. For
now, the stage2 helpers fall back to those of the host, since we
still have fake page table levels to match the stage2 page table
levels with those of the host. The hypervisor code will switch to
using the kvm_ wrappers in subsequent patches.

All the stage2 table related definitions are moved to
asm/stage2_pgtable.h.
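
As a quick sketch of the convention these wrappers implement (the
kvm_ helper names are from this series; walk_pgd_range() and
handle_puds() are made up for illustration): a NULL 'kvm' selects
the hyp (host) helpers, a non-NULL 'kvm' the stage2 ones, so a
single walker can serve both tables:

static void walk_pgd_range(struct kvm *kvm, pgd_t *pgdp,
			   phys_addr_t addr, phys_addr_t end)
{
	pgd_t *pgd = pgdp + kvm_pgd_index(kvm, addr);
	phys_addr_t next;

	do {
		/* kvm ? stage2_pgd_addr_end() : pgd_addr_end() */
		next = kvm_pgd_addr_end(kvm, addr, end);
		if (!kvm_pgd_none(kvm, *pgd))
			handle_puds(kvm, pgd, addr, next);
	} while (pgd++, addr = next, addr != end);
}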

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/kvm_mmu.h        |  148 ++++++++++++++++++++++---------
 arch/arm64/include/asm/stage2_pgtable.h |   85 ++++++++++++++++++
 2 files changed, 193 insertions(+), 40 deletions(-)
 create mode 100644 arch/arm64/include/asm/stage2_pgtable.h

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 416ca23..55cde87 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -82,6 +82,8 @@
 #define KVM_PHYS_SIZE	(1UL << KVM_PHYS_SHIFT)
 #define KVM_PHYS_MASK	(KVM_PHYS_SIZE - 1UL)
 
+#include <asm/stage2_pgtable.h>
+
 int create_hyp_mappings(void *from, void *to);
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
@@ -144,59 +146,114 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 
 static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
 {
-	return pud_huge(pud);
+	return kvm ? stage2_pud_huge(pud) : pud_huge(pud);
+}
+
+static inline int kvm_pgd_none(struct kvm *kvm, pgd_t pgd)
+{
+	return kvm ? stage2_pgd_none(pgd) : pgd_none(pgd);
+}
+
+static inline void kvm_pgd_clear(struct kvm *kvm, pgd_t *pgdp)
+{
+	if (kvm)
+		stage2_pgd_clear(pgdp);
+	else
+		pgd_clear(pgdp);
+}
+
+static inline int kvm_pgd_present(struct kvm *kvm, pgd_t pgd)
+{
+	return kvm ? stage2_pgd_present(pgd) : pgd_present(pgd);
+}
+
+static inline void
+kvm_pgd_populate(struct kvm *kvm, struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
+{
+	if (kvm)
+		stage2_pgd_populate(mm, pgd, pud);
+	else
+		pgd_populate(mm, pgd, pud);
+}
+
+static inline pud_t *
+kvm_pud_offset(struct kvm *kvm, pgd_t *pgd, phys_addr_t address)
+{
+	return kvm ? stage2_pud_offset(pgd, address) : pud_offset(pgd, address);
+}
+
+static inline void kvm_pud_free(struct kvm *kvm, struct mm_struct *mm, pud_t *pudp)
+{
+	if (kvm)
+		stage2_pud_free(mm, pudp);
+	else
+		pud_free(mm, pudp);
+}
+
+static inline int kvm_pud_none(struct kvm *kvm, pud_t pud)
+{
+	return kvm ? stage2_pud_none(pud) : pud_none(pud);
+}
+
+static inline void kvm_pud_clear(struct kvm *kvm, pud_t *pudp)
+{
+	if (kvm)
+		stage2_pud_clear(pudp);
+	else
+		pud_clear(pudp);
+}
+
+static inline int kvm_pud_present(struct kvm *kvm, pud_t pud)
+{
+	return kvm ? stage2_pud_present(pud) : pud_present(pud);
+}
+
+static inline void
+kvm_pud_populate(struct kvm *kvm, struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	if (kvm)
+		stage2_pud_populate(mm, pud, pmd);
+	else
+		pud_populate(mm, pud, pmd);
+}
+
+static inline pmd_t *
+kvm_pmd_offset(struct kvm *kvm, pud_t *pud, phys_addr_t address)
+{
+	return kvm ? stage2_pmd_offset(pud, address) : pmd_offset(pud, address);
+}
+
+static inline void kvm_pmd_free(struct kvm *kvm, struct mm_struct *mm, pmd_t *pmd)
+{
+	if (kvm)
+		stage2_pmd_free(mm, pmd);
+	else
+		pmd_free(mm, pmd);
 }
 
 static inline phys_addr_t
 kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 {
-	return	pgd_addr_end(addr, end);
+	return	kvm ? stage2_pgd_addr_end(addr, end) : pgd_addr_end(addr, end);
 }
 
 static inline phys_addr_t
 kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 {
-	return	pud_addr_end(addr, end);
+	return	kvm ? stage2_pud_addr_end(addr, end) : pud_addr_end(addr, end);
 }
 
 static inline phys_addr_t
 kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 {
-	return	pmd_addr_end(addr, end);
+	return kvm ? stage2_pmd_addr_end(addr, end) : pmd_addr_end(addr, end);
 }
 
-/*
- * In the case where PGDIR_SHIFT is larger than KVM_PHYS_SHIFT, we can address
- * the entire IPA input range with a single pgd entry, and we would only need
- * one pgd entry.  Note that in this case, the pgd is actually not used by
- * the MMU for Stage-2 translations, but is merely a fake pgd used as a data
- * structure for the kernel pgtable macros to work.
- */
-#if PGDIR_SHIFT > KVM_PHYS_SHIFT
-#define PTRS_PER_S2_PGD_SHIFT	0
-#else
-#define PTRS_PER_S2_PGD_SHIFT	(KVM_PHYS_SHIFT - PGDIR_SHIFT)
-#endif
-#define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)
-
 static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
 {
-	return (addr >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1);
+	return kvm ? stage2_pgd_index(addr) : pgd_index(addr);
 }
 
-/*
- * If we are concatenating first level stage-2 page tables, we would have less
- * than or equal to 16 pointers in the fake PGD, because that's what the
- * architecture allows.  In this case, (4 - CONFIG_PGTABLE_LEVELS)
- * represents the first level for the host, and we add 1 to go to the next
- * level (which uses contatenation) for the stage-2 tables.
- */
-#if PTRS_PER_S2_PGD <= 16
-#define KVM_PREALLOC_LEVEL	(4 - CONFIG_PGTABLE_LEVELS + 1)
-#else
-#define KVM_PREALLOC_LEVEL	(0)
-#endif
-
 static inline void *kvm_get_hwpgd(struct kvm *kvm)
 {
 	pgd_t *pgd = kvm->arch.pgd;
@@ -269,22 +326,33 @@ static inline bool kvm_page_empty(void *ptr)
 	return page_count(ptr_page) == 1;
 }
 
-#define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep)
-
 #ifdef __PAGETABLE_PMD_FOLDED
-#define kvm_pmd_table_empty(kvm, pmdp) (0)
+#define hyp_pmd_table_empty(pmdp)	(0)
 #else
-#define kvm_pmd_table_empty(kvm, pmdp) \
-	(kvm_page_empty(pmdp) && (!(kvm) || KVM_PREALLOC_LEVEL < 2))
+#define hyp_pmd_table_empty(pmdp)	kvm_page_empty(pmdp)
 #endif
 
 #ifdef __PAGETABLE_PUD_FOLDED
-#define kvm_pud_table_empty(kvm, pudp) (0)
+#define hyp_pud_table_empty(pudp)	(0)
 #else
-#define kvm_pud_table_empty(kvm, pudp) \
-	(kvm_page_empty(pudp) && (!(kvm) || KVM_PREALLOC_LEVEL < 1))
+#define hyp_pud_table_empty(pudp)	kvm_page_empty(pudp)
 #endif
 
+static inline bool kvm_pte_table_empty(struct kvm *kvm, pte_t *ptep)
+{
+	return kvm_page_empty(ptep);
+}
+
+static inline bool kvm_pmd_table_empty(struct kvm *kvm, pmd_t *pmdp)
+{
+	return kvm ? stage2_pmd_table_empty(pmdp) : hyp_pmd_table_empty(pmdp);
+}
+
+static inline bool kvm_pud_table_empty(struct kvm *kvm, pud_t *pudp)
+{
+	return kvm ? stage2_pud_table_empty(pudp) : hyp_pud_table_empty(pudp);
+}
+
 
 struct kvm;
 
diff --git a/arch/arm64/include/asm/stage2_pgtable.h b/arch/arm64/include/asm/stage2_pgtable.h
new file mode 100644
index 0000000..95496e6
--- /dev/null
+++ b/arch/arm64/include/asm/stage2_pgtable.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ *
+ * stage2 page table helpers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_S2_PGTABLE_H_
+#define __ARM64_S2_PGTABLE_H_
+
+#include <asm/pgtable.h>
+
+/*
+ * In the case where PGDIR_SHIFT is larger than KVM_PHYS_SHIFT, we can address
+ * the entire IPA input range with a single pgd entry, and we would only need
+ * one pgd entry.  Note that in this case, the pgd is actually not used by
+ * the MMU for Stage-2 translations, but is merely a fake pgd used as a data
+ * structure for the kernel pgtable macros to work.
+ */
+#if PGDIR_SHIFT > KVM_PHYS_SHIFT
+#define PTRS_PER_S2_PGD_SHIFT	0
+#else
+#define PTRS_PER_S2_PGD_SHIFT	(KVM_PHYS_SHIFT - PGDIR_SHIFT)
+#endif
+#define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)
+
+/*
+ * If we are concatenating first level stage-2 page tables, we would have less
+ * than or equal to 16 pointers in the fake PGD, because that's what the
+ * architecture allows.  In this case, (4 - CONFIG_PGTABLE_LEVELS)
+ * represents the first level for the host, and we add 1 to go to the next
+ * level (which uses contatenation) for the stage-2 tables.
+ */
+#if PTRS_PER_S2_PGD <= 16
+#define KVM_PREALLOC_LEVEL	(4 - CONFIG_PGTABLE_LEVELS + 1)
+#else
+#define KVM_PREALLOC_LEVEL	(0)
+#endif
+
+#define stage2_pgd_none(pgd)				pgd_none(pgd)
+#define stage2_pgd_clear(pgd)				pgd_clear(pgd)
+#define stage2_pgd_present(pgd)				pgd_present(pgd)
+#define stage2_pgd_populate(mm, pgd, pud)		pgd_populate(mm, pgd, pud)
+#define stage2_pud_offset(pgd, address)			pud_offset(pgd, address)
+#define stage2_pud_free(mm, pud)			pud_free(mm, pud)
+
+#define stage2_pud_none(pud)				pud_none(pud)
+#define stage2_pud_clear(pud)				pud_clear(pud)
+#define stage2_pud_present(pud)				pud_present(pud)
+#define stage2_pud_populate(mm, pud, pmd)		pud_populate(mm, pud, pmd)
+#define stage2_pmd_offset(pud, address)			pmd_offset(pud, address)
+#define stage2_pmd_free(mm, pmd)			pmd_free(mm, pmd)
+
+#define stage2_pud_huge(pud)				pud_huge(pud)
+
+#define stage2_pgd_addr_end(address, end)		pgd_addr_end(address, end)
+#define stage2_pud_addr_end(address, end)		pud_addr_end(address, end)
+#define stage2_pmd_addr_end(address, end)		pmd_addr_end(address, end)
+
+#ifdef __PAGETABLE_PMD_FOLDED
+#define stage2_pmd_table_empty(pmdp)			(0)
+#else
+#define stage2_pmd_table_empty(pmdp)			((KVM_PREALLOC_LEVEL < 2) && kvm_page_empty(pmdp))
+#endif
+
+#ifdef __PAGETABLE_PUD_FOLDED
+#define stage2_pud_table_empty(pudp)			(0)
+#else
+#define stage2_pud_table_empty(pudp)			((KVM_PREALLOC_LEVEL < 1) && kvm_page_empty(pudp))
+#endif
+
+#define stage2_pgd_index(addr)				(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+
+#endif	/* __ARM64_S2_PGTABLE_H_ */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 09/12] kvm-arm: Switch to kvm pagetable helpers
  2016-03-14 16:52 ` Suzuki K Poulose
@ 2016-03-14 16:53   ` Suzuki K Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC (permalink / raw)
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

Now that we have the kvm_ wrappers for the page table walks, switch
to using them everywhere. Also, use the explicit page table
accessors (stage2_ vs. hyp) wherever we know we are dealing with
only one particular table.
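
The rule of thumb, sketched below (abridged; stage2_pgd_index(),
stage2_pgd_addr_end() and stage2_flush_puds() are from this
series, the function name is made up): paths shared between the
hyp and stage2 tables keep the kvm_ wrappers, while stage2-only
walkers drop the 'kvm ? :' dispatch and use the stage2_ accessors
directly:

/*
 * Stage2-only walker: the table is known, so index and iterate
 * with the explicit stage2_ accessors (no NULL-kvm tricks).
 */
static void stage2_flush_range_sketch(struct kvm *kvm,
				      phys_addr_t addr, phys_addr_t end)
{
	pgd_t *pgd = kvm->arch.pgd + stage2_pgd_index(addr);
	phys_addr_t next;

	do {
		next = stage2_pgd_addr_end(addr, end);
		stage2_flush_puds(kvm, pgd, addr, next);
	} while (pgd++, addr = next, addr != end);
}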

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/kvm/mmu.c |   64 ++++++++++++++++++++++++++--------------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 22b4c99..8568790 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -155,20 +155,20 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
 
 static void clear_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
 {
-	pud_t *pud_table __maybe_unused = pud_offset(pgd, 0);
-	pgd_clear(pgd);
+	pud_t *pud_table __maybe_unused = kvm_pud_offset(kvm, pgd, 0);
+	kvm_pgd_clear(kvm, pgd);
 	kvm_tlb_flush_vmid_ipa(kvm, addr);
-	pud_free(NULL, pud_table);
+	kvm_pud_free(kvm, NULL, pud_table);
 	put_page(virt_to_page(pgd));
 }
 
 static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
 {
-	pmd_t *pmd_table = pmd_offset(pud, 0);
+	pmd_t *pmd_table = kvm_pmd_offset(kvm, pud, 0);
 	VM_BUG_ON(kvm_pud_huge(kvm, *pud));
-	pud_clear(pud);
+	kvm_pud_clear(kvm, pud);
 	kvm_tlb_flush_vmid_ipa(kvm, addr);
-	pmd_free(NULL, pmd_table);
+	kvm_pmd_free(kvm, NULL, pmd_table);
 	put_page(virt_to_page(pud));
 }
 
@@ -234,7 +234,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
 	phys_addr_t next, start_addr = addr;
 	pmd_t *pmd, *start_pmd;
 
-	start_pmd = pmd = pmd_offset(pud, addr);
+	start_pmd = pmd = kvm_pmd_offset(kvm, pud, addr);
 	do {
 		next = kvm_pmd_addr_end(kvm, addr, end);
 		if (!pmd_none(*pmd)) {
@@ -263,14 +263,14 @@ static void unmap_puds(struct kvm *kvm, pgd_t *pgd,
 	phys_addr_t next, start_addr = addr;
 	pud_t *pud, *start_pud;
 
-	start_pud = pud = pud_offset(pgd, addr);
+	start_pud = pud = kvm_pud_offset(kvm, pgd, addr);
 	do {
 		next = kvm_pud_addr_end(kvm, addr, end);
-		if (!pud_none(*pud)) {
+		if (!kvm_pud_none(kvm, *pud)) {
 			if (kvm_pud_huge(kvm, *pud)) {
 				pud_t old_pud = *pud;
 
-				pud_clear(pud);
+				kvm_pud_clear(kvm, pud);
 				kvm_tlb_flush_vmid_ipa(kvm, addr);
 
 				kvm_flush_dcache_pud(old_pud);
@@ -297,7 +297,7 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	pgd = pgdp + kvm_pgd_index(kvm, addr);
 	do {
 		next = kvm_pgd_addr_end(kvm, addr, end);
-		if (!pgd_none(*pgd))
+		if (!kvm_pgd_none(kvm, *pgd))
 			unmap_puds(kvm, pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
 }
@@ -320,9 +320,9 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
 	pmd_t *pmd;
 	phys_addr_t next;
 
-	pmd = pmd_offset(pud, addr);
+	pmd = stage2_pmd_offset(pud, addr);
 	do {
-		next = kvm_pmd_addr_end(kvm, addr, end);
+		next = stage2_pmd_addr_end(addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd))
 				kvm_flush_dcache_pmd(*pmd);
@@ -338,11 +338,11 @@ static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
 	pud_t *pud;
 	phys_addr_t next;
 
-	pud = pud_offset(pgd, addr);
+	pud = stage2_pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(kvm, addr, end);
-		if (!pud_none(*pud)) {
-			if (kvm_pud_huge(kvm, *pud))
+		next = stage2_pud_addr_end(addr, end);
+		if (!stage2_pud_none(*pud)) {
+			if (stage2_pud_huge(*pud))
 				kvm_flush_dcache_pud(*pud);
 			else
 				stage2_flush_pmds(kvm, pud, addr, next);
@@ -358,9 +358,9 @@ static void stage2_flush_memslot(struct kvm *kvm,
 	phys_addr_t next;
 	pgd_t *pgd;
 
-	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
+	pgd = kvm->arch.pgd + stage2_pgd_index(addr);
 	do {
-		next = kvm_pgd_addr_end(kvm, addr, end);
+		next = stage2_pgd_addr_end(addr, end);
 		stage2_flush_puds(kvm, pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
 }
@@ -803,15 +803,15 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	pud_t *pud;
 
 	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
-	if (WARN_ON(pgd_none(*pgd))) {
+	if (WARN_ON(stage2_pgd_none(*pgd))) {
 		if (!cache)
 			return NULL;
 		pud = mmu_memory_cache_alloc(cache);
-		pgd_populate(NULL, pgd, pud);
+		stage2_pgd_populate(NULL, pgd, pud);
 		get_page(virt_to_page(pgd));
 	}
 
-	return pud_offset(pgd, addr);
+	return stage2_pud_offset(pgd, addr);
 }
 
 static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
@@ -821,15 +821,15 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	pmd_t *pmd;
 
 	pud = stage2_get_pud(kvm, cache, addr);
-	if (pud_none(*pud)) {
+	if (stage2_pud_none(*pud)) {
 		if (!cache)
 			return NULL;
 		pmd = mmu_memory_cache_alloc(cache);
-		pud_populate(NULL, pud, pmd);
+		stage2_pud_populate(NULL, pud, pmd);
 		get_page(virt_to_page(pud));
 	}
 
-	return pmd_offset(pud, addr);
+	return stage2_pmd_offset(pud, addr);
 }
 
 static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
@@ -1037,10 +1037,10 @@ static void stage2_wp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
 	pmd_t *pmd;
 	phys_addr_t next;
 
-	pmd = pmd_offset(pud, addr);
+	pmd = stage2_pmd_offset(pud, addr);
 
 	do {
-		next = kvm_pmd_addr_end(NULL, addr, end);
+		next = stage2_pmd_addr_end(addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd)) {
 				if (!kvm_s2pmd_readonly(pmd))
@@ -1065,12 +1065,12 @@ static void  stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
 	pud_t *pud;
 	phys_addr_t next;
 
-	pud = pud_offset(pgd, addr);
+	pud = stage2_pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(NULL, addr, end);
-		if (!pud_none(*pud)) {
+		next = stage2_pud_addr_end(addr, end);
+		if (!stage2_pud_none(*pud)) {
 			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(kvm_pud_huge(NULL, *pud));
+			BUG_ON(stage2_pud_huge(*pud));
 			stage2_wp_pmds(pud, addr, next);
 		}
 	} while (pud++, addr = next, addr != end);
@@ -1100,7 +1100,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 			cond_resched_lock(&kvm->mmu_lock);
 
 		next = kvm_pgd_addr_end(kvm, addr, end);
-		if (pgd_present(*pgd))
+		if (stage2_pgd_present(*pgd))
 			stage2_wp_puds(pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
 }
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 10/12] kvm: arm64: Get rid of fake page table levels
  2016-03-14 16:52 ` Suzuki K Poulose
@ 2016-03-14 16:53   ` Suzuki K Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC (permalink / raw)
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

On arm64, the hardware mandates concatenation of up to 16 tables
at the entry level for stage2 translations. This can lead to a
reduced number of translation levels compared to the normal
(stage1) table. Also, since the IPA (40bit) is smaller than some
of the supported VA_BITS values (e.g, 48bit), the stage-1 and
stage-2 tables could have a different number of levels. To work
around this, so far we have been using a fake software page table
level, not known to the hardware. But with 16K translations, there
could be up to 2 fake software levels, which complicates the code.
Hence, we want to get rid of this hack.

Now that we have explicit accessors for the hyp vs stage2 page
tables, define the stage2 walker helpers based on the actual table
used by the hardware.

Once we know the number of translation levels used by the hardware,
it is merely a matter of defining each helper based on whether that
particular level is folded or not.

Some facts before we calculate the translation levels:

1) The smallest page size supported on arm64 is 4K.
2) The number of bits resolved at any intermediate page table level
   is (PAGE_SHIFT - 3).
Together, these imply that the minimum number of bits required for
a level change is 9.

Since we can concatenate up to 16 tables at the stage2 entry level,
the total number of page table levels used by the hardware for
resolving N bits is the same as that for (N - 4) bits (with
concatenation), as there cannot be another level boundary in the
range (N - 4, N).

Hence, we have

 STAGE2_PGTABLE_LEVELS = PGTABLE_LEVELS(KVM_PHYS_SHIFT - 4)
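
To make the arithmetic concrete (assuming the host definition
ARM64_HW_PGTABLE_LEVELS(bits) == ((bits) - 4) / (PAGE_SHIFT - 3)),
with KVM_PHYS_SHIFT = 40 we size the stage2 tables for 36 bits:

  4K pages (PAGE_SHIFT = 12): (36 - 4) / 9  = 3 levels
 16K pages (PAGE_SHIFT = 14): (36 - 4) / 11 = 2 levels
 64K pages (PAGE_SHIFT = 16): (36 - 4) / 13 = 2 levels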

With the current IPA limit (40bit), for all supported translations
and VA_BITS, we have the following condition:

 CONFIG_PGTABLE_LEVELS >= STAGE2_PGTABLE_LEVELS.

So, for example, if the PUD level is present in stage2, it is also
present in the hyp (host) table. Hence, we fall back to the host
definition when we find that a level is not folded; otherwise we
redefine it accordingly.

A build time check is added to make sure the above condition holds.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---

Removing this restriction is straightforward: re-arrange the host
page table accessor macros to their ___p.d_xxx versions and use
them instead of the p.d_xxx ones. To make the review easier, I
have kept it as it is.

---
 arch/arm64/include/asm/kvm_mmu.h              |   64 +------------
 arch/arm64/include/asm/stage2_pgtable-nopmd.h |   26 ++++++
 arch/arm64/include/asm/stage2_pgtable-nopud.h |   23 +++++
 arch/arm64/include/asm/stage2_pgtable.h       |  119 +++++++++++++++++--------
 4 files changed, 135 insertions(+), 97 deletions(-)
 create mode 100644 arch/arm64/include/asm/stage2_pgtable-nopmd.h
 create mode 100644 arch/arm64/include/asm/stage2_pgtable-nopud.h

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 55cde87..a6f9846 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -42,18 +42,6 @@
  */
 #define TRAMPOLINE_VA		(HYP_PAGE_OFFSET_MASK & PAGE_MASK)
 
-/*
- * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation
- * levels in addition to the PGD and potentially the PUD which are
- * pre-allocated (we pre-allocate the fake PGD and the PUD when the Stage-2
- * tables use one level of tables less than the kernel.
- */
-#ifdef CONFIG_ARM64_64K_PAGES
-#define KVM_MMU_CACHE_MIN_PAGES	1
-#else
-#define KVM_MMU_CACHE_MIN_PAGES	2
-#endif
-
 #ifdef __ASSEMBLY__
 
 /*
@@ -256,69 +244,21 @@ static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
 
 static inline void *kvm_get_hwpgd(struct kvm *kvm)
 {
-	pgd_t *pgd = kvm->arch.pgd;
-	pud_t *pud;
-
-	if (KVM_PREALLOC_LEVEL == 0)
-		return pgd;
-
-	pud = pud_offset(pgd, 0);
-	if (KVM_PREALLOC_LEVEL == 1)
-		return pud;
-
-	BUG_ON(KVM_PREALLOC_LEVEL != 2);
-	return pmd_offset(pud, 0);
+	return kvm->arch.pgd;
 }
 
 static inline unsigned int kvm_get_hwpgd_size(void)
 {
-	if (KVM_PREALLOC_LEVEL > 0)
-		return PTRS_PER_S2_PGD * PAGE_SIZE;
 	return PTRS_PER_S2_PGD * sizeof(pgd_t);
 }
 
-/*
- * Allocate fake pgd for the host kernel page table macros to work.
- * This is not used by the hardware and we have no alignment
- * requirement for this allocation.
- */
 static inline pgd_t *kvm_setup_fake_pgd(pgd_t *hwpgd)
 {
-	int i;
-	pgd_t *pgd;
-
-	if (!KVM_PREALLOC_LEVEL)
-		return hwpgd;
-
-	/*
-	 * When KVM_PREALLOC_LEVEL==2, we allocate a single page for
-	 * the PMD and the kernel will use folded pud.
-	 * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD
-	 * pages.
-	 */
-
-	pgd = kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t),
-			GFP_KERNEL | __GFP_ZERO);
-	if (!pgd)
-		return ERR_PTR(-ENOMEM);
-
-	/* Plug the HW PGD into the fake one. */
-	for (i = 0; i < PTRS_PER_S2_PGD; i++) {
-		if (KVM_PREALLOC_LEVEL == 1)
-			pgd_populate(NULL, pgd + i,
-				     (pud_t *)hwpgd + i * PTRS_PER_PUD);
-		else if (KVM_PREALLOC_LEVEL == 2)
-			pud_populate(NULL, pud_offset(pgd, 0) + i,
-				     (pmd_t *)hwpgd + i * PTRS_PER_PMD);
-	}
-
-	return pgd;
+	return hwpgd;
 }
 
 static inline void kvm_free_fake_pgd(pgd_t *pgd)
 {
-	if (KVM_PREALLOC_LEVEL > 0)
-		kfree(pgd);
 }
 static inline bool kvm_page_empty(void *ptr)
 {
diff --git a/arch/arm64/include/asm/stage2_pgtable-nopmd.h b/arch/arm64/include/asm/stage2_pgtable-nopmd.h
new file mode 100644
index 0000000..2cbd315
--- /dev/null
+++ b/arch/arm64/include/asm/stage2_pgtable-nopmd.h
@@ -0,0 +1,26 @@
+#ifndef __ARM64_S2_PGTABLE_NOPMD_H_
+#define __ARM64_S2_PGTABLE_NOPMD_H_
+
+#include <asm/stage2_pgtable-nopud.h>
+
+#define __S2_PGTABLE_PMD_FOLDED
+
+#define S2_PMD_SHIFT		S2_PUD_SHIFT
+#define S2_PTRS_PER_PMD		1
+#define S2_PMD_SIZE		(1UL << S2_PMD_SHIFT)
+#define S2_PMD_MASK		(~(S2_PMD_SIZE-1))
+
+#define stage2_pud_none(pud)			(0)
+#define stage2_pud_present(pud)			(1)
+#define stage2_pud_clear(pud)			do { } while (0)
+#define stage2_pud_populate(mm, pud, pmd)	do { } while (0)
+#define stage2_pmd_offset(pud, address)		((pmd_t *)(pud))
+
+#define stage2_pmd_free(mm, pmd)		do { } while (0)
+
+#define stage2_pmd_addr_end(addr, end)		(end)
+
+#define stage2_pud_huge(pud)			(0)
+#define stage2_pmd_table_empty(pmdp)		(0)
+
+#endif
diff --git a/arch/arm64/include/asm/stage2_pgtable-nopud.h b/arch/arm64/include/asm/stage2_pgtable-nopud.h
new file mode 100644
index 0000000..591ac53
--- /dev/null
+++ b/arch/arm64/include/asm/stage2_pgtable-nopud.h
@@ -0,0 +1,23 @@
+#ifndef __ARM64_S2_PGTABLE_NOPUD_H_
+#define __ARM64_S2_PGTABLE_NOPUD_H_
+
+#define __S2_PGTABLE_PUD_FOLDED
+
+#define S2_PUD_SHIFT		S2_PGDIR_SHIFT
+#define S2_PTRS_PER_PUD		1
+#define S2_PUD_SIZE		(_AC(1, UL) << S2_PUD_SHIFT)
+#define S2_PUD_MASK		(~(S2_PUD_SIZE-1))
+
+#define stage2_pgd_none(pgd)			(0)
+#define stage2_pgd_present(pgd)			(1)
+#define stage2_pgd_clear(pgd)			do { } while (0)
+#define stage2_pgd_populate(mm, pgd, pud)	do { } while (0)
+
+#define stage2_pud_offset(pgd, address)		((pud_t *)(pgd))
+
+#define stage2_pud_free(mm, x)			do { } while (0)
+
+#define stage2_pud_addr_end(addr, end)		(end)
+#define stage2_pud_table_empty(pudp)		(0)
+
+#endif
diff --git a/arch/arm64/include/asm/stage2_pgtable.h b/arch/arm64/include/asm/stage2_pgtable.h
index 95496e6..90f1403 100644
--- a/arch/arm64/include/asm/stage2_pgtable.h
+++ b/arch/arm64/include/asm/stage2_pgtable.h
@@ -16,38 +16,61 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef __ARM64_S2_PGTABLE_H_
-#define __ARM64_S2_PGTABLE_H_
+#ifndef __ARM64_STAGE2_PGTABLE_H_
+#define __ARM64_STAGE2_PGTABLE_H_
 
 #include <asm/pgtable.h>
 
 /*
- * In the case where PGDIR_SHIFT is larger than KVM_PHYS_SHIFT, we can address
- * the entire IPA input range with a single pgd entry, and we would only need
- * one pgd entry.  Note that in this case, the pgd is actually not used by
- * the MMU for Stage-2 translations, but is merely a fake pgd used as a data
- * structure for the kernel pgtable macros to work.
+ * The hardware mandates concatenation of up to 16 tables at the stage2
+ * entry level. The minimum number of bits resolved at any level is
+ * (PAGE_SHIFT - 3), i.e., log2(PTRS_PER_PTE). On arm64, the smallest
+ * PAGE_SIZE supported is 4K, so (PAGE_SHIFT - 3) > 4 holds for all page
+ * sizes. This implies that the total number of page table levels expected
+ * by the hardware at stage2 is the number required for (KVM_PHYS_SHIFT - 4)
+ * bits in a normal (e.g., stage-1) translation, since we cannot fit another
+ * level in the range (KVM_PHYS_SHIFT, KVM_PHYS_SHIFT - 4).
  */
-#if PGDIR_SHIFT > KVM_PHYS_SHIFT
-#define PTRS_PER_S2_PGD_SHIFT	0
-#else
-#define PTRS_PER_S2_PGD_SHIFT	(KVM_PHYS_SHIFT - PGDIR_SHIFT)
-#endif
-#define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)
+#define STAGE2_PGTABLE_LEVELS		ARM64_HW_PGTABLE_LEVELS(KVM_PHYS_SHIFT - 4)
 
 /*
- * If we are concatenating first level stage-2 page tables, we would have less
- * than or equal to 16 pointers in the fake PGD, because that's what the
- * architecture allows.  In this case, (4 - CONFIG_PGTABLE_LEVELS)
- * represents the first level for the host, and we add 1 to go to the next
- * level (which uses contatenation) for the stage-2 tables.
+ * At the moment, we do not support a combination of guest IPA and host VA_BITS
+ * where
+ * 	STAGE2_PGTABLE_LEVELS > CONFIG_PGTABLE_LEVELS
+ *
+ * We base our stage-2 page table walker helpers on this assumption and
+ * fall back to the host version of a helper wherever possible, i.e., if
+ * a particular level (e.g., PUD) is not folded at stage2, we fall back to
+ * the host version, since it is guaranteed not to be folded at the host.
+ *
+ * TODO: We could lift this limitation easily by rearranging the host level
+ * definitions to a more reusable version.
  */
-#if PTRS_PER_S2_PGD <= 16
-#define KVM_PREALLOC_LEVEL	(4 - CONFIG_PGTABLE_LEVELS + 1)
-#else
-#define KVM_PREALLOC_LEVEL	(0)
+#if STAGE2_PGTABLE_LEVELS > CONFIG_PGTABLE_LEVELS
+#error "Unsupported combination of guest IPA and host VA_BITS."
 #endif
 
+
+#define S2_PGDIR_SHIFT			ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - STAGE2_PGTABLE_LEVELS)
+#define S2_PGDIR_SIZE			(_AC(1, UL) << S2_PGDIR_SHIFT)
+#define S2_PGDIR_MASK			(~(S2_PGDIR_SIZE - 1))
+
+/* We can have concatenated tables at stage2 entry. */
+#define PTRS_PER_S2_PGD			(1 << (KVM_PHYS_SHIFT - S2_PGDIR_SHIFT))
+
+/*
+ * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation
+ * levels in addition to the PGD.
+ */
+#define KVM_MMU_CACHE_MIN_PAGES		(STAGE2_PGTABLE_LEVELS - 1)
+
+
+#if STAGE2_PGTABLE_LEVELS > 3
+
+#define S2_PUD_SHIFT			ARM64_HW_PGTABLE_LEVEL_SHIFT(1)
+#define S2_PUD_SIZE			(_AC(1, UL) << S2_PUD_SHIFT)
+#define S2_PUD_MASK			(~(S2_PUD_SIZE - 1))
+
 #define stage2_pgd_none(pgd)				pgd_none(pgd)
 #define stage2_pgd_clear(pgd)				pgd_clear(pgd)
 #define stage2_pgd_present(pgd)				pgd_present(pgd)
@@ -55,6 +78,17 @@
 #define stage2_pud_offset(pgd, address)			pud_offset(pgd, address)
 #define stage2_pud_free(mm, pud)			pud_free(mm, pud)
 
+#define stage2_pud_table_empty(pudp)			kvm_page_empty(pudp)
+
+#endif		/* STAGE2_PGTABLE_LEVELS > 3 */
+
+
+#if STAGE2_PGTABLE_LEVELS > 2
+
+#define S2_PMD_SHIFT			ARM64_HW_PGTABLE_LEVEL_SHIFT(2)
+#define S2_PMD_SIZE			(_AC(1, UL) << S2_PMD_SHIFT)
+#define S2_PMD_MASK			(~(S2_PMD_SIZE - 1))
+
 #define stage2_pud_none(pud)				pud_none(pud)
 #define stage2_pud_clear(pud)				pud_clear(pud)
 #define stage2_pud_present(pud)				pud_present(pud)
@@ -63,23 +97,38 @@
 #define stage2_pmd_free(mm, pmd)			pmd_free(mm, pmd)
 
 #define stage2_pud_huge(pud)				pud_huge(pud)
+#define stage2_pmd_table_empty(pmdp)			kvm_page_empty(pmdp)
 
-#define stage2_pgd_addr_end(address, end)		pgd_addr_end(address, end)
-#define stage2_pud_addr_end(address, end)		pud_addr_end(address, end)
-#define stage2_pmd_addr_end(address, end)		pmd_addr_end(address, end)
+#endif		/* STAGE2_PGTABLE_LEVELS > 2 */
 
-#ifdef __PGTABLE_PMD_FOLDED
-#define stage2_pmd_table_empty(pmdp)			(0)
-#else
-#define stage2_pmd_table_empty(pmdp)			((KVM_PREALLOC_LEVEL < 2) && kvm_page_empty(pmdp))
+#if STAGE2_PGTABLE_LEVELS == 2
+#include <asm/stage2_pgtable-nopmd.h>
+#elif STAGE2_PGTABLE_LEVELS == 3
+#include <asm/stage2_pgtable-nopud.h>
 #endif
 
-#ifdef __PGTABLE_PUD_FOLDED
-#define stage2_pmd_table_empty(pmdp)			(0)
-#else
-#define stage2_pud_table_empty(pudp)			((KVM_PREALLOC_LEVEL < 1) && kvm_page_empty(pudp))
+#ifndef stage2_pgd_addr_end
+#define stage2_pgd_addr_end(addr, end)							\
+({	phys_addr_t __boundary = (((addr) + S2_PGDIR_SIZE) & S2_PGDIR_MASK);		\
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);				\
+})
 #endif
 
-#define stage2_pgd_index(addr)				(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+#ifndef stage2_pud_addr_end
+#define stage2_pud_addr_end(addr, end)							\
+({	phys_addr_t __boundary = (((addr) + S2_PUD_SIZE) & S2_PUD_MASK);		\
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);				\
+})
+#endif
+
+#ifndef stage2_pmd_addr_end
+#define stage2_pmd_addr_end(addr, end)							\
+({	phys_addr_t __boundary = (((addr) + S2_PMD_SIZE) & S2_PMD_MASK);		\
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);				\
+})
+#endif
+
+
+#define stage2_pgd_index(addr)				(((addr) >> S2_PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
 
-#endif	/* __ARM64_S2_PGTABLE_H_ */
+#endif	/* __ARM64_STAGE2_PGTABLE_H_ */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 50+ messages in thread
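
For concreteness, the arithmetic in the new stage2_pgtable.h works out as
follows for the 40bit IPA, taking the upstream helper definitions as a
given (a back-of-the-envelope sketch, not part of the patch):

	/* ARM64_HW_PGTABLE_LEVELS(b) resolves to ((b) - 4) / (PAGE_SHIFT - 3) */
	4K  pages: STAGE2_PGTABLE_LEVELS = (36 - 4) / 9  = 3
	16K pages: STAGE2_PGTABLE_LEVELS = (36 - 4) / 11 = 2
	64K pages: STAGE2_PGTABLE_LEVELS = (36 - 4) / 13 = 2

	/* S2_PGDIR_SHIFT = ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - STAGE2_PGTABLE_LEVELS) */
	4K : S2_PGDIR_SHIFT = 30 -> PTRS_PER_S2_PGD = 1 << 10 (two concatenated 512-entry tables)
	16K: S2_PGDIR_SHIFT = 25 -> PTRS_PER_S2_PGD = 1 << 15 (sixteen concatenated 2048-entry tables)
	64K: S2_PGDIR_SHIFT = 29 -> PTRS_PER_S2_PGD = 1 << 11 (a partial single table, no concatenation)

Note also that the stage2_p.d_addr_end() fallbacks compare (__boundary - 1)
against (end - 1) so that the result stays correct when (addr + S2_PGDIR_SIZE)
wraps to zero at the very top of the IPA range.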

* [RFC PATCH 11/12] kvm-arm: Cleanup stage2 pgd handling
  2016-03-14 16:52 ` Suzuki K Poulose
@ 2016-03-14 16:53   ` Suzuki K Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC (permalink / raw)
  To: christoffer.dall, marc.zyngier
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

Now that we don't have any fake page table levels for arm64,
clean up the common code to get rid of the dead code.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |   20 --------------------
 arch/arm/kvm/arm.c               |    2 +-
 arch/arm/kvm/mmu.c               |   25 ++++++++-----------------
 arch/arm64/include/asm/kvm_mmu.h |   18 ------------------
 4 files changed, 9 insertions(+), 56 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e670afa..02b2b3d 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -254,26 +254,6 @@ static inline bool kvm_pud_table_empty(struct kvm *kvm, pud_t *pudp)
 	return 0;
 }
 
-
-static inline void *kvm_get_hwpgd(struct kvm *kvm)
-{
-	return kvm->arch.pgd;
-}
-
-static inline unsigned int kvm_get_hwpgd_size(void)
-{
-	return PTRS_PER_S2_PGD * sizeof(pgd_t);
-}
-
-static inline pgd_t *kvm_setup_fake_pgd(pgd_t *hwpgd)
-{
-	return hwpgd;
-}
-
-static inline void kvm_free_fake_pgd(pgd_t *pgd)
-{
-}
-
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)	__cpuc_flush_dcache_area((a), (l))
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..a1443ef 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -443,7 +443,7 @@ static void update_vttbr(struct kvm *kvm)
 	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
 
 	/* update vttbr to be used with the new vmid */
-	pgd_phys = virt_to_phys(kvm_get_hwpgd(kvm));
+	pgd_phys = virt_to_phys(kvm->arch.pgd);
 	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
 	kvm->arch.vttbr = pgd_phys | vmid;
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 8568790..a283e7b 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -637,6 +637,11 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
 }
 
+static inline unsigned int kvm_get_hwpgd_size(void)
+{
+	return PTRS_PER_S2_PGD * sizeof(pgd_t);
+}
+
 /* Free the HW pgd, one page at a time */
 static void kvm_free_hwpgd(void *hwpgd)
 {
@@ -665,29 +670,16 @@ static void *kvm_alloc_hwpgd(void)
 int kvm_alloc_stage2_pgd(struct kvm *kvm)
 {
 	pgd_t *pgd;
-	void *hwpgd;
 
 	if (kvm->arch.pgd != NULL) {
 		kvm_err("kvm_arch already initialized?\n");
 		return -EINVAL;
 	}
 
-	hwpgd = kvm_alloc_hwpgd();
-	if (!hwpgd)
+	pgd = kvm_alloc_hwpgd();
+	if (!pgd)
 		return -ENOMEM;
 
-	/*
-	 * When the kernel uses more levels of page tables than the
-	 * guest, we allocate a fake PGD and pre-populate it to point
-	 * to the next-level page table, which will be the real
-	 * initial page table pointed to by the VTTBR.
-	 */
-	pgd = kvm_setup_fake_pgd(hwpgd);
-	if (IS_ERR(pgd)) {
-		kvm_free_hwpgd(hwpgd);
-		return PTR_ERR(pgd);
-	}
-
 	kvm_clean_pgd(pgd);
 	kvm->arch.pgd = pgd;
 	return 0;
@@ -791,8 +783,7 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 		return;
 
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
-	kvm_free_hwpgd(kvm_get_hwpgd(kvm));
-	kvm_free_fake_pgd(kvm->arch.pgd);
+	kvm_free_hwpgd(kvm->arch.pgd);
 	kvm->arch.pgd = NULL;
 }
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a6f9846..e998f63 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -242,24 +242,6 @@ static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
 	return kvm ? stage2_pgd_index(addr) : pgd_index(addr);
 }
 
-static inline void *kvm_get_hwpgd(struct kvm *kvm)
-{
-	return kvm->arch.pgd;
-}
-
-static inline unsigned int kvm_get_hwpgd_size(void)
-{
-	return PTRS_PER_S2_PGD * sizeof(pgd_t);
-}
-
-static inline pgd_t *kvm_setup_fake_pgd(pgd_t *hwpgd)
-{
-	return hwpgd;
-}
-
-static inline void kvm_free_fake_pgd(pgd_t *pgd)
-{
-}
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 50+ messages in thread
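
With the fake-PGD plumbing gone, the stage2 pgd lifecycle collapses to a
direct pairing. A summary sketch of the resulting flow (paraphrased, not a
verbatim quote of the code):

	kvm_alloc_stage2_pgd():  kvm->arch.pgd = kvm_alloc_hwpgd();
	update_vttbr():          vttbr = virt_to_phys(kvm->arch.pgd) | vmid;
	kvm_free_stage2_pgd():   kvm_free_hwpgd(kvm->arch.pgd);

kvm->arch.pgd now always points at the table the hardware actually walks,
which is what lets kvm_get_hwpgd(), kvm_setup_fake_pgd() and
kvm_free_fake_pgd() be deleted outright.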

* [RFC PATCH 12/12] arm64: kvm: Add support for 16K pages
  2016-03-14 16:52 ` Suzuki K Poulose
@ 2016-03-14 16:53   ` Suzuki K Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K Poulose @ 2016-03-14 16:53 UTC (permalink / raw)
  To: christoffer.dall, marc.zyngier
  Cc: kvmarm, linux-arm-kernel, mark.rutland, kvm, will.deacon,
	catalin.marinas, Suzuki K Poulose

Now that we can handle stage-2 page tables independently
of the host page table levels, wire up the 16K page
support.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/kvm_arm.h |   11 ++++++++++-
 arch/arm64/kvm/Kconfig           |    1 -
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index d49dd50..aa2e1d4 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -113,6 +113,7 @@
 #define VTCR_EL2_TG0_MASK	(3 << 14)
 #define VTCR_EL2_TG0_4K		(0 << 14)
 #define VTCR_EL2_TG0_64K	(1 << 14)
+#define VTCR_EL2_TG0_16K	(2 << 14)
 #define VTCR_EL2_SH0_MASK	(3 << 12)
 #define VTCR_EL2_SH0_INNER	(3 << 12)
 #define VTCR_EL2_ORGN0_MASK	(3 << 10)
@@ -134,7 +135,7 @@
  * (see hyp-init.S).
  *
  * Note that when using 4K pages, we concatenate two first level page tables
- * together.
+ * together. With 16K pages, we concatenate 16 first level page tables.
  *
  * The magic numbers used for VTTBR_X in this patch can be found in Tables
  * D4-23 and D4-25 in ARM DDI 0487A.b.
@@ -150,6 +151,14 @@
  */
 #define VTCR_EL2_TGRAN_FLAGS	(VTCR_EL2_TG0_64K | VTCR_EL2_SL0_LVL1)
 #define VTTBR_X_TGRAN_MAGIC		38
+#elif defined(CONFIG_ARM64_16K_PAGES)
+/*
+ * Stage2 translation configuration:
+ * 16kB pages (TG0 = 2)
+ * 2 level page tables (SL = 1)
+ */
+#define VTCR_EL2_TGRAN_FLAGS	(VTCR_EL2_TG0_16K | VTCR_EL2_SL0_LVL1)
+#define VTTBR_X_TGRAN_MAGIC	42
 #else
 /*
  * Stage2 translation configuration:
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a5272c0..e63022c 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -22,7 +22,6 @@ config KVM_ARM_VGIC_V3
 config KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
 	depends on OF
-	depends on !ARM64_16K_PAGES
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
 	select ANON_INODES
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 50+ messages in thread
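
A quick sanity check of the new VTTBR_X magic number against the stage2 pgd
sizes from the previous patch (hand arithmetic for the 40bit IPA, where
VTCR_EL2_T0SZ_40B is 24):

	4K : VTTBR_X = 37 - 24 = 13; pgd =  1024 entries * 8 bytes =   8KB = 2^13
	16K: VTTBR_X = 42 - 24 = 18; pgd = 32768 entries * 8 bytes = 256KB = 2^18
	64K: VTTBR_X = 38 - 24 = 14; pgd =  2048 entries * 8 bytes =  16KB = 2^14

i.e., VTTBR_X matches the natural alignment of the (possibly concatenated)
entry-level table in each configuration, as VTTBR_EL2.BADDR requires.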

* Re: [RFC PATCH 04/12] kvm-arm: Rename kvm_pmd_huge to huge_pmd
  2016-03-14 16:53   ` Suzuki K Poulose
@ 2016-03-14 17:06     ` Mark Rutland
  -1 siblings, 0 replies; 50+ messages in thread
From: Mark Rutland @ 2016-03-14 17:06 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: christoffer.dall, marc.zyngier, kvmarm, linux-arm-kernel, kvm,
	will.deacon, catalin.marinas

On Mon, Mar 14, 2016 at 04:53:03PM +0000, Suzuki K Poulose wrote:
> kvm_pmd_huge doesn't have any dependency on the page table
> where the pmd lives (i.e, hyp vs. stage2). So, rename it to
> huge_pmd() to make it explicit.
> 
> kvm_p.d_* wrappers will be used for helpers which differ
> across hyp vs stage2.
> 
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm/kvm/mmu.c |   18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index a16631c..3b038bb 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -44,7 +44,7 @@ static phys_addr_t hyp_idmap_vector;
>  
>  #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
>  
> -#define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
> +#define huge_pmd(_x)		(pmd_huge(_x) || pmd_trans_huge(_x))

I note that in arch/arm we have pmd_thp_or_huge() for this in
arch/arm/include/asm/pgtable-{2,3}level.h.

If we're going to rename this, it's probably best to align on that name,
which will also avoid any confusion as to the difference between
pmd_huge and huge_pmd.

Similarly, it might best live in pgtable.h if it isn't KVM-specific.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH 04/12] kvm-arm: Rename kvm_pmd_huge to huge_pmd
  2016-03-14 17:06     ` Mark Rutland
@ 2016-03-14 17:22       ` Suzuki K. Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K. Poulose @ 2016-03-14 17:22 UTC (permalink / raw)
  To: Mark Rutland
  Cc: christoffer.dall, marc.zyngier, kvmarm, linux-arm-kernel, kvm,
	will.deacon, catalin.marinas

On 14/03/16 17:06, Mark Rutland wrote:
> On Mon, Mar 14, 2016 at 04:53:03PM +0000, Suzuki K Poulose wrote:
>> kvm_pmd_huge doesn't have any dependency on the page table
>> where the pmd lives (i.e, hyp vs. stage2). So, rename it to
>> huge_pmd() to make it explicit.


>>   #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
>>
>> -#define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
>> +#define huge_pmd(_x)		(pmd_huge(_x) || pmd_trans_huge(_x))
>
> I note that in arch/arm we have pmd_thp_or_huge() for this in
> arch/arm/include/asm/pgtable-{2,3}level.h.
>
> If we're going to rename this, it's probably best to align on that name,
> which will also avoid and confusion as to the difference between
> pmd_huge and huge_pmd.
>
> Similarly, it might best live in pgtable.h if it isn't KVM-specific.

Thanks for that pointer, will define one for arm64 and use that in kvm.

Cheers
Suzuki


^ permalink raw reply	[flat|nested] 50+ messages in thread
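
For reference, the arm helper being pointed at is a one-liner, so the arm64
counterpart mentioned above would plausibly be the same definition (a sketch;
the name and placement follow the arch/arm precedent, not this series):

	/* as in arch/arm/include/asm/pgtable-3level.h */
	#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))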

* Re: [RFC PATCH 02/12] arm64: kvm: Fix {V}TCR_EL2_TG0 mask
  2016-03-14 16:53   ` Suzuki K Poulose
@ 2016-03-16 14:54     ` Marc Zyngier
  -1 siblings, 0 replies; 50+ messages in thread
From: Marc Zyngier @ 2016-03-16 14:54 UTC (permalink / raw)
  To: Suzuki K Poulose, christoffer.dall
  Cc: kvmarm, linux-arm-kernel, mark.rutland, kvm, will.deacon,
	catalin.marinas

On 14/03/16 16:53, Suzuki K Poulose wrote:
> {V}TCR_EL2_TG0 is a 2bit wide field, where:
> 
>  00 - 4K
>  01 - 64K
>  10 - 16K
> 
> But we use only 1 bit, which has worked well so far since
> we never cared about 16K. Fix it for 16K support.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: kvmarm@lists.cs.columbia.edu
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/include/asm/kvm_arm.h |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index d201d4b..b7d61e4 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -99,7 +99,7 @@
>  #define TCR_EL2_TBI	(1 << 20)
>  #define TCR_EL2_PS	(7 << 16)
>  #define TCR_EL2_PS_40B	(2 << 16)
> -#define TCR_EL2_TG0	(1 << 14)
> +#define TCR_EL2_TG0	(3 << 14)
>  #define TCR_EL2_SH0	(3 << 12)
>  #define TCR_EL2_ORGN0	(3 << 10)
>  #define TCR_EL2_IRGN0	(3 << 8)
> @@ -110,7 +110,7 @@
>  /* VTCR_EL2 Registers bits */
>  #define VTCR_EL2_RES1		(1 << 31)
>  #define VTCR_EL2_PS_MASK	(7 << 16)
> -#define VTCR_EL2_TG0_MASK	(1 << 14)
> +#define VTCR_EL2_TG0_MASK	(3 << 14)
>  #define VTCR_EL2_TG0_4K		(0 << 14)
>  #define VTCR_EL2_TG0_64K	(1 << 14)
>  #define VTCR_EL2_SH0_MASK	(3 << 12)
> 

As we already have arch/arm64/include/asm/pgtable-hwdef.h defining
TCR_TG0_{4,16,64}K, would it make sense to reuse those and drop the
locally defined values? Something like:

#define TCR_EL2_TG0_4K	TCR_TG0_4K

?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 50+ messages in thread
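
To spell out the bug being fixed: with the old 1-bit mask, the 16K encoding
is indistinguishable from the 4K one once masked (illustrative arithmetic
from the values in the patch):

	VTCR_EL2_TG0_16K & (1 << 14) == 0           /* aliases to TG0_4K: wrong */
	VTCR_EL2_TG0_16K & (3 << 14) == (2 << 14)   /* distinct with the fix */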

* Re: [RFC PATCH 03/12] arm64: kvm: Cleanup VTCR_EL2/VTTBR computation
  2016-03-14 16:53   ` Suzuki K Poulose
@ 2016-03-16 15:01     ` Marc Zyngier
  -1 siblings, 0 replies; 50+ messages in thread
From: Marc Zyngier @ 2016-03-16 15:01 UTC (permalink / raw)
  To: Suzuki K Poulose, christoffer.dall
  Cc: kvmarm, linux-arm-kernel, mark.rutland, kvm, will.deacon,
	catalin.marinas

On 14/03/16 16:53, Suzuki K Poulose wrote:
> No functional changes. Group the common bits for VTCR_EL2
> initialisation for better readability. The granule size
> and the entry level are controlled by the page size.
> 
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/include/asm/kvm_arm.h |   22 ++++++++++------------
>  1 file changed, 10 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index b7d61e4..d49dd50 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -139,32 +139,30 @@
>   * The magic numbers used for VTTBR_X in this patch can be found in Tables
>   * D4-23 and D4-25 in ARM DDI 0487A.b.
>   */
> +#define VTCR_EL2_COMMON_BITS	(VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
> +				 VTCR_EL2_IRGN0_WBWA | VTCR_EL2_SL0_LVL1 | \
> +				 VTCR_EL2_RES1 | VTCR_EL2_T0SZ_40B)
>  #ifdef CONFIG_ARM64_64K_PAGES
>  /*
>   * Stage2 translation configuration:
> - * 40bits input  (T0SZ = 24)
>   * 64kB pages (TG0 = 1)
>   * 2 level page tables (SL = 1)
>   */
> -#define VTCR_EL2_FLAGS		(VTCR_EL2_TG0_64K | VTCR_EL2_SH0_INNER | \
> -				 VTCR_EL2_ORGN0_WBWA | VTCR_EL2_IRGN0_WBWA | \
> -				 VTCR_EL2_SL0_LVL1 | VTCR_EL2_T0SZ_40B | \
> -				 VTCR_EL2_RES1)
> -#define VTTBR_X		(38 - VTCR_EL2_T0SZ_40B)
> +#define VTCR_EL2_TGRAN_FLAGS	(VTCR_EL2_TG0_64K | VTCR_EL2_SL0_LVL1)
> +#define VTTBR_X_TGRAN_MAGIC		38
>  #else
>  /*
>   * Stage2 translation configuration:
> - * 40bits input  (T0SZ = 24)
>   * 4kB pages (TG0 = 0)
>   * 3 level page tables (SL = 1)
>   */
> -#define VTCR_EL2_FLAGS		(VTCR_EL2_TG0_4K | VTCR_EL2_SH0_INNER | \
> -				 VTCR_EL2_ORGN0_WBWA | VTCR_EL2_IRGN0_WBWA | \
> -				 VTCR_EL2_SL0_LVL1 | VTCR_EL2_T0SZ_40B | \
> -				 VTCR_EL2_RES1)
> -#define VTTBR_X		(37 - VTCR_EL2_T0SZ_40B)
> +#define VTCR_EL2_TGRAN_FLAGS		(VTCR_EL2_TG0_4K | VTCR_EL2_SL0_LVL1)
> +#define VTTBR_X_TGRAN_MAGIC		37
>  #endif
>  
> +#define VTCR_EL2_FLAGS		(VTCR_EL2_TGRAN_FLAGS | VTCR_EL2_COMMON_BITS)
> +#define VTTBR_X			((VTTBR_X_TGRAN_MAGIC) - VTCR_EL2_T0SZ_40B)

Nit: spurious brackets.

It would be nice to add an ARMv8 ARM reference to where the "magic"
value is coming from.

> +
>  #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
>  #define VTTBR_BADDR_MASK  (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
>  #define VTTBR_VMID_SHIFT  (UL(48))
> 

Otherwise:

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH 02/12] arm64: kvm: Fix {V}TCR_EL2_TG0 mask
  2016-03-16 14:54     ` Marc Zyngier
@ 2016-03-16 15:35       ` Suzuki K. Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K. Poulose @ 2016-03-16 15:35 UTC (permalink / raw)
  To: Marc Zyngier, christoffer.dall
  Cc: kvmarm, linux-arm-kernel, mark.rutland, kvm, will.deacon,
	catalin.marinas

On 16/03/16 14:54, Marc Zyngier wrote:
> On 14/03/16 16:53, Suzuki K Poulose wrote:
>> {V}TCR_EL2_TG0 is a 2bit wide field, where:
>>
>>   00 - 4K
>>   01 - 64K
>>   10 - 16K
>>
>> But we use only 1 bit, which has worked well so far since
>> we never cared about 16K. Fix it for 16K support.


>> --- a/arch/arm64/include/asm/kvm_arm.h
>> +++ b/arch/arm64/include/asm/kvm_arm.h
>> @@ -99,7 +99,7 @@
>>   #define TCR_EL2_TBI	(1 << 20)
>>   #define TCR_EL2_PS	(7 << 16)
>>   #define TCR_EL2_PS_40B	(2 << 16)
>> -#define TCR_EL2_TG0	(1 << 14)
>> +#define TCR_EL2_TG0	(3 << 14)
>>   #define TCR_EL2_SH0	(3 << 12)
>>   #define TCR_EL2_ORGN0	(3 << 10)
>>   #define TCR_EL2_IRGN0	(3 << 8)
>> @@ -110,7 +110,7 @@
>>   /* VTCR_EL2 Registers bits */
>>   #define VTCR_EL2_RES1		(1 << 31)
>>   #define VTCR_EL2_PS_MASK	(7 << 16)
>> -#define VTCR_EL2_TG0_MASK	(1 << 14)
>> +#define VTCR_EL2_TG0_MASK	(3 << 14)
>>   #define VTCR_EL2_TG0_4K		(0 << 14)
>>   #define VTCR_EL2_TG0_64K	(1 << 14)
>>   #define VTCR_EL2_SH0_MASK	(3 << 12)
>>
>
> As we already have arch/arm64/include/asm/pgtable-hwdef.h defining
> TCR_TG0_{4,16,64}K, would it make sense to reuse those and drop the
> locally defined values? Something like:
>
> #define TCR_EL2_TG0_4K	TCR_TG0_4K
>
> ?

We could do that for both TCR_EL2 and VTCR_EL2.

Btw, since this patch doesn't touch any of those fields and fixes an issue,
I will keep that change separate. I can squash it into the following cleanup
patch in this series.

Cheers
Suzuki



^ permalink raw reply	[flat|nested] 50+ messages in thread
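
The follow-up cleanup agreed on here would then presumably read something
like this (a sketch extending Marc's example, not a quote from a later patch):

	/* extending Marc's example to VTCR_EL2 as well */
	#define VTCR_EL2_TG0_4K		TCR_TG0_4K
	#define VTCR_EL2_TG0_16K	TCR_TG0_16K
	#define VTCR_EL2_TG0_64K	TCR_TG0_64K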

* Re: [RFC PATCH 03/12] arm64: kvm: Cleanup VTCR_EL2/VTTBR computation
  2016-03-16 15:01     ` Marc Zyngier
@ 2016-03-16 15:37       ` Suzuki K. Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K. Poulose @ 2016-03-16 15:37 UTC (permalink / raw)
  To: Marc Zyngier, christoffer.dall
  Cc: kvmarm, linux-arm-kernel, mark.rutland, kvm, will.deacon,
	catalin.marinas

On 16/03/16 15:01, Marc Zyngier wrote:
> On 14/03/16 16:53, Suzuki K Poulose wrote:
>> No functional changes. Group the common bits for VTCR_EL2
>> initialisation for better readability. The granule size
>> and the entry level are controlled by the page size.

>>
>> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
>> index b7d61e4..d49dd50 100644
>> --- a/arch/arm64/include/asm/kvm_arm.h
>> +++ b/arch/arm64/include/asm/kvm_arm.h
>> @@ -139,32 +139,30 @@
>>    * The magic numbers used for VTTBR_X in this patch can be found in Tables
>>    * D4-23 and D4-25 in ARM DDI 0487A.b.
>>    */

...

>>
>> +#define VTCR_EL2_FLAGS		(VTCR_EL2_TGRAN_FLAGS | VTCR_EL2_COMMON_BITS)
>> +#define VTTBR_X			((VTTBR_X_TGRAN_MAGIC) - VTCR_EL2_T0SZ_40B)
>
> Nit: spurious brackets.
  
Will remove them.

> It would be nice to add an ARMv8 ARM reference to where the "magic"
> value is coming from.

That reference already exists in the code, see above.

>
>> +
>>   #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
>>   #define VTTBR_BADDR_MASK  (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
>>   #define VTTBR_VMID_SHIFT  (UL(48))
>>
>
> Otherwise:
>
> Acked-by: Marc Zyngier <marc.zyngier@arm.com>

Thanks
Suzuki


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH 03/12] arm64: kvm: Cleanup VTCR_EL2/VTTBR computation
  2016-03-16 15:37       ` Suzuki K. Poulose
@ 2016-03-16 15:45         ` Marc Zyngier
  -1 siblings, 0 replies; 50+ messages in thread
From: Marc Zyngier @ 2016-03-16 15:45 UTC (permalink / raw)
  To: Suzuki K. Poulose
  Cc: kvm, catalin.marinas, will.deacon, kvmarm, linux-arm-kernel

On Wed, 16 Mar 2016 15:37:18 +0000
"Suzuki K. Poulose" <Suzuki.Poulose@arm.com> wrote:

> On 16/03/16 15:01, Marc Zyngier wrote:
> > On 14/03/16 16:53, Suzuki K Poulose wrote:
> >> No functional changes. Group the common bits for VTCR_EL2
> >> initialisation for better readability. The granule size
> >> and the entry level are controlled by the page size.
> 
> >>
> >> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> >> index b7d61e4..d49dd50 100644
> >> --- a/arch/arm64/include/asm/kvm_arm.h
> >> +++ b/arch/arm64/include/asm/kvm_arm.h
> >> @@ -139,32 +139,30 @@
> >>    * The magic numbers used for VTTBR_X in this patch can be found in Tables
> >>    * D4-23 and D4-25 in ARM DDI 0487A.b.
> >>    */
> 
> ...
> 
> >>
> >> +#define VTCR_EL2_FLAGS		(VTCR_EL2_TGRAN_FLAGS | VTCR_EL2_COMMON_BITS)
> >> +#define VTTBR_X			((VTTBR_X_TGRAN_MAGIC) - VTCR_EL2_T0SZ_40B)
> >
> > Nit: spurious brackets.
>   
> Will remove them.
> 
> > It would be nice to add an ARMv8 ARM reference to where the "magic"
> > value is coming from.
> 
> That reference already exists in the code, see above.

Ah, good point!

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

* Re: [RFC PATCH 04/12] kvm-arm: Rename kvm_pmd_huge to huge_pmd
  2016-03-14 16:53   ` Suzuki K Poulose
@ 2016-03-22  8:55     ` Christoffer Dall
  -1 siblings, 0 replies; 50+ messages in thread
From: Christoffer Dall @ 2016-03-22  8:55 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: kvm, marc.zyngier, catalin.marinas, will.deacon, kvmarm,
	linux-arm-kernel

On Mon, Mar 14, 2016 at 04:53:03PM +0000, Suzuki K Poulose wrote:
> kvm_pmd_huge doesn't have any dependency on the page table
> where the pmd lives (i.e, hyp vs. stage2). So, rename it to
> huge_pmd() to make it explicit.
> 
> kvm_p.d_* wrappers will be used for helpers which differ
> across hyp vs stage2.

I don't understand this commit message.  Do you associate the kvm_
prefix specifically with one of hyp or stage2?

I remember reviewers in the past specifically asked to name anything
relating to pgtable macros in the kvm code with a kvm_ prefix to
distinguish them from logic used elsewhere in the kernel.

I specifically do not like having huge_pmd() be significantly different
in logic from pmd_huge(), so defining pmd_thp_or_huge() for arm64 is a
much better option.
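
Something like the below, for illustration (just a sketch; it carries
exactly the same logic as the existing kvm_pmd_huge()):

/* True if the pmd maps a huge page, either via hugetlbfs or via THP */
#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))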

Thanks,
-Christoffer


> 
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm/kvm/mmu.c |   18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index a16631c..3b038bb 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -44,7 +44,7 @@ static phys_addr_t hyp_idmap_vector;
>  
>  #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
>  
> -#define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
> +#define huge_pmd(_x)		(pmd_huge(_x) || pmd_trans_huge(_x))
>  #define kvm_pud_huge(_x)	pud_huge(_x)
>  
>  #define KVM_S2PTE_FLAG_IS_IOMAP		(1UL << 0)
> @@ -114,7 +114,7 @@ static bool kvm_is_device_pfn(unsigned long pfn)
>   */
>  static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
>  {
> -	if (!kvm_pmd_huge(*pmd))
> +	if (!huge_pmd(*pmd))
>  		return;
>  
>  	pmd_clear(pmd);
> @@ -176,7 +176,7 @@ static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
>  static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
>  {
>  	pte_t *pte_table = pte_offset_kernel(pmd, 0);
> -	VM_BUG_ON(kvm_pmd_huge(*pmd));
> +	VM_BUG_ON(huge_pmd(*pmd));
>  	pmd_clear(pmd);
>  	kvm_tlb_flush_vmid_ipa(kvm, addr);
>  	pte_free_kernel(NULL, pte_table);
> @@ -239,7 +239,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
>  	do {
>  		next = kvm_pmd_addr_end(addr, end);
>  		if (!pmd_none(*pmd)) {
> -			if (kvm_pmd_huge(*pmd)) {
> +			if (huge_pmd(*pmd)) {
>  				pmd_t old_pmd = *pmd;
>  
>  				pmd_clear(pmd);
> @@ -325,7 +325,7 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
>  	do {
>  		next = kvm_pmd_addr_end(addr, end);
>  		if (!pmd_none(*pmd)) {
> -			if (kvm_pmd_huge(*pmd))
> +			if (huge_pmd(*pmd))
>  				kvm_flush_dcache_pmd(*pmd);
>  			else
>  				stage2_flush_ptes(kvm, pmd, addr, next);
> @@ -1043,7 +1043,7 @@ static void stage2_wp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
>  	do {
>  		next = kvm_pmd_addr_end(addr, end);
>  		if (!pmd_none(*pmd)) {
> -			if (kvm_pmd_huge(*pmd)) {
> +			if (huge_pmd(*pmd)) {
>  				if (!kvm_s2pmd_readonly(pmd))
>  					kvm_set_s2pmd_readonly(pmd);
>  			} else {
> @@ -1324,7 +1324,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
>  	if (!pmd || pmd_none(*pmd))	/* Nothing there */
>  		goto out;
>  
> -	if (kvm_pmd_huge(*pmd)) {	/* THP, HugeTLB */
> +	if (huge_pmd(*pmd)) {	/* THP, HugeTLB */
>  		*pmd = pmd_mkyoung(*pmd);
>  		pfn = pmd_pfn(*pmd);
>  		pfn_valid = true;
> @@ -1532,7 +1532,7 @@ static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
>  	if (!pmd || pmd_none(*pmd))	/* Nothing there */
>  		return 0;
>  
> -	if (kvm_pmd_huge(*pmd)) {	/* THP, HugeTLB */
> +	if (huge_pmd(*pmd)) {	/* THP, HugeTLB */
>  		if (pmd_young(*pmd)) {
>  			*pmd = pmd_mkold(*pmd);
>  			return 1;
> @@ -1562,7 +1562,7 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
>  	if (!pmd || pmd_none(*pmd))	/* Nothing there */
>  		return 0;
>  
> -	if (kvm_pmd_huge(*pmd))		/* THP, HugeTLB */
> +	if (huge_pmd(*pmd))		/* THP, HugeTLB */
>  		return pmd_young(*pmd);
>  
>  	pte = pte_offset_kernel(pmd, gpa);
> -- 
> 1.7.9.5
> 


* Re: [RFC PATCH 06/12] kvm-arm: Pass kvm parameter for pagetable helpers
  2016-03-14 16:53   ` Suzuki K Poulose
@ 2016-03-22  9:30     ` Christoffer Dall
  -1 siblings, 0 replies; 50+ messages in thread
From: Christoffer Dall @ 2016-03-22  9:30 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: marc.zyngier, kvmarm, linux-arm-kernel, mark.rutland, kvm,
	will.deacon, catalin.marinas

On Mon, Mar 14, 2016 at 04:53:05PM +0000, Suzuki K Poulose wrote:
> Pass 'kvm' to existing kvm_p.d_* page table wrappers to prepare
> them to choose between hyp and stage2 page table. No functional
> changes yet. Also while at it, convert them to static inline
> functions.

I have to say that I'm not really crazy about the idea of having common
hyp and stage2 code and having the pgtable macros change behavior
depending on the type.

Is it not so that the host pgtable macros will always be valid for the
hyp mappings, because we have the same VA space available etc.?  It's
just a matter of different page table entry attributes.

Looking at arch/arm/kvm/mmu.c, it looks to me like we would get the
cleanest separation by separating stuff that touches hyp page tables
from stuff that touches stage2 page tables.

Then you can get rid of the whole kvm_ prefix and directly use stage2
accessors (which you may want to consider renaming to s2_).
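
E.g., as a sketch (the s2_ naming and the S2_PMD_SIZE/S2_PMD_MASK
constants are hypothetical here, to be derived from the stage2
configuration):

static inline phys_addr_t
s2_pmd_addr_end(phys_addr_t addr, phys_addr_t end)
{
	phys_addr_t boundary = (addr + S2_PMD_SIZE) & S2_PMD_MASK;

	return (boundary - 1 < end - 1) ? boundary : end;
}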

I think we've seen in the past that the confusion from functions
potentially touching both hyp and stage2 page tables is a bad thing and
we should seek to avoid it.

Thanks,
-Christoffer

> 
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm/include/asm/kvm_mmu.h   |   38 +++++++++++++++++++++++++++-----------
>  arch/arm/kvm/mmu.c               |   34 +++++++++++++++++-----------------
>  arch/arm64/include/asm/kvm_mmu.h |   31 ++++++++++++++++++++++++++-----
>  3 files changed, 70 insertions(+), 33 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 4448e77..17c6781 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -45,6 +45,7 @@
>  #ifndef __ASSEMBLY__
>  
>  #include <linux/highmem.h>
> +#include <linux/hugetlb.h>
>  #include <asm/cacheflush.h>
>  #include <asm/pgalloc.h>
>  
> @@ -135,22 +136,37 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>  	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
>  }
>  
> -#define kvm_pud_huge(_x)	pud_huge(_x)
> +static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
> +{
> +	return pud_huge(pud);
> +}
> +
>  
>  /* Open coded p*d_addr_end that can deal with 64bit addresses */
> -#define kvm_pgd_addr_end(addr, end)					\
> -({	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;		\
> -	(__boundary - 1 < (end) - 1)? __boundary: (end);		\
> -})
> +static inline phys_addr_t
> +kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
> +{
> +	phys_addr_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;
> +	return (boundary - 1 < end - 1) ? boundary : end;
> +}
>  
> -#define kvm_pud_addr_end(addr,end)		(end)
> +static inline phys_addr_t
> +kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
> +{
> +	return end;
> +}
>  
> -#define kvm_pmd_addr_end(addr, end)					\
> -({	u64 __boundary = ((addr) + PMD_SIZE) & PMD_MASK;		\
> -	(__boundary - 1 < (end) - 1)? __boundary: (end);		\
> -})
> +static inline phys_addr_t
> +kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
> +{
> +	phys_addr_t boundary = (addr + PMD_SIZE) & PMD_MASK;
> +	return (boundary - 1 < end - 1) ? boundary : end;
> +}
>  
> -#define kvm_pgd_index(addr)			pgd_index(addr)
> +static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
> +{
> +	return pgd_index(addr);
> +}
>  
>  static inline bool kvm_page_empty(void *ptr)
>  {
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index d1e9a71..22b4c99 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -165,7 +165,7 @@ static void clear_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
>  static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
>  {
>  	pmd_t *pmd_table = pmd_offset(pud, 0);
> -	VM_BUG_ON(pud_huge(*pud));
> +	VM_BUG_ON(kvm_pud_huge(kvm, *pud));
>  	pud_clear(pud);
>  	kvm_tlb_flush_vmid_ipa(kvm, addr);
>  	pmd_free(NULL, pmd_table);
> @@ -236,7 +236,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
>  
>  	start_pmd = pmd = pmd_offset(pud, addr);
>  	do {
> -		next = kvm_pmd_addr_end(addr, end);
> +		next = kvm_pmd_addr_end(kvm, addr, end);
>  		if (!pmd_none(*pmd)) {
>  			if (huge_pmd(*pmd)) {
>  				pmd_t old_pmd = *pmd;
> @@ -265,9 +265,9 @@ static void unmap_puds(struct kvm *kvm, pgd_t *pgd,
>  
>  	start_pud = pud = pud_offset(pgd, addr);
>  	do {
> -		next = kvm_pud_addr_end(addr, end);
> +		next = kvm_pud_addr_end(kvm, addr, end);
>  		if (!pud_none(*pud)) {
> -			if (pud_huge(*pud)) {
> +			if (kvm_pud_huge(kvm, *pud)) {
>  				pud_t old_pud = *pud;
>  
>  				pud_clear(pud);
> @@ -294,9 +294,9 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
>  	phys_addr_t addr = start, end = start + size;
>  	phys_addr_t next;
>  
> -	pgd = pgdp + kvm_pgd_index(addr);
> +	pgd = pgdp + kvm_pgd_index(kvm, addr);
>  	do {
> -		next = kvm_pgd_addr_end(addr, end);
> +		next = kvm_pgd_addr_end(kvm, addr, end);
>  		if (!pgd_none(*pgd))
>  			unmap_puds(kvm, pgd, addr, next);
>  	} while (pgd++, addr = next, addr != end);
> @@ -322,7 +322,7 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
>  
>  	pmd = pmd_offset(pud, addr);
>  	do {
> -		next = kvm_pmd_addr_end(addr, end);
> +		next = kvm_pmd_addr_end(kvm, addr, end);
>  		if (!pmd_none(*pmd)) {
>  			if (huge_pmd(*pmd))
>  				kvm_flush_dcache_pmd(*pmd);
> @@ -340,9 +340,9 @@ static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
>  
>  	pud = pud_offset(pgd, addr);
>  	do {
> -		next = kvm_pud_addr_end(addr, end);
> +		next = kvm_pud_addr_end(kvm, addr, end);
>  		if (!pud_none(*pud)) {
> -			if (pud_huge(*pud))
> +			if (kvm_pud_huge(kvm, *pud))
>  				kvm_flush_dcache_pud(*pud);
>  			else
>  				stage2_flush_pmds(kvm, pud, addr, next);
> @@ -358,9 +358,9 @@ static void stage2_flush_memslot(struct kvm *kvm,
>  	phys_addr_t next;
>  	pgd_t *pgd;
>  
> -	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
> +	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
>  	do {
> -		next = kvm_pgd_addr_end(addr, end);
> +		next = kvm_pgd_addr_end(kvm, addr, end);
>  		stage2_flush_puds(kvm, pgd, addr, next);
>  	} while (pgd++, addr = next, addr != end);
>  }
> @@ -802,7 +802,7 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
>  	pgd_t *pgd;
>  	pud_t *pud;
>  
> -	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
> +	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
>  	if (WARN_ON(pgd_none(*pgd))) {
>  		if (!cache)
>  			return NULL;
> @@ -1040,7 +1040,7 @@ static void stage2_wp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
>  	pmd = pmd_offset(pud, addr);
>  
>  	do {
> -		next = kvm_pmd_addr_end(addr, end);
> +		next = kvm_pmd_addr_end(NULL, addr, end);
>  		if (!pmd_none(*pmd)) {
>  			if (huge_pmd(*pmd)) {
>  				if (!kvm_s2pmd_readonly(pmd))
> @@ -1067,10 +1067,10 @@ static void  stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
>  
>  	pud = pud_offset(pgd, addr);
>  	do {
> -		next = kvm_pud_addr_end(addr, end);
> +		next = kvm_pud_addr_end(NULL, addr, end);
>  		if (!pud_none(*pud)) {
>  			/* TODO:PUD not supported, revisit later if supported */
> -			BUG_ON(kvm_pud_huge(*pud));
> +			BUG_ON(kvm_pud_huge(NULL, *pud));
>  			stage2_wp_pmds(pud, addr, next);
>  		}
>  	} while (pud++, addr = next, addr != end);
> @@ -1087,7 +1087,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
>  	pgd_t *pgd;
>  	phys_addr_t next;
>  
> -	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
> +	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
>  	do {
>  		/*
>  		 * Release kvm_mmu_lock periodically if the memory region is
> @@ -1099,7 +1099,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
>  		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
>  			cond_resched_lock(&kvm->mmu_lock);
>  
> -		next = kvm_pgd_addr_end(addr, end);
> +		next = kvm_pgd_addr_end(kvm, addr, end);
>  		if (pgd_present(*pgd))
>  			stage2_wp_puds(pgd, addr, next);
>  	} while (pgd++, addr = next, addr != end);
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index a01d87d..416ca23 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -71,6 +71,7 @@
>  #include <asm/cacheflush.h>
>  #include <asm/mmu_context.h>
>  #include <asm/pgtable.h>
> +#include <linux/hugetlb.h>
>  
>  #define KERN_TO_HYP(kva)	((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
>  
> @@ -141,11 +142,28 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>  	return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
>  }
>  
> -#define kvm_pud_huge(_x)	pud_huge(_x)
> +static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
> +{
> +	return pud_huge(pud);
> +}
> +
> +static inline phys_addr_t
> +kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
> +{
> +	return	pgd_addr_end(addr, end);
> +}
> +
> +static inline phys_addr_t
> +kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
> +{
> +	return	pud_addr_end(addr, end);
> +}
>  
> -#define kvm_pgd_addr_end(addr, end)	pgd_addr_end(addr, end)
> -#define kvm_pud_addr_end(addr, end)	pud_addr_end(addr, end)
> -#define kvm_pmd_addr_end(addr, end)	pmd_addr_end(addr, end)
> +static inline phys_addr_t
> +kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
> +{
> +	return	pmd_addr_end(addr, end);
> +}
>  
>  /*
>   * In the case where PGDIR_SHIFT is larger than KVM_PHYS_SHIFT, we can address
> @@ -161,7 +179,10 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>  #endif
>  #define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)
>  
> -#define kvm_pgd_index(addr)	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
> +static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
> +{
> +	return (addr >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1);
> +}
>  
>  /*
>   * If we are concatenating first level stage-2 page tables, we would have less
> -- 
> 1.7.9.5
> 


* Re: [RFC PATCH 04/12] kvm-arm: Rename kvm_pmd_huge to huge_pmd
  2016-03-22  8:55     ` Christoffer Dall
@ 2016-03-22 10:03       ` Suzuki K. Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K. Poulose @ 2016-03-22 10:03 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: marc.zyngier, kvmarm, linux-arm-kernel, mark.rutland, kvm,
	will.deacon, catalin.marinas

On 22/03/16 08:55, Christoffer Dall wrote:
> On Mon, Mar 14, 2016 at 04:53:03PM +0000, Suzuki K Poulose wrote:
>> kvm_pmd_huge doesn't have any dependency on the page table
>> where the pmd lives (i.e, hyp vs. stage2). So, rename it to
>> huge_pmd() to make it explicit.
>>
>> kvm_p.d_* wrappers will be used for helpers which differ
>> across hyp vs stage2.
>
> I don't understand this commit message.  Do you associate the kvm_
> prefix specifically with one of hyp or stage2?

So the idea is that the kvm_ prefix will be used for helpers which handle
either hyp or stage2, depending on what we are dealing with (i.e., the kvm
parameter passed to the helpers).
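
For example, such a helper would look like this (sketch only;
stage2_pmd_addr_end() stands for the stage2 specific accessor this
series introduces):

static inline phys_addr_t
kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
{
	/* kvm != NULL => stage2 table, kvm == NULL => host/hyp table */
	return kvm ? stage2_pmd_addr_end(addr, end) : pmd_addr_end(addr, end);
}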

So here, we just want to know if a given pmd represents a huge page, either
via THP or via hugetlb, and that doesn't have anything to do with hyp or
stage2. Hence the change.

>
> I remember reviewers in the past specifically asked to name anything
> relating to pgtable macros in the kvm code with a kvm_ prefix to
> distinguish them from logic used elsewhere in the kernel.

Correct. In this case it doesn't apply to kvm_pmd_huge().

>
> I specifically do not like having huge_pmd() be significantly different
> in logic from pmd_huge(), so defining pmd_thp_or_huge() for arm64 is a
> much better option.

Yes, I have switched to that in the next version.

Thanks
Suzuki


* Re: [RFC PATCH 06/12] kvm-arm: Pass kvm parameter for pagetable helpers
  2016-03-22  9:30     ` Christoffer Dall
@ 2016-03-22 10:15       ` Suzuki K. Poulose
  -1 siblings, 0 replies; 50+ messages in thread
From: Suzuki K. Poulose @ 2016-03-22 10:15 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: marc.zyngier, kvmarm, linux-arm-kernel, mark.rutland, kvm,
	will.deacon, catalin.marinas

On 22/03/16 09:30, Christoffer Dall wrote:
> On Mon, Mar 14, 2016 at 04:53:05PM +0000, Suzuki K Poulose wrote:
>> Pass 'kvm' to existing kvm_p.d_* page table wrappers to prepare
>> them to choose between hyp and stage2 page table. No functional
>> changes yet. Also while at it, convert them to static inline
>> functions.
>
> I have to say that I'm not really crazy about the idea of having common
> hyp and stage2 code and having the pgtable macros change behavior
> depending on the type.
>
> Is it not so that the host pgtable macros will always be valid for the
> hyp mappings, because we have the same VA space available etc.?  It's
> just a matter of different page table entry attributes.

Yes, the host pgtable macros are still used for hyp mappings (when kvm ==
NULL), and we do use explicit accessors (stage2_xxx) wherever possible with
this series.

>
> Looking at arch/arm/kvm/mmu.c, it looks to me like we would get the
> cleanest separation by separating stuff that touches hyp page tables
> from stuff that touches stage2 page tables.

OK. Here are the routines which deal with both types:

unmap_range, unmap_p{u,m}ds, unmap_ptes, clear_p{g,u,m}_entry

Duplicating them won't be that much trouble.
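
E.g., the stage2-only copy of unmap_pmds() could look like the sketch
below (the stage2_* accessor names are hypothetical placeholders for the
dedicated stage2 helpers):

static void stage2_unmap_pmds(struct kvm *kvm, pud_t *pud,
			      phys_addr_t addr, phys_addr_t end)
{
	phys_addr_t next;
	pmd_t *pmd = stage2_pmd_offset(pud, addr);

	do {
		next = stage2_pmd_addr_end(addr, end);
		if (!pmd_none(*pmd)) {
			if (pmd_thp_or_huge(*pmd)) {
				/* Tear down a huge (THP/hugetlb) mapping */
				pmd_t old_pmd = *pmd;

				pmd_clear(pmd);
				kvm_tlb_flush_vmid_ipa(kvm, addr);
				kvm_flush_dcache_pmd(old_pmd);
				put_page(virt_to_page(pmd));
			} else {
				stage2_unmap_ptes(kvm, pmd, addr, next);
			}
		}
	} while (pmd++, addr = next, addr != end);
	/* (the empty-table cleanup of the pud entry is omitted here) */
}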

> Then you can get rid of the whole kvm_ prefix and directly use stage2
> accessors (which you may want to consider renaming to s2_).

Right.

>
> I think we've seen in the past that the confusion from functions
> potentially touching both hyp and stage2 page tables is a bad thing and
> we should seek to avoid it.

OK, I will respin the series with the proposed changes.


Thanks
Suzuki

* Re: [RFC PATCH 06/12] kvm-arm: Pass kvm parameter for pagetable helpers
  2016-03-22 10:15       ` Suzuki K. Poulose
@ 2016-03-22 10:30         ` Christoffer Dall
  -1 siblings, 0 replies; 50+ messages in thread
From: Christoffer Dall @ 2016-03-22 10:30 UTC (permalink / raw)
  To: Suzuki K. Poulose
  Cc: kvm, marc.zyngier, catalin.marinas, will.deacon, kvmarm,
	linux-arm-kernel

On Tue, Mar 22, 2016 at 10:15:11AM +0000, Suzuki K. Poulose wrote:
> On 22/03/16 09:30, Christoffer Dall wrote:
> >On Mon, Mar 14, 2016 at 04:53:05PM +0000, Suzuki K Poulose wrote:
> >>Pass 'kvm' to existing kvm_p.d_* page table wrappers to prepare
> >>them to choose between hyp and stage2 page table. No functional
> >>changes yet. Also while at it, convert them to static inline
> >>functions.
> >
> >I have to say that I'm not really crazy about the idea of having common
> >hyp and stage2 code and having the pgtable macros change behavior
> >depending on the type.
> >
> >Is it not so that the host pgtable macros will always be valid for the
> >hyp mappings, because we have the same VA space available etc.?  It's
> >just a matter of different page table entry attributes.
> 
> Yes, the host pgtable macros are still used for hyp mappings (when kvm ==
> NULL), and we do use explicit accessors (stage2_xxx) wherever possible with
> this series.
> 
> >
> >Looking at arch/arm/kvm/mmu.c, it looks to me like we would get the
> >cleanest separation by separating stuff that touches hyp page tables
> >from stuff that touches stage2 page tables.
> 
> OK. Here are the routines which deal with both types:
> 
> unmap_range, unmap_p{u,m}ds, unmap_ptes, clear_p{g,u,m}_entry
> 
> Duplicating them won't be that much trouble.
> 
> >Then you can get rid of the whole kvm_ prefix and directly use stage2
> >accessors (which you may want to consider renaming to s2_).
> 
> Right.
> 
> >
> >I think we've seen in the past that the confusion from functions
> >potentially touching both hyp and stage2 page tables is a bad thing and
> >we should seek to avoid it.
> 
> OK, I will respin the series with the proposed changes.
> 
Great, thanks a lot!!

-Christoffer
