* [PATCH 0/2] arm/arm64: KVM: Host 48-bit VA support and IPA limits
@ 2014-09-25 19:42 ` Christoffer Dall
  0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-09-25 19:42 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm; +Cc: kvm, Christoffer Dall

The following two patches fix up missing memory handling in
KVM/arm64.

The first patch adds support for a 48-bit virtual address space, which
involves supporting a different number of page table levels in the host
kernel and in the stage-2 page tables.  This patch also removes "depends on
BROKEN" from CONFIG_ARM64_VA_BITS_48, which had been introduced because of
the KVM breakage.

The second patch ensures that userspace cannot create memory slots whose
IPA range exceeds what is addressable with VTCR_EL2.T0SZ = 24.
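
(For reference, and assuming the existing KVM_PHYS_SHIFT definition of 40,
this is roughly the limit involved:

	/* VTCR_EL2.T0SZ = 24  =>  2^(64 - 24) = 2^40 bytes of guest IPA space */
	#define KVM_PHYS_SHIFT	(40)
	#define KVM_PHYS_SIZE	(1UL << KVM_PHYS_SHIFT)

so memslots must fall below KVM_PHYS_SIZE >> PAGE_SHIFT guest frames.)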

The following host configurations have been tested with KVM on APM
Mustang:

 1) 4KB  + 39 bits VA space
 2) 4KB  + 48 bits VA space

Due to the 64K/GICV issue, I couldn't test a 64K setup on Mustang.  If
anyone can test this on Seattle, it would be much appreciated.

Tested on TC2 for regressions.

Christoffer Dall (2):
  arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
  arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE

 arch/arm/include/asm/kvm_mmu.h   |  23 +++++++
 arch/arm/kvm/arm.c               |   2 +-
 arch/arm/kvm/mmu.c               | 119 +++++++++++++++++++++++++++-------
 arch/arm64/Kconfig               |   1 -
 arch/arm64/include/asm/kvm_mmu.h | 136 ++++++++++++++++++++++++++++++++++++---
 5 files changed, 247 insertions(+), 34 deletions(-)

-- 
2.0.0



* [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
  2014-09-25 19:42 ` Christoffer Dall
@ 2014-09-25 19:42   ` Christoffer Dall
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-09-25 19:42 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: kvm, Christoffer Dall, Marc Zyngier, Catalin Marinas, Jungseok Lee

This patch adds the necessary support for all host kernel PGSIZE and
VA_SPACE configuration options for both EL2 and the Stage-2 page tables.

However, for 40-bit and 42-bit PARange systems, the architecture mandates
that VTCR_EL2.SL0 is at most 1, resulting in fewer levels of stage-2
page tables than levels of host kernel page tables.  At the same time, for
systems with a PARange > 42 bits, we limit the IPA range by always setting
VTCR_EL2.T0SZ to 24.
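
(To make the mismatch concrete for the 4K-page case -- back-of-the-envelope
numbers only:

	host:    48-bit VA, 4K pages            -> 4 levels of page tables
	stage-2: 40-bit IPA, 4K pages, SL0 <= 1 -> at most 3 levels of lookup

so the Stage-2 walk starts one level further down than the host kernel's
page table code expects.)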

To handle the case where Stage-2 translation uses different levels of page
tables than the host kernel, we allocate a dummy PGD with pointers to our
actual initial-level Stage-2 page table, so that we can reuse the kernel's
pgtable manipulation primitives.  Reproducing all of these in KVM would not
look pretty and would unnecessarily complicate the 32-bit side.
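
(Roughly, for the 4K/4-level host configuration, the layout is:

	kvm->arch.pgd (dummy, software only)     pre-allocated tables
	pgd[0]  ------------------------------>  1st PUD page  <- VTTBR (kvm_get_hwpgd())
	pgd[1]  ------------------------------>  2nd PUD page

i.e. the VTTBR is programmed with the physical address of the pre-allocated
first-level Stage-2 table, not with the dummy PGD itself.)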

Systems with a PARange < 40 bits are not yet supported.

 [ I have reworked this patch from its original form submitted by
   Jungseok to take the architecture constraints into consideration.
   There were too many changes from the original patch for me to
   preserve the authorship.  Thanks to Catalin Marinas for his help in
   figuring out a good solution to this challenge.  I have also fixed
   various bugs and missing error code handling from the original
   patch. - Christoffer ]

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/include/asm/kvm_mmu.h   |  23 +++++++
 arch/arm/kvm/arm.c               |   2 +-
 arch/arm/kvm/mmu.c               | 111 +++++++++++++++++++++++++-------
 arch/arm64/Kconfig               |   1 -
 arch/arm64/include/asm/kvm_mmu.h | 136 ++++++++++++++++++++++++++++++++++++---
 5 files changed, 239 insertions(+), 34 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 3f688b4..0ffd2a8 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -37,6 +37,11 @@
  */
 #define TRAMPOLINE_VA		UL(CONFIG_VECTORS_BASE)
 
+/*
+ * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation levels.
+ */
+#define KVM_MMU_CACHE_MIN_PAGES	2
+
 #ifndef __ASSEMBLY__
 
 #include <asm/cacheflush.h>
@@ -83,6 +88,11 @@ static inline void kvm_clean_pgd(pgd_t *pgd)
 	clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
 }
 
+static inline void kvm_clean_pmd(pmd_t *pmd)
+{
+	clean_dcache_area(pmd, PTRS_PER_PMD * sizeof(pmd_t));
+}
+
 static inline void kvm_clean_pmd_entry(pmd_t *pmd)
 {
 	clean_pmd_entry(pmd);
@@ -127,6 +137,19 @@ static inline bool kvm_page_empty(void *ptr)
 #define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
 #define kvm_pud_table_empty(pudp) (0)
 
+#define KVM_PREALLOC_LEVELS	0
+
+static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
+{
+	return 0;
+}
+
+static inline void kvm_free_hwpgd(struct kvm *kvm) { }
+
+static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
+{
+	return virt_to_phys(kvm->arch.pgd);
+}
 
 struct kvm;
 
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 7796051..048f37f 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -409,7 +409,7 @@ static void update_vttbr(struct kvm *kvm)
 	kvm_next_vmid++;
 
 	/* update vttbr to be used with the new vmid */
-	pgd_phys = virt_to_phys(kvm->arch.pgd);
+	pgd_phys = kvm_get_hwpgd(kvm);
 	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
 	kvm->arch.vttbr = pgd_phys | vmid;
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index bb06f76..4532f5f 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -33,6 +33,7 @@
 
 extern char  __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 
+static int kvm_prealloc_levels;
 static pgd_t *boot_hyp_pgd;
 static pgd_t *hyp_pgd;
 static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
@@ -42,7 +43,7 @@ static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
 
-#define pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
+#define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
 
 #define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
 
@@ -158,7 +159,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
 		}
 	} while (pmd++, addr = next, addr != end);
 
-	if (kvm_pmd_table_empty(start_pmd))
+	if (kvm_pmd_table_empty(start_pmd) && (!kvm || kvm_prealloc_levels < 2))
 		clear_pud_entry(kvm, pud, start_addr);
 }
 
@@ -182,7 +183,7 @@ static void unmap_puds(struct kvm *kvm, pgd_t *pgd,
 		}
 	} while (pud++, addr = next, addr != end);
 
-	if (kvm_pud_table_empty(start_pud))
+	if (kvm_pud_table_empty(start_pud) && (!kvm || kvm_prealloc_levels < 1))
 		clear_pgd_entry(kvm, pgd, start_addr);
 }
 
@@ -306,7 +307,7 @@ void free_boot_hyp_pgd(void)
 	if (boot_hyp_pgd) {
 		unmap_range(NULL, boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
 		unmap_range(NULL, boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
-		free_pages((unsigned long)boot_hyp_pgd, pgd_order);
+		free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order);
 		boot_hyp_pgd = NULL;
 	}
 
@@ -343,7 +344,7 @@ void free_hyp_pgds(void)
 		for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
 			unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
 
-		free_pages((unsigned long)hyp_pgd, pgd_order);
+		free_pages((unsigned long)hyp_pgd, hyp_pgd_order);
 		hyp_pgd = NULL;
 	}
 
@@ -401,13 +402,46 @@ static int create_hyp_pmd_mappings(pud_t *pud, unsigned long start,
 	return 0;
 }
 
+static int create_hyp_pud_mappings(pgd_t *pgd, unsigned long start,
+				   unsigned long end, unsigned long pfn,
+				   pgprot_t prot)
+{
+	pud_t *pud;
+	pmd_t *pmd;
+	unsigned long addr, next;
+	int ret;
+
+	addr = start;
+	do {
+		pud = pud_offset(pgd, addr);
+
+		if (pud_none_or_clear_bad(pud)) {
+			pmd = pmd_alloc_one(NULL, addr);
+			if (!pmd) {
+				kvm_err("Cannot allocate Hyp pmd\n");
+				return -ENOMEM;
+			}
+			pud_populate(NULL, pud, pmd);
+			get_page(virt_to_page(pud));
+			kvm_flush_dcache_to_poc(pud, sizeof(*pud));
+		}
+
+		next = pud_addr_end(addr, end);
+		ret = create_hyp_pmd_mappings(pud, addr, next, pfn, prot);
+		if (ret)
+			return ret;
+		pfn += (next - addr) >> PAGE_SHIFT;
+	} while (addr = next, addr != end);
+
+	return 0;
+}
+
 static int __create_hyp_mappings(pgd_t *pgdp,
 				 unsigned long start, unsigned long end,
 				 unsigned long pfn, pgprot_t prot)
 {
 	pgd_t *pgd;
 	pud_t *pud;
-	pmd_t *pmd;
 	unsigned long addr, next;
 	int err = 0;
 
@@ -416,22 +450,21 @@ static int __create_hyp_mappings(pgd_t *pgdp,
 	end = PAGE_ALIGN(end);
 	do {
 		pgd = pgdp + pgd_index(addr);
-		pud = pud_offset(pgd, addr);
 
-		if (pud_none_or_clear_bad(pud)) {
-			pmd = pmd_alloc_one(NULL, addr);
-			if (!pmd) {
-				kvm_err("Cannot allocate Hyp pmd\n");
+		if (pgd_none(*pgd)) {
+			pud = pud_alloc_one(NULL, addr);
+			if (!pud) {
+				kvm_err("Cannot allocate Hyp pud\n");
 				err = -ENOMEM;
 				goto out;
 			}
-			pud_populate(NULL, pud, pmd);
-			get_page(virt_to_page(pud));
-			kvm_flush_dcache_to_poc(pud, sizeof(*pud));
+			pgd_populate(NULL, pgd, pud);
+			get_page(virt_to_page(pgd));
+			kvm_flush_dcache_to_poc(pgd, sizeof(*pgd));
 		}
 
 		next = pgd_addr_end(addr, end);
-		err = create_hyp_pmd_mappings(pud, addr, next, pfn, prot);
+		err = create_hyp_pud_mappings(pgd, addr, next, pfn, prot);
 		if (err)
 			goto out;
 		pfn += (next - addr) >> PAGE_SHIFT;
@@ -521,6 +554,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
  */
 int kvm_alloc_stage2_pgd(struct kvm *kvm)
 {
+	int ret;
 	pgd_t *pgd;
 
 	if (kvm->arch.pgd != NULL) {
@@ -533,9 +567,17 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
 		return -ENOMEM;
 
 	memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t));
+
+	ret = kvm_prealloc_hwpgd(kvm, pgd);
+	if (ret)
+		goto out;
+
 	kvm_clean_pgd(pgd);
 	kvm->arch.pgd = pgd;
-
+	ret = 0;
+out:
+	if (ret)
+		free_pages((unsigned long)pgd, S2_PGD_ORDER);
 	return 0;
 }
 
@@ -572,19 +614,36 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 		return;
 
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
+	kvm_free_hwpgd(kvm);
 	free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
 	kvm->arch.pgd = NULL;
 }
 
-static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 			     phys_addr_t addr)
 {
 	pgd_t *pgd;
 	pud_t *pud;
-	pmd_t *pmd;
 
 	pgd = kvm->arch.pgd + pgd_index(addr);
-	pud = pud_offset(pgd, addr);
+	if (pgd_none(*pgd)) {
+		if (!cache)
+			return NULL;
+		pud = mmu_memory_cache_alloc(cache);
+		pgd_populate(NULL, pgd, pud);
+		get_page(virt_to_page(pgd));
+	}
+
+	return pud_offset(pgd, addr);
+}
+
+static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+			     phys_addr_t addr)
+{
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pud = stage2_get_pud(kvm, cache, addr);
 	if (pud_none(*pud)) {
 		if (!cache)
 			return NULL;
@@ -630,7 +689,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 	pmd_t *pmd;
 	pte_t *pte, old_pte;
 
-	/* Create stage-2 page table mapping - Level 1 */
+	/* Create stage-2 page table mapping - Levels 0 and 1 */
 	pmd = stage2_get_pmd(kvm, cache, addr);
 	if (!pmd) {
 		/*
@@ -688,7 +747,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
 		pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
 
-		ret = mmu_topup_memory_cache(&cache, 2, 2);
+		ret = mmu_topup_memory_cache(&cache, KVM_MMU_CACHE_MIN_PAGES,
+						KVM_MMU_CACHE_MIN_PAGES);
 		if (ret)
 			goto out;
 		spin_lock(&kvm->mmu_lock);
@@ -797,7 +857,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	up_read(&current->mm->mmap_sem);
 
 	/* We need minimum second+third level pages */
-	ret = mmu_topup_memory_cache(memcache, 2, KVM_NR_MEM_OBJS);
+	ret = mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES,
+				     KVM_NR_MEM_OBJS);
 	if (ret)
 		return ret;
 
@@ -1070,8 +1131,8 @@ int kvm_mmu_init(void)
 			 (unsigned long)phys_base);
 	}
 
-	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
-	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
+	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
+	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
 
 	if (!hyp_pgd || !boot_hyp_pgd) {
 		kvm_err("Hyp mode PGD not allocated\n");
@@ -1113,6 +1174,8 @@ int kvm_mmu_init(void)
 		goto out;
 	}
 
+	kvm_prealloc_levels = KVM_PREALLOC_LEVELS;
+
 	return 0;
 out:
 	free_hyp_pgds();
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fd4e81a..0f3e0a9 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -200,7 +200,6 @@ config ARM64_VA_BITS_42
 
 config ARM64_VA_BITS_48
 	bool "48-bit"
-	depends on BROKEN
 
 endchoice
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a030d16..313c6f9 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -41,6 +41,18 @@
  */
 #define TRAMPOLINE_VA		(HYP_PAGE_OFFSET_MASK & PAGE_MASK)
 
+/*
+ * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation
+ * levels in addition to the PGD and potentially the PUD which are
+ * pre-allocated (we pre-allocate the fake PGD and the PUD when the Stage-2
+ * tables use one level of tables less than the kernel.
+ */
+#ifdef CONFIG_ARM64_64K_PAGES
+#define KVM_MMU_CACHE_MIN_PAGES	1
+#else
+#define KVM_MMU_CACHE_MIN_PAGES	2
+#endif
+
 #ifdef __ASSEMBLY__
 
 /*
@@ -53,6 +65,7 @@
 
 #else
 
+#include <asm/pgalloc.h>
 #include <asm/cachetype.h>
 #include <asm/cacheflush.h>
 
@@ -65,10 +78,6 @@
 #define KVM_PHYS_SIZE	(1UL << KVM_PHYS_SHIFT)
 #define KVM_PHYS_MASK	(KVM_PHYS_SIZE - 1UL)
 
-/* Make sure we get the right size, and thus the right alignment */
-#define PTRS_PER_S2_PGD (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
-#define S2_PGD_ORDER	get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
-
 int create_hyp_mappings(void *from, void *to);
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
@@ -93,6 +102,7 @@ void kvm_clear_hyp_idmap(void);
 #define	kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
 
 static inline void kvm_clean_pgd(pgd_t *pgd) {}
+static inline void kvm_clean_pmd(pmd_t *pmd) {}
 static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
 static inline void kvm_clean_pte(pte_t *pte) {}
 static inline void kvm_clean_pte_entry(pte_t *pte) {}
@@ -118,13 +128,123 @@ static inline bool kvm_page_empty(void *ptr)
 }
 
 #define kvm_pte_table_empty(ptep) kvm_page_empty(ptep)
-#ifndef CONFIG_ARM64_64K_PAGES
-#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
-#else
+#if CONFIG_ARM64_PGTABLE_LEVELS == 2
 #define kvm_pmd_table_empty(pmdp) (0)
-#endif
 #define kvm_pud_table_empty(pudp) (0)
+#elif CONFIG_ARM64_PGTABLE_LEVELS == 3
+#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
+#define kvm_pud_table_empty(pudp) (0)
+#elif CONFIG_ARM64_PGTABLE_LEVELS == 4
+#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
+#define kvm_pud_table_empty(pudp) kvm_page_empty(pudp)
+#endif
+
+/**
+ * kvm_prealloc_hwpgd - allocate inital table for VTTBR
+ * @kvm:	The KVM struct pointer for the VM.
+ * @pgd:	The kernel pseudo pgd
+ *
+ * When the kernel uses more levels of page tables than the guest, we allocate
+ * a fake PGD and pre-populate it to point to the next-level page table, which
+ * will be the real initial page table pointed to by the VTTBR.
+ *
+ * When KVM_PREALLOC_PMD is defined, we allocate a single page for the PMD and
+ * the kernel will use folded pud.  When KVM_PREALLOC_PUD is defined, we
+ * allocate 2 consecutive PUD pages.
+ */
+#if defined(CONFIG_ARM64_64K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 3
+#define KVM_PREALLOC_LEVELS	2
+#define PTRS_PER_S2_PGD		1
+#define S2_PGD_ORDER		get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
+
+
+static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
+{
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pud = pud_offset(pgd, 0);
+	pmd = (pmd_t *)__get_free_pages(GFP_KERNEL, 0);
+
+	if (!pmd)
+		return -ENOMEM;
+	memset(pmd, 0, PAGE_SIZE);
+	pud_populate(NULL, pud, pmd);
+	get_page(virt_to_page(pud));
+
+	return 0;
+}
 
+static inline void kvm_free_hwpgd(struct kvm *kvm)
+{
+	pgd_t *pgd = kvm->arch.pgd;
+	pud_t *pud = pud_offset(pgd, 0);
+	pmd_t *pmd = pmd_offset(pud, 0);
+	free_pages((unsigned long)pmd, 0);
+	put_page(virt_to_page(pud));
+}
+
+static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
+{
+	pgd_t *pgd = kvm->arch.pgd;
+	pud_t *pud = pud_offset(pgd, 0);
+	pmd_t *pmd = pmd_offset(pud, 0);
+	return virt_to_phys(pmd);
+
+}
+#elif defined(CONFIG_ARM64_4K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 4
+#define KVM_PREALLOC_LEVELS	1
+#define PTRS_PER_S2_PGD		2
+#define S2_PGD_ORDER		get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
+
+static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
+{
+	pud_t *pud;
+
+	pud = (pud_t *)__get_free_pages(GFP_KERNEL, 1);
+	if (!pud)
+		return -ENOMEM;
+	memset(pud, 0, 2 * PAGE_SIZE);
+	pgd_populate(NULL, pgd, pud);
+	pgd_populate(NULL, pgd + 1, pud + PTRS_PER_PUD);
+	get_page(virt_to_page(pgd));
+	get_page(virt_to_page(pgd));
+
+	return 0;
+}
+
+static inline void kvm_free_hwpgd(struct kvm *kvm)
+{
+	pgd_t *pgd = kvm->arch.pgd;
+	pud_t *pud = pud_offset(pgd, 0);
+	free_pages((unsigned long)pud, 1);
+	put_page(virt_to_page(pgd));
+	put_page(virt_to_page(pgd));
+}
+
+static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
+{
+	pgd_t *pgd = kvm->arch.pgd;
+	pud_t *pud = pud_offset(pgd, 0);
+	return virt_to_phys(pud);
+}
+#else
+#define KVM_PREALLOC_LEVELS	0
+#define PTRS_PER_S2_PGD		(1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
+#define S2_PGD_ORDER		get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
+
+static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
+{
+	return 0;
+}
+
+static inline void kvm_free_hwpgd(struct kvm *kvm) { }
+
+static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
+{
+	return virt_to_phys(kvm->arch.pgd);
+}
+#endif
 
 struct kvm;
 
-- 
2.0.0



* [PATCH 2/2] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
  2014-09-25 19:42 ` Christoffer Dall
@ 2014-09-25 19:42   ` Christoffer Dall
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-09-25 19:42 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm; +Cc: kvm, Christoffer Dall

When creating or moving a memslot, make sure the IPA space is within the
addressable range of the guest.  Otherwise, user space can create too
large a memslot and KVM would try to access potentially unallocated page
table entries when inserting entries in the Stage-2 page tables.
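
(As an example, with the default KVM_PHYS_SHIFT of 40 and 4K pages, the check
below amounts to requiring

	base_gfn + npages  <  KVM_PHYS_SIZE >> PAGE_SHIFT  =  (1UL << 40) >> 12  =  1 << 28

guest frames; the numbers are only illustrative.)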

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/kvm/mmu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 4532f5f..52a311a 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -975,6 +975,9 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		goto out_unlock;
 	}
 
+	/* Userspace should not be able to register out-of-bounds IPAs */
+	VM_BUG_ON(fault_ipa >= KVM_PHYS_SIZE);
+
 	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
 	if (ret == 0)
 		ret = 1;
@@ -1201,6 +1204,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
+	if (change == KVM_MR_CREATE || change == KVM_MR_MOVE) {
+		if (memslot->base_gfn + memslot->npages >=
+		    (KVM_PHYS_SIZE >> PAGE_SHIFT))
+			return -EFAULT;
+	}
 	return 0;
 }
 
-- 
2.0.0



* Re: [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
  2014-09-25 19:42   ` Christoffer Dall
@ 2014-09-26 14:08     ` Jungseok Lee
  -1 siblings, 0 replies; 22+ messages in thread
From: Jungseok Lee @ 2014-09-26 14:08 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: linux-arm-kernel, kvmarm, kvm, Marc Zyngier, Catalin Marinas

On Sep 26, 2014, at 4:42 AM, Christoffer Dall wrote:

Hi, Christoffer

> This patch adds the necessary support for all host kernel PGSIZE and
> VA_SPACE configuration options for both EL2 and the Stage-2 page tables.
> 
> However, for 40bit and 42bit PARange systems, the architecture mandates
> that VTCR_EL2.SL0 is maximum 1, resulting in fewer levels of stage-2
> pagge tables than levels of host kernel page tables.  At the same time,
> systems with a PARange > 42bit, we limit the IPA range by always setting
> VTCR_EL2.T0SZ to 24.
> 
> To solve the situation with different levels of page tables for Stage-2
> translation than the host kernel page tables, we allocate a dummy PGD
> with pointers to our actual inital level Stage-2 page table, in order
> for us to reuse the kernel pgtable manipulation primitives.  Reproducing
> all these in KVM does not look pretty and unnecessarily complicates the
> 32-bit side.

This is very similar to an idea suggested by Steve Capper to deal with the
swapper table when I worked on the 4-level solution. I believe the dummy PGD
approach can handle all the combinations described in the ARMv8 architecture.

> Systems with a PARange < 40bits are not yet supported.
> 
> [ I have reworked this patch from its original form submitted by
>   Jungseok to take the architecture constraints into consideration.
>   There were too many changes from the original patch for me to
>   preserve the authorship.  Thanks to Catalin Marinas for his help in
>   figuring out a good solution to this challenge.  I have also fixed
>   various bugs and missing error code handling from the original
>   patch. - Christoffer ]

No doubt. This is yours :)
Thank you very much for picking up the patch and writing this elegant solution.

At first glance, it looks good apart from some unnecessary newlines.
I will leave comments after reading the patches carefully.

Best Regards
Jungseok Lee


* Re: [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
  2014-09-25 19:42   ` Christoffer Dall
@ 2014-09-30 12:39     ` Catalin Marinas
  -1 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2014-09-30 12:39 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: linux-arm-kernel, kvmarm, kvm, Marc Zyngier, Jungseok Lee

Hi Christoffer,

On Thu, Sep 25, 2014 at 08:42:53PM +0100, Christoffer Dall wrote:
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 7796051..048f37f 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -409,7 +409,7 @@ static void update_vttbr(struct kvm *kvm)
>         kvm_next_vmid++;
>
>         /* update vttbr to be used with the new vmid */
> -       pgd_phys = virt_to_phys(kvm->arch.pgd);
> +       pgd_phys = kvm_get_hwpgd(kvm);
>         BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
>         vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
>         kvm->arch.vttbr = pgd_phys | vmid;
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index bb06f76..4532f5f 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -33,6 +33,7 @@
>
>  extern char  __hyp_idmap_text_start[], __hyp_idmap_text_end[];
>
> +static int kvm_prealloc_levels;

Do you have plans to determine this dynamically? In this patch, I can
only see it being assigned to KVM_PREALLOC_LEVELS. I guess you could
simply use the macro.

> @@ -521,6 +554,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
>   */
>  int kvm_alloc_stage2_pgd(struct kvm *kvm)
>  {
> +       int ret;
>         pgd_t *pgd;
>
>         if (kvm->arch.pgd != NULL) {
> @@ -533,9 +567,17 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
>                 return -ENOMEM;
>
>         memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t));

I don't think you need to allocate a full page here (not shown here in
the diff context) but it may not be trivial since you use get_page() to
keep track of when page tables could be freed (do you do this for the
pgd as well?).

> +
> +       ret = kvm_prealloc_hwpgd(kvm, pgd);
> +       if (ret)
> +               goto out;
> +
>         kvm_clean_pgd(pgd);
>         kvm->arch.pgd = pgd;
> -
> +       ret = 0;
> +out:
> +       if (ret)
> +               free_pages((unsigned long)pgd, S2_PGD_ORDER);
>         return 0;
>  }

Does this always return 0, even in case of error? Would it look better
as:

        ret = kvm_prealloc_hwpgd(kvm, pgd);
        if (ret)
                goto out;

        kvm_clean_pgd(pgd);
        kvm->arch.pgd = pgd;

        return 0;

out:
        free_pages((unsigned long)pgd, S2_PGD_ORDER);
        return ret;

> @@ -572,19 +614,36 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
>                 return;
>
>         unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> +       kvm_free_hwpgd(kvm);
>         free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
>         kvm->arch.pgd = NULL;
>  }
>
> -static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> +static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>                              phys_addr_t addr)
>  {
>         pgd_t *pgd;
>         pud_t *pud;
> -       pmd_t *pmd;
>
>         pgd = kvm->arch.pgd + pgd_index(addr);
> -       pud = pud_offset(pgd, addr);
> +       if (pgd_none(*pgd)) {
> +               if (!cache)
> +                       return NULL;
> +               pud = mmu_memory_cache_alloc(cache);
> +               pgd_populate(NULL, pgd, pud);
> +               get_page(virt_to_page(pgd));
> +       }
> +
> +       return pud_offset(pgd, addr);
> +}

This function looks correct, though we would not expect pgd_none() to
ever be true since we pre-populate it. You could put a WARN_ON or something
there until we actually have hardware 4-level stage 2 (though I don't see a
need yet).
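
(For illustration, something along these lines is what I have in mind:

	pgd = kvm->arch.pgd + pgd_index(addr);
	/* the dummy pgd is pre-populated, so this should never fire */
	WARN_ON(pgd_none(*pgd));

only a sketch, of course.)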

> +static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> +                            phys_addr_t addr)
> +{
> +       pud_t *pud;
> +       pmd_t *pmd;
> +
> +       pud = stage2_get_pud(kvm, cache, addr);
>         if (pud_none(*pud)) {
>                 if (!cache)
>                         return NULL;
> @@ -630,7 +689,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>         pmd_t *pmd;
>         pte_t *pte, old_pte;
>
> -       /* Create stage-2 page table mapping - Level 1 */
> +       /* Create stage-2 page table mapping - Levels 0 and 1 */
>         pmd = stage2_get_pmd(kvm, cache, addr);
>         if (!pmd) {
>                 /*
> @@ -688,7 +747,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>         for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
>                 pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
>
> -               ret = mmu_topup_memory_cache(&cache, 2, 2);
> +               ret = mmu_topup_memory_cache(&cache, KVM_MMU_CACHE_MIN_PAGES,
> +                                               KVM_MMU_CACHE_MIN_PAGES);

BTW, why do you need to preallocate the pages in KVM's own cache? Would
GFP_ATOMIC not work within the kvm->mmu_lock region?

> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index fd4e81a..0f3e0a9 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -200,7 +200,6 @@ config ARM64_VA_BITS_42
>
>  config ARM64_VA_BITS_48
>         bool "48-bit"
> -       depends on BROKEN

I'm ok with removing BROKEN here, but I think at the same time we need to
disable the SMMU driver, which has similar issues with (not) supporting
4-level page tables for stage 2 (stage 1 should be fine).

> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index a030d16..313c6f9 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
[...]
> @@ -118,13 +128,123 @@ static inline bool kvm_page_empty(void *ptr)
>  }
>
>  #define kvm_pte_table_empty(ptep) kvm_page_empty(ptep)
> -#ifndef CONFIG_ARM64_64K_PAGES
> -#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
> -#else
> +#if CONFIG_ARM64_PGTABLE_LEVELS == 2
>  #define kvm_pmd_table_empty(pmdp) (0)
> -#endif
>  #define kvm_pud_table_empty(pudp) (0)
> +#elif CONFIG_ARM64_PGTABLE_LEVELS == 3
> +#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
> +#define kvm_pud_table_empty(pudp) (0)
> +#elif CONFIG_ARM64_PGTABLE_LEVELS == 4
> +#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
> +#define kvm_pud_table_empty(pudp) kvm_page_empty(pudp)
> +#endif

Not sure whether it's clearer as (up to you):

#ifdef __PAGETABLE_PMD_FOLDED
#define kvm_pmd_table_empty(pmdp)       (0)
#else
#define kvm_pmd_table_empty(pmdp)       kvm_page_empty(pmdp)
#endif

#ifdef __PAGETABLE_PUD_FOLDED
#define kvm_pud_table_empty(pudp)       (0)
#else
#define kvm_pud_table_empty(pudp)       kvm_page_empty(pudp)
#endif


> +
> +/**
> + * kvm_prealloc_hwpgd - allocate inital table for VTTBR
> + * @kvm:       The KVM struct pointer for the VM.
> + * @pgd:       The kernel pseudo pgd

"hwpgd" doesn't sound like the right name since it's actually a fake pgd
that it is allocating. Maybe "swpgd"?

> + *
> + * When the kernel uses more levels of page tables than the guest, we allocate
> + * a fake PGD and pre-populate it to point to the next-level page table, which
> + * will be the real initial page table pointed to by the VTTBR.
> + *
> + * When KVM_PREALLOC_PMD is defined, we allocate a single page for the PMD and
> + * the kernel will use folded pud.  When KVM_PREALLOC_PUD is defined, we
> + * allocate 2 consecutive PUD pages.
> + */
> +#if defined(CONFIG_ARM64_64K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 3
> +#define KVM_PREALLOC_LEVELS    2
> +#define PTRS_PER_S2_PGD                1
> +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> +
> +
> +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> +{
> +       pud_t *pud;
> +       pmd_t *pmd;
> +
> +       pud = pud_offset(pgd, 0);
> +       pmd = (pmd_t *)__get_free_pages(GFP_KERNEL, 0);
> +
> +       if (!pmd)
> +               return -ENOMEM;
> +       memset(pmd, 0, PAGE_SIZE);

You could use __GFP_ZERO.
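
(i.e. something like:

	pmd = (pmd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 0);

and then the memset can go away.)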

> +       pud_populate(NULL, pud, pmd);
> +       get_page(virt_to_page(pud));
> +
> +       return 0;
> +}
>
> +static inline void kvm_free_hwpgd(struct kvm *kvm)
> +{
> +       pgd_t *pgd = kvm->arch.pgd;
> +       pud_t *pud = pud_offset(pgd, 0);
> +       pmd_t *pmd = pmd_offset(pud, 0);
> +       free_pages((unsigned long)pmd, 0);
> +       put_page(virt_to_page(pud));
> +}
> +
> +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> +{
> +       pgd_t *pgd = kvm->arch.pgd;
> +       pud_t *pud = pud_offset(pgd, 0);
> +       pmd_t *pmd = pmd_offset(pud, 0);
> +       return virt_to_phys(pmd);
> +
> +}
> +#elif defined(CONFIG_ARM64_4K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 4
> +#define KVM_PREALLOC_LEVELS    1
> +#define PTRS_PER_S2_PGD                2
> +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> +
> +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> +{
> +       pud_t *pud;
> +
> +       pud = (pud_t *)__get_free_pages(GFP_KERNEL, 1);
> +       if (!pud)
> +               return -ENOMEM;
> +       memset(pud, 0, 2 * PAGE_SIZE);
> +       pgd_populate(NULL, pgd, pud);
> +       pgd_populate(NULL, pgd + 1, pud + PTRS_PER_PUD);
> +       get_page(virt_to_page(pgd));
> +       get_page(virt_to_page(pgd));
> +
> +       return 0;
> +}
> +
> +static inline void kvm_free_hwpgd(struct kvm *kvm)
> +{
> +       pgd_t *pgd = kvm->arch.pgd;
> +       pud_t *pud = pud_offset(pgd, 0);
> +       free_pages((unsigned long)pud, 1);
> +       put_page(virt_to_page(pgd));
> +       put_page(virt_to_page(pgd));
> +}
> +
> +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> +{
> +       pgd_t *pgd = kvm->arch.pgd;
> +       pud_t *pud = pud_offset(pgd, 0);
> +       return virt_to_phys(pud);
> +}
> +#else
> +#define KVM_PREALLOC_LEVELS    0
> +#define PTRS_PER_S2_PGD                (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
> +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> +
> +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> +{
> +       return 0;
> +}
> +
> +static inline void kvm_free_hwpgd(struct kvm *kvm) { }
> +
> +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> +{
> +       return virt_to_phys(kvm->arch.pgd);
> +}
> +#endif

I think all the above could be squashed into a single set of function
implementations with the right macros defined before.

For the KVM levels, you could use something like (we exchanged emails in
private on this topic and how I got to these macros but they need better
checking):

#define KVM_PGTABLE_LEVELS      ((KVM_PHYS_SHIFT - PAGE_SHIFT - 5) / (PAGE_SHIFT - 3) + 1)
#define KVM_PREALLOC_LEVELS     (CONFIG_ARM64_PGTABLE_LEVELS - KVM_PGTABLE_LEVELS)
#if PGDIR_SHIFT > KVM_PHYS_SHIFT
#define PTRS_PER_S2_PGD		(1)
#else
#define PTRS_PER_S2_PGD		(1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
#endif
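
(As a rough sanity check -- my arithmetic, worth double-checking -- with
KVM_PHYS_SHIFT = 40 these give:

	4K pages:  KVM_PGTABLE_LEVELS = (40 - 12 - 5) / (12 - 3) + 1 = 3
	64K pages: KVM_PGTABLE_LEVELS = (40 - 16 - 5) / (16 - 3) + 1 = 2

and PTRS_PER_S2_PGD = 1 for 64K/3-level (PGDIR_SHIFT = 42 > 40) versus
1 << (40 - 39) = 2 for 4K/4-level, matching the values in your patch.)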

--
Catalin

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/2] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
  2014-09-25 19:42   ` Christoffer Dall
@ 2014-09-30 12:46     ` Catalin Marinas
  -1 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2014-09-30 12:46 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: linux-arm-kernel, kvmarm, kvm

On Thu, Sep 25, 2014 at 08:42:54PM +0100, Christoffer Dall wrote:
> When creating or moving a memslot, make sure the IPA space is within the
> addressable range of the guest.  Otherwise, user space can create too
> large a memslot and KVM would try to access potentially unallocated page
> table entries when inserting entries in the Stage-2 page tables.
> 
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> ---
>  arch/arm/kvm/mmu.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 4532f5f..52a311a 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -975,6 +975,9 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		goto out_unlock;
>  	}
>  
> +	/* Userspace should not be able to register out-of-bounds IPAs */

I think "userspace" is a bit misleading (should be "guests").

> +	VM_BUG_ON(fault_ipa >= KVM_PHYS_SIZE);

Can guests not generate IPA addresses higher than KVM_PHYS_SIZE? I don't
see why this wouldn't be possible when PARange > 40.

-- 
Catalin

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
  2014-09-30 12:39     ` Catalin Marinas
@ 2014-10-06 13:41       ` Christoffer Dall
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-10-06 13:41 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Marc Zyngier, Jungseok Lee, kvmarm, linux-arm-kernel, kvm

Hi Catalin,

On Tue, Sep 30, 2014 at 01:39:47PM +0100, Catalin Marinas wrote:
> Hi Christoffer,
> 
> On Thu, Sep 25, 2014 at 08:42:53PM +0100, Christoffer Dall wrote:
> > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > index 7796051..048f37f 100644
> > --- a/arch/arm/kvm/arm.c
> > +++ b/arch/arm/kvm/arm.c
> > @@ -409,7 +409,7 @@ static void update_vttbr(struct kvm *kvm)
> >         kvm_next_vmid++;
> >
> >         /* update vttbr to be used with the new vmid */
> > -       pgd_phys = virt_to_phys(kvm->arch.pgd);
> > +       pgd_phys = kvm_get_hwpgd(kvm);
> >         BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
> >         vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
> >         kvm->arch.vttbr = pgd_phys | vmid;
> > diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> > index bb06f76..4532f5f 100644
> > --- a/arch/arm/kvm/mmu.c
> > +++ b/arch/arm/kvm/mmu.c
> > @@ -33,6 +33,7 @@
> >
> >  extern char  __hyp_idmap_text_start[], __hyp_idmap_text_end[];
> >
> > +static int kvm_prealloc_levels;
> 
> Do you have plans to determine this dynamically? In this patch, I can
> only see it being assigned to KVM_PREALLOC_LEVELS. I guess you could
> simply use the macro.
> 

Yes, I could.  Not really sure why I did this.

> > @@ -521,6 +554,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
> >   */
> >  int kvm_alloc_stage2_pgd(struct kvm *kvm)
> >  {
> > +       int ret;
> >         pgd_t *pgd;
> >
> >         if (kvm->arch.pgd != NULL) {
> > @@ -533,9 +567,17 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
> >                 return -ENOMEM;
> >
> >         memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t));
> 
> I don't think you need to allocate a full page here (not shown here in
> the diff context) but it may not be trivial since you use get_page() to
> keep track of when page tables could be freed (do you do this for the
> pgd as well?).
> 

The code becomes a bit weird, because we'd have to check whether this is
really just the fake pseudo-pgd, in which case we can just use kmalloc();
if it's not, we have to use __get_free_pages() to adhere to the
architecture's alignment requirements for the VTTBR physical address.

I'll try to fix this in the next version and we can see how it looks.

As far as the get_page() counting goes, since we always fully populate
this first-level pseudo table, we never mess with the page counts except
in kvm_prealloc_hwpgd() and kvm_free_hwpgd(), so we can just take all
the get/put_page stuff out of those and we should be in the clear.
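
Something like this (completely untested, and folding in the __GFP_ZERO
suggestion from further down) for the 4K/4-level case:

static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
{
	pud_t *pud;

	pud = (pud_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
	if (!pud)
		return -ENOMEM;
	pgd_populate(NULL, pgd, pud);
	pgd_populate(NULL, pgd + 1, pud + PTRS_PER_PUD);

	return 0;
}

static inline void kvm_free_hwpgd(struct kvm *kvm)
{
	free_pages((unsigned long)pud_offset(kvm->arch.pgd, 0), 1);
}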

> > +
> > +       ret = kvm_prealloc_hwpgd(kvm, pgd);
> > +       if (ret)
> > +               goto out;
> > +
> >         kvm_clean_pgd(pgd);
> >         kvm->arch.pgd = pgd;
> > -
> > +       ret = 0;
> > +out:
> > +       if (ret)
> > +               free_pages((unsigned long)pgd, S2_PGD_ORDER);
> >         return 0;
> >  }
> 
> Does this always return 0, even in case of error?

Me being braindead, sorry.

> Does it look better
> as:
> 
>         ret = kvm_prealloc_hwpgd(kvm, pgd);
>         if (ret)
>                 goto out;
> 
>         kvm_clean_pgd(pgd);
>         kvm->arch.pgd = pgd;
> 
>         return 0;
> 
> out:
>         free_pages((unsigned long)pgd, S2_PGD_ORDER);
>         return ret;
> 

Probably looks better; I'd rename out to out_err, but yes.
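
The tail of kvm_alloc_stage2_pgd() would then look something like:

	ret = kvm_prealloc_hwpgd(kvm, pgd);
	if (ret)
		goto out_err;

	kvm_clean_pgd(pgd);
	kvm->arch.pgd = pgd;

	return 0;

out_err:
	free_pages((unsigned long)pgd, S2_PGD_ORDER);
	return ret;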

> > @@ -572,19 +614,36 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
> >                 return;
> >
> >         unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> > +       kvm_free_hwpgd(kvm);
> >         free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
> >         kvm->arch.pgd = NULL;
> >  }
> >
> > -static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> > +static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> >                              phys_addr_t addr)
> >  {
> >         pgd_t *pgd;
> >         pud_t *pud;
> > -       pmd_t *pmd;
> >
> >         pgd = kvm->arch.pgd + pgd_index(addr);
> > -       pud = pud_offset(pgd, addr);
> > +       if (pgd_none(*pgd)) {
> > +               if (!cache)
> > +                       return NULL;
> > +               pud = mmu_memory_cache_alloc(cache);
> > +               pgd_populate(NULL, pgd, pud);
> > +               get_page(virt_to_page(pgd));
> > +       }
> > +
> > +       return pud_offset(pgd, addr);
> > +}
> 
> This function looks correct, though we would not expect pgd_none() to
> ever be true as we pre-populate it.
> You could put a WARN_ON or something
> until we actually have hardware 4-levels stage 2 (though I don't see a
> need yet).
> 

pgd_none will never be true, because on both aarch64 and arm we fold the
pud into the pgd (and include pgtable-nopud.h) if we have less than 4
levels, and if we do have 4 levels, then we have preallocated and
prepopulated the pgd.  If I got this right, I agree, and we should add
the WARN_ON or VM_BUG_ON or something.
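
Concretely, stage2_get_pud() would then end up roughly as (sketch only,
untested):

static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
			     phys_addr_t addr)
{
	pgd_t *pgd;
	pud_t *pud;

	pgd = kvm->arch.pgd + pgd_index(addr);
	if (WARN_ON(pgd_none(*pgd))) {
		if (!cache)
			return NULL;
		pud = mmu_memory_cache_alloc(cache);
		pgd_populate(NULL, pgd, pud);
		get_page(virt_to_page(pgd));
	}

	return pud_offset(pgd, addr);
}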

> > +static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> > +                            phys_addr_t addr)
> > +{
> > +       pud_t *pud;
> > +       pmd_t *pmd;
> > +
> > +       pud = stage2_get_pud(kvm, cache, addr);
> >         if (pud_none(*pud)) {
> >                 if (!cache)
> >                         return NULL;
> > @@ -630,7 +689,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> >         pmd_t *pmd;
> >         pte_t *pte, old_pte;
> >
> > -       /* Create stage-2 page table mapping - Level 1 */
> > +       /* Create stage-2 page table mapping - Levels 0 and 1 */
> >         pmd = stage2_get_pmd(kvm, cache, addr);
> >         if (!pmd) {
> >                 /*
> > @@ -688,7 +747,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> >         for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
> >                 pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
> >
> > -               ret = mmu_topup_memory_cache(&cache, 2, 2);
> > +               ret = mmu_topup_memory_cache(&cache, KVM_MMU_CACHE_MIN_PAGES,
> > +                                               KVM_MMU_CACHE_MIN_PAGES);
> 
> BTW, why do you need to preallocate the pages in KVM's own cache? Would
> GFP_ATOMIC not work within the kvm->mmu_lock region?
> 

It would, but it doesn't really seem like the sort of thing you would
want to tap into the GFP_ATOMIC pool for, would it?  A bunch of VMs
could seriously trash your host by asking for a lot of address space.

I think that was the rationale for doing things this way on x86 and it
made sense to me...
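
For reference, the pattern is basically the following (a sketch, not the
exact patch code):

	/* outside the lock: may sleep and can fail cleanly with -ENOMEM */
	ret = mmu_topup_memory_cache(&cache, KVM_MMU_CACHE_MIN_PAGES,
				     KVM_MMU_CACHE_MIN_PAGES);
	if (ret)
		return ret;

	spin_lock(&kvm->mmu_lock);
	/* inside the lock: new table pages only come from the pre-filled cache */
	ret = stage2_set_pte(kvm, &cache, addr, &pte, true);
	spin_unlock(&kvm->mmu_lock);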

> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index fd4e81a..0f3e0a9 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -200,7 +200,6 @@ config ARM64_VA_BITS_42
> >
> >  config ARM64_VA_BITS_48
> >         bool "48-bit"
> > -       depends on BROKEN
> 
> I'm ok with removing BROKEN here but I think at the same time we need to
> disable the SMMU driver which has similar issues with (not) supporting
> 4-levels page tables for stage 2 (stage 1 should be fine).

Should that be in separate patches, or is it fine to just include it
here?

> 
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index a030d16..313c6f9 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> [...]
> > @@ -118,13 +128,123 @@ static inline bool kvm_page_empty(void *ptr)
> >  }
> >
> >  #define kvm_pte_table_empty(ptep) kvm_page_empty(ptep)
> > -#ifndef CONFIG_ARM64_64K_PAGES
> > -#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
> > -#else
> > +#if CONFIG_ARM64_PGTABLE_LEVELS == 2
> >  #define kvm_pmd_table_empty(pmdp) (0)
> > -#endif
> >  #define kvm_pud_table_empty(pudp) (0)
> > +#elif CONFIG_ARM64_PGTABLE_LEVELS == 3
> > +#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
> > +#define kvm_pud_table_empty(pudp) (0)
> > +#elif CONFIG_ARM64_PGTABLE_LEVELS == 4
> > +#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
> > +#define kvm_pud_table_empty(pudp) kvm_page_empty(pudp)
> > +#endif
> 
> Not sure whether it's clearer as (up to you):
> 
> #ifdef __PAGETABLE_PMD_FOLDED
> #define kvm_pmd_table_empty(pmdp)       (0)
> #else
> #define kvm_pmd_table_empty(pmdp)       kvm_page_empty(pmdp)
> #endif
> 
> #ifdef __PAGETABLE_PUD_FOLDED
> #define kvm_pud_table_empty(pudp)       (0)
> #else
> #define kvm_pud_table_empty(pudp)       kvm_page_empty(pudp)
> #endif
> 
> 

I think it probably is more clear.

> > +
> > +/**
> > > + * kvm_prealloc_hwpgd - allocate initial table for VTTBR
> > + * @kvm:       The KVM struct pointer for the VM.
> > + * @pgd:       The kernel pseudo pgd
> 
> "hwpgd" doesn't sound like the right name since it's actually a fake pgd
> that it is allocating. Maybe "swpgd"?
> 

The fake pgd was already allocated in kvm_alloc_stage2_pgd(); now we're
allocating the first-level page table that is going to be walked by the
MMU for Stage-2 translations (the address of which will go into the
VTTBR), so that's the reasoning for calling it the hwpgd.
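
Schematically, taking the 4K/4-level host configuration as the example:

	kvm->arch.pgd  (fake pgd, only ever walked by software)
	      |  pre-populated once by kvm_prealloc_hwpgd()
	      v
	two concatenated pud pages = the "hwpgd", the first level the MMU
	walks for Stage-2; kvm_get_hwpgd() returns its physical address,
	which is what goes into the VTTBR.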

Does that make sense?

> > + *
> > + * When the kernel uses more levels of page tables than the guest, we allocate
> > + * a fake PGD and pre-populate it to point to the next-level page table, which
> > + * will be the real initial page table pointed to by the VTTBR.
> > + *
> > + * When KVM_PREALLOC_PMD is defined, we allocate a single page for the PMD and
> > + * the kernel will use folded pud.  When KVM_PREALLOC_PUD is defined, we
> > + * allocate 2 consecutive PUD pages.
> > + */
> > +#if defined(CONFIG_ARM64_64K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 3
> > +#define KVM_PREALLOC_LEVELS    2
> > +#define PTRS_PER_S2_PGD                1
> > +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> > +
> > +
> > +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> > +{
> > +       pud_t *pud;
> > +       pmd_t *pmd;
> > +
> > +       pud = pud_offset(pgd, 0);
> > +       pmd = (pmd_t *)__get_free_pages(GFP_KERNEL, 0);
> > +
> > +       if (!pmd)
> > +               return -ENOMEM;
> > +       memset(pmd, 0, PAGE_SIZE);
> 
> You could use __GFP_ZERO.
> 

Yes, Ard also commented on that; we should do that.

> > +       pud_populate(NULL, pud, pmd);
> > +       get_page(virt_to_page(pud));
> > +
> > +       return 0;
> > +}
> >
> > +static inline void kvm_free_hwpgd(struct kvm *kvm)
> > +{
> > +       pgd_t *pgd = kvm->arch.pgd;
> > +       pud_t *pud = pud_offset(pgd, 0);
> > +       pmd_t *pmd = pmd_offset(pud, 0);
> > +       free_pages((unsigned long)pmd, 0);
> > +       put_page(virt_to_page(pud));
> > +}
> > +
> > +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> > +{
> > +       pgd_t *pgd = kvm->arch.pgd;
> > +       pud_t *pud = pud_offset(pgd, 0);
> > +       pmd_t *pmd = pmd_offset(pud, 0);
> > +       return virt_to_phys(pmd);
> > +
> > +}
> > +#elif defined(CONFIG_ARM64_4K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 4
> > +#define KVM_PREALLOC_LEVELS    1
> > +#define PTRS_PER_S2_PGD                2
> > +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> > +
> > +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> > +{
> > +       pud_t *pud;
> > +
> > +       pud = (pud_t *)__get_free_pages(GFP_KERNEL, 1);
> > +       if (!pud)
> > +               return -ENOMEM;
> > +       memset(pud, 0, 2 * PAGE_SIZE);
> > +       pgd_populate(NULL, pgd, pud);
> > +       pgd_populate(NULL, pgd + 1, pud + PTRS_PER_PUD);
> > +       get_page(virt_to_page(pgd));
> > +       get_page(virt_to_page(pgd));
> > +
> > +       return 0;
> > +}
> > +
> > +static inline void kvm_free_hwpgd(struct kvm *kvm)
> > +{
> > +       pgd_t *pgd = kvm->arch.pgd;
> > +       pud_t *pud = pud_offset(pgd, 0);
> > +       free_pages((unsigned long)pud, 1);
> > +       put_page(virt_to_page(pgd));
> > +       put_page(virt_to_page(pgd));
> > +}
> > +
> > +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> > +{
> > +       pgd_t *pgd = kvm->arch.pgd;
> > +       pud_t *pud = pud_offset(pgd, 0);
> > +       return virt_to_phys(pud);
> > +}
> > +#else
> > +#define KVM_PREALLOC_LEVELS    0
> > +#define PTRS_PER_S2_PGD                (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
> > +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> > +
> > +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> > +{
> > +       return 0;
> > +}
> > +
> > +static inline void kvm_free_hwpgd(struct kvm *kvm) { }
> > +
> > +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> > +{
> > +       return virt_to_phys(kvm->arch.pgd);
> > +}
> > +#endif
> 
> I think all the above could be squashed into a single set of function
> implementations with the right macros defined before.
> 
> For the KVM levels, you could use something like (we exchanged emails in
> private on this topic and how I got to these macros but they need better
> checking):
> 
> #define KVM_PGTABLE_LEVELS      ((KVM_PHYS_SHIFT - PAGE_SHIFT - 5) / (PAGE_SHIFT - 3) + 1)
> #define KVM_PREALLOC_LEVELS     (CONFIG_ARM64_PGTABLE_LEVELS - KVM_PGTABLE_LEVELS)
> #if PGDIR_SHIFT > KVM_PHYS_SHIFT
> #define PTRS_PER_S2_PGD		(1)
> #else
> #define PTRS_PER_S2_PGD		(1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
> #endif
> 

I'll revisit this, but the amount of time I needed to spend making sure
I understood the macros suggested to me that it was simpler to just
spell it out.  What do you think?  Do you strongly prefer the generic
macro definitions?

Thanks for the review!

-Christoffer

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/2] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
  2014-09-30 12:46     ` Catalin Marinas
@ 2014-10-06 13:47       ` Christoffer Dall
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-10-06 13:47 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, kvmarm, kvm

On Tue, Sep 30, 2014 at 01:46:51PM +0100, Catalin Marinas wrote:
> On Thu, Sep 25, 2014 at 08:42:54PM +0100, Christoffer Dall wrote:
> > When creating or moving a memslot, make sure the IPA space is within the
> > addressable range of the guest.  Otherwise, user space can create too
> > large a memslot and KVM would try to access potentially unallocated page
> > table entries when inserting entries in the Stage-2 page tables.
> > 
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > ---
> >  arch/arm/kvm/mmu.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> > 
> > diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> > index 4532f5f..52a311a 100644
> > --- a/arch/arm/kvm/mmu.c
> > +++ b/arch/arm/kvm/mmu.c
> > @@ -975,6 +975,9 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  		goto out_unlock;
> >  	}
> >  
> > +	/* Userspace should not be able to register out-of-bounds IPAs */
> 
> I think "userspace" is a bit misleading (should be "guests").
> 

see below

> > +	VM_BUG_ON(fault_ipa >= KVM_PHYS_SIZE);
> 
> Can guests not generate IPA addresses higher than KVM_PHYS_SIZE? I don't
> see why this wouldn't be possible when PARange > 40.
> 
Guests can, but higher in this function we resolve the gfn (IPA) to
a memslot (hva).  That only succeeds if userspace actually managed to
register a memslot with such an IPA, which we prevent in the other part
of this patch.  So if we get here, it's a bug, because we should have
entered the section above and taken the goto out_unlock.
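
The check in the other hunk is along these lines (roughly, in
kvm_arch_prepare_memory_region()):

	if (memslot->base_gfn + memslot->npages >=
	    (KVM_PHYS_SIZE >> PAGE_SHIFT))
		return -EFAULT;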

Makes sense?

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
  2014-10-06 13:41       ` Christoffer Dall
@ 2014-10-06 15:54         ` Catalin Marinas
  -1 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2014-10-06 15:54 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: linux-arm-kernel, kvmarm, kvm, Marc Zyngier, Jungseok Lee

On Mon, Oct 06, 2014 at 02:41:18PM +0100, Christoffer Dall wrote:
> On Tue, Sep 30, 2014 at 01:39:47PM +0100, Catalin Marinas wrote:
> > On Thu, Sep 25, 2014 at 08:42:53PM +0100, Christoffer Dall wrote:
> > > @@ -572,19 +614,36 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
> > >                 return;
> > >
> > >         unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> > > +       kvm_free_hwpgd(kvm);
> > >         free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
> > >         kvm->arch.pgd = NULL;
> > >  }
> > >
> > > -static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> > > +static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> > >                              phys_addr_t addr)
> > >  {
> > >         pgd_t *pgd;
> > >         pud_t *pud;
> > > -       pmd_t *pmd;
> > >
> > >         pgd = kvm->arch.pgd + pgd_index(addr);
> > > -       pud = pud_offset(pgd, addr);
> > > +       if (pgd_none(*pgd)) {
> > > +               if (!cache)
> > > +                       return NULL;
> > > +               pud = mmu_memory_cache_alloc(cache);
> > > +               pgd_populate(NULL, pgd, pud);
> > > +               get_page(virt_to_page(pgd));
> > > +       }
> > > +
> > > +       return pud_offset(pgd, addr);
> > > +}
> >
> > This function looks correct, though we would not expect pgd_none() to
> > ever be true as we pre-populate it.
> > You could put a WARN_ON or something
> > until we actually have hardware 4-levels stage 2 (though I don't see a
> > need yet).
>
> pgd_none will never be true, because on both aarch64 and arm we fold the
> pud into the pgd (and include pgtable-nopud.h) if we have less than 4
> levels, and if we do have 4 levels, then we have preallocated and
> prepopulated the pgd.  If I got this right, I agree, and we should add
> the WARN_ON or VM_BUG_ON or something.

Yes.

> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index fd4e81a..0f3e0a9 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -200,7 +200,6 @@ config ARM64_VA_BITS_42
> > >
> > >  config ARM64_VA_BITS_48
> > >         bool "48-bit"
> > > -       depends on BROKEN
> >
> > I'm ok with removing BROKEN here but I think at the same time we need to
> > disable the SMMU driver which has similar issues with (not) supporting
> > 4-levels page tables for stage 2 (stage 1 should be fine).
>
> Should that be in separate patches, or is it fine to just include it
> here?

I think a separate patch replacing BROKEN with !ARM_SMMU. But we'll
need to test the kernel a bit more to make sure there isn't anything
else that's broken before merging it.
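
Something like:

 config ARM64_VA_BITS_48
 	bool "48-bit"
-	depends on BROKEN
+	depends on !ARM_SMMU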

> > > +
> > > +/**
> > > + * kvm_prealloc_hwpgd - allocate initial table for VTTBR
> > > + * @kvm:       The KVM struct pointer for the VM.
> > > + * @pgd:       The kernel pseudo pgd
> >
> > "hwpgd" doesn't sound like the right name since it's actually a fake pgd
> > that it is allocating. Maybe "swpgd"?
>
> The fake pgd was already allocated in kvm_alloc_stage2_pgd(), now we're
> allocating the first-level page table that is going to be walked by the
> MMU for Stage-2 translations (the address of which will go into the
> VTTBR), so that's the reasoning for calling it the hwpgd.

OK, it makes sense now.

> > > +#define KVM_PREALLOC_LEVELS    0
> > > +#define PTRS_PER_S2_PGD                (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
> > > +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> > > +
> > > +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> > > +{
> > > +       return 0;
> > > +}
> > > +
> > > +static inline void kvm_free_hwpgd(struct kvm *kvm) { }
> > > +
> > > +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> > > +{
> > > +       return virt_to_phys(kvm->arch.pgd);
> > > +}
> > > +#endif
> >
> > I think all the above could be squashed into a single set of function
> > implementations with the right macros defined before.
> >
> > For the KVM levels, you could use something like (we exchanged emails in
> > private on this topic and how I got to these macros but they need better
> > checking):
> >
> > #define KVM_PGTABLE_LEVELS      ((KVM_PHYS_SHIFT - PAGE_SHIFT - 5) / (PAGE_SHIFT - 3) + 1)
> > #define KVM_PREALLOC_LEVELS     (CONFIG_ARM64_PGTABLE_LEVELS - KVM_PGTABLE_LEVELS)
> > #if PGDIR_SHIFT > KVM_PHYS_SHIFT
> > #define PTRS_PER_S2_PGD               (1)
> > #else
> > #define PTRS_PER_S2_PGD               (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
> > #endif
>
> I'll revisit this, but the amount of time I needed to spend making sure
> I understand this sort of indicated to me that it was simpler to just
> spell it out.  What do you think?  Do you strongly prefer the simple
> macro definitions?

I don't have a strong preference, but I was looking to reduce the #ifdef
complexity when we make the next change to the host page tables (16K
page support is coming as well, with 3 or 4 levels).

--
Catalin

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/2] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
  2014-10-06 13:47       ` Christoffer Dall
@ 2014-10-06 16:02         ` Catalin Marinas
  -1 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2014-10-06 16:02 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: linux-arm-kernel, kvmarm, kvm

On Mon, Oct 06, 2014 at 02:47:01PM +0100, Christoffer Dall wrote:
> On Tue, Sep 30, 2014 at 01:46:51PM +0100, Catalin Marinas wrote:
> > On Thu, Sep 25, 2014 at 08:42:54PM +0100, Christoffer Dall wrote:
> > > When creating or moving a memslot, make sure the IPA space is within the
> > > addressable range of the guest.  Otherwise, user space can create too
> > > large a memslot and KVM would try to access potentially unallocated page
> > > table entries when inserting entries in the Stage-2 page tables.
> > > 
> > > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > > ---
> > >  arch/arm/kvm/mmu.c | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > > 
> > > diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> > > index 4532f5f..52a311a 100644
> > > --- a/arch/arm/kvm/mmu.c
> > > +++ b/arch/arm/kvm/mmu.c
> > > @@ -975,6 +975,9 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > >  		goto out_unlock;
> > >  	}
> > >  
> > > +	/* Userspace should not be able to register out-of-bounds IPAs */
> > 
> > I think "userspace" is a bit misleading (should be "guests").
> 
> see below
> 
> > > +	VM_BUG_ON(fault_ipa >= KVM_PHYS_SIZE);
> > 
> > Can guests not generate IPA addresses higher than KVM_PHYS_SIZE? I don't
> > see why this wouldn't be possible when PARange > 40.
> > 
> Guests can, but higher in this function we resolve the gfn (IPA) to
> a memslot (hva).  That only succeeds if userspace actually managed to
> register a memslot with such an IPA, which we prevent in the other part
> of this patch.  So if we get here, it's a bug, because we should have
> entered the section above and taken the goto out_unlock.

OK. I got it now.

-- 
Catalin

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
  2014-10-06 15:54         ` Catalin Marinas
@ 2014-10-06 19:54           ` Christoffer Dall
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-10-06 19:54 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, kvmarm, kvm, Marc Zyngier, Jungseok Lee

On Mon, Oct 06, 2014 at 04:54:46PM +0100, Catalin Marinas wrote:
> On Mon, Oct 06, 2014 at 02:41:18PM +0100, Christoffer Dall wrote:
> > On Tue, Sep 30, 2014 at 01:39:47PM +0100, Catalin Marinas wrote:
> > > On Thu, Sep 25, 2014 at 08:42:53PM +0100, Christoffer Dall wrote:
> > > > @@ -572,19 +614,36 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
> > > >                 return;
> > > >
> > > >         unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> > > > +       kvm_free_hwpgd(kvm);
> > > >         free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
> > > >         kvm->arch.pgd = NULL;
> > > >  }
> > > >
> > > > -static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> > > > +static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> > > >                              phys_addr_t addr)
> > > >  {
> > > >         pgd_t *pgd;
> > > >         pud_t *pud;
> > > > -       pmd_t *pmd;
> > > >
> > > >         pgd = kvm->arch.pgd + pgd_index(addr);
> > > > -       pud = pud_offset(pgd, addr);
> > > > +       if (pgd_none(*pgd)) {
> > > > +               if (!cache)
> > > > +                       return NULL;
> > > > +               pud = mmu_memory_cache_alloc(cache);
> > > > +               pgd_populate(NULL, pgd, pud);
> > > > +               get_page(virt_to_page(pgd));
> > > > +       }
> > > > +
> > > > +       return pud_offset(pgd, addr);
> > > > +}
> > >
> > > This function looks correct, though we would not expect pgd_none() to
> > > ever be true as we pre-populate it.
> > > You could put a WARN_ON or something
> > > until we actually have hardware 4-levels stage 2 (though I don't see a
> > > need yet).
> >
> > pgd_none will never be true, because on both aarch64 and arm we fold the
> > pud into the pgd (and include pgtable-nopud.h) if we have less than 4
> > levels, and if we do have 4 levels, then we have preallocated and
> > prepopulated the pgd.  If I got this right, I agree, and we should add
> > the WARN_ON or VM_BUG_ON or something.
> 
> Yes.
> 
> > > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > > index fd4e81a..0f3e0a9 100644
> > > > --- a/arch/arm64/Kconfig
> > > > +++ b/arch/arm64/Kconfig
> > > > @@ -200,7 +200,6 @@ config ARM64_VA_BITS_42
> > > >
> > > >  config ARM64_VA_BITS_48
> > > >         bool "48-bit"
> > > > -       depends on BROKEN
> > >
> > > I'm ok with removing BROKEN here but I think at the same time we need to
> > > disable the SMMU driver which has similar issues with (not) supporting
> > > 4-levels page tables for stage 2 (stage 1 should be fine).
> >
> > Should that be in separate patches, or is it fine to just include it
> > here?
> 
> I think a separate patch replacing BROKEN with !ARM_SMMU. But we'll
> need to test the kernel a bit more to make sure there isn't anything
> else that's broken before merging it.
> 
> > > > +
> > > > +/**
> > > > + * kvm_prealloc_hwpgd - allocate inital table for VTTBR
> > > > + * @kvm:       The KVM struct pointer for the VM.
> > > > + * @pgd:       The kernel pseudo pgd
> > >
> > > "hwpgd" doesn't sound like the right name since it's actually a fake pgd
> > > that it is allocating. Maybe "swpgd"?
> >
> > The fake pgd was already allocated in kvm_alloc_stage2_pgd(), now we're
> > allocating the first-level page table that is going to be walked by the
> > MMU for Stage-2 translations (the address of which will go into the
> > VTTBR), so that's the reasoning for calling it the hwpgd.
> 
> OK, it makes sense now.
> 
> > > > +#define KVM_PREALLOC_LEVELS    0
> > > > +#define PTRS_PER_S2_PGD                (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
> > > > +#define S2_PGD_ORDER           get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
> > > > +
> > > > +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> > > > +{
> > > > +       return 0;
> > > > +}
> > > > +
> > > > +static inline void kvm_free_hwpgd(struct kvm *kvm) { }
> > > > +
> > > > +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm)
> > > > +{
> > > > +       return virt_to_phys(kvm->arch.pgd);
> > > > +}
> > > > +#endif
> > >
> > > I think all the above could be squashed into a single set of function
> > > implementations with the right macros defined before.
> > >
> > > For the KVM levels, you could use something like (we exchanged emails in
> > > private on this topic and how I got to these macros but they need better
> > > checking):
> > >
> > > #define KVM_PGTABLE_LEVELS      ((KVM_PHYS_SHIFT - PAGE_SHIFT - 5) / (PAGE_SHIFT - 3) + 1)
> > > #define KVM_PREALLOC_LEVELS     (CONFIG_ARM64_PGTABLE_LEVELS - KVM_PGTABLE_LEVELS)
> > > #if PGDIR_SHIFT > KVM_PHYS_SHIFT
> > > #define PTRS_PER_S2_PGD               (1)
> > > #else
> > > #define PTRS_PER_S2_PGD               (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
> > > #endif
> >
> > I'll revisit this, but the amount of time I needed to spend making sure
> > I understand this sort of indicated to me that it was simpler to just
> > spell it out.  What do you think?  Do you strongly prefer the simple
> > macro definitions?
> 
> I don't have a strong preference but I was looking to reduce the #ifdef
> complexity when we make the next change to the host page tables (16K
> pages are coming as well, with 3 or 4 levels).
> 
I looked a bit at this tonight, and I still have some reservations:

The KVM_PREALLOC_LEVELS macro doesn't seem to yield the right result, or
at least not the result we have now, because it gives us 1 in both the
64K and 4K cases.  I've renamed the symbol to KVM_PREALLOC_LEVEL in the
new version to make it slightly clearer.
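
Spelling the formula out for the larger host configurations (assuming
KVM_PHYS_SHIFT == 40 and integer division):

  /* 4K pages, 48-bit VA, 4 levels: PAGE_SHIFT = 12 */
  KVM_PGTABLE_LEVELS  = (40 - 12 - 5) / (12 - 3) + 1 = 23/9 + 1 = 3
  KVM_PREALLOC_LEVELS = 4 - 3 = 1

  /* 64K pages, 48-bit VA, 3 levels: PAGE_SHIFT = 16 */
  KVM_PGTABLE_LEVELS  = (40 - 16 - 5) / (16 - 3) + 1 = 19/13 + 1 = 2
  KVM_PREALLOC_LEVELS = 3 - 2 = 1

i.e. 1 in both cases.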

In the case where PGDIR_SHIFT > KVM_PHYS_SHIFT, we should have
PTRS_PER_S2_PGD == 2, and this now has a real consequence since we're no
longer allocating the (potentially) fake PGD on a per-page basis.

Finally, I'm a bit worried about introducing a bunch of macro logic
which is really hard to understand intuitively.  Maybe I'm
over-emphasizing this issue, but I find the KVM_PGTABLE_LEVELS
definition pretty hard to explain.

Anyway, I'm open to revisit this in v2.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2014-10-06 19:54 UTC | newest]

Thread overview: 22+ messages
2014-09-25 19:42 [PATCH 0/2] arm/arm64: KVM: Host 48-bit VA support and IPA limits Christoffer Dall
2014-09-25 19:42 ` Christoffer Dall
2014-09-25 19:42 ` [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2 Christoffer Dall
2014-09-25 19:42   ` Christoffer Dall
2014-09-26 14:08   ` Jungseok Lee
2014-09-26 14:08     ` Jungseok Lee
2014-09-30 12:39   ` Catalin Marinas
2014-09-30 12:39     ` Catalin Marinas
2014-10-06 13:41     ` Christoffer Dall
2014-10-06 13:41       ` Christoffer Dall
2014-10-06 15:54       ` Catalin Marinas
2014-10-06 15:54         ` Catalin Marinas
2014-10-06 19:54         ` Christoffer Dall
2014-10-06 19:54           ` Christoffer Dall
2014-09-25 19:42 ` [PATCH 2/2] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE Christoffer Dall
2014-09-25 19:42   ` Christoffer Dall
2014-09-30 12:46   ` Catalin Marinas
2014-09-30 12:46     ` Catalin Marinas
2014-10-06 13:47     ` Christoffer Dall
2014-10-06 13:47       ` Christoffer Dall
2014-10-06 16:02       ` Catalin Marinas
2014-10-06 16:02         ` Catalin Marinas
