* [PATCH 0/1] KVM: arm64: Skip RCU protection for hyp stage-1
@ 2022-11-14 20:11 ` Oliver Upton
  0 siblings, 0 replies; 12+ messages in thread
From: Oliver Upton @ 2022-11-14 20:11 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Oliver Upton, Marek Szyprowski

Whelp, that was quick.

Marek reports [1] that the parallel faults series leads to a kernel BUG
when initializing the hyp stage-1 page tables. Work around the issue by
never acquiring the RCU read lock when walking hyp stage-1. This is safe
because hyp stage-1 is protected by a spinlock (pKVM) or mutex (regular
nVHE).

The included patch applies to the parallel faults series. To avoid
breaking bisection, the patch should immediately precede commit
c3119ae45dfb ("KVM: arm64: Protect stage-2 traversal with RCU"). Or, if
preferred, I can respin the whole series in the correct order.

Tested with the pKVM isolated vCPU state series [2] merged on top, w/
kvm-arm.mode={nvhe,protected} on an Ampere Altra system.

Cc: Marek Szyprowski <m.szyprowski@samsung.com>

[1]: https://lore.kernel.org/kvmarm/d9854277-0411-8169-9e8b-68d15e4c0248@samsung.com/
[2]: https://lore.kernel.org/linux-arm-kernel/20221110190259.26861-1-will@kernel.org/

Oliver Upton (1):
  KVM: arm64: Use a separate function for hyp stage-1 walks

 arch/arm64/include/asm/kvm_pgtable.h | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c      |  2 +-
 arch/arm64/kvm/hyp/pgtable.c         | 18 +++++++++++++++---
 3 files changed, 40 insertions(+), 4 deletions(-)

-- 
2.38.1.431.g37b22c650d-goog


* [PATCH 1/1] KVM: arm64: Use a separate function for hyp stage-1 walks
  2022-11-14 20:11 ` Oliver Upton
@ 2022-11-14 20:11   ` Oliver Upton
  0 siblings, 0 replies; 12+ messages in thread
From: Oliver Upton @ 2022-11-14 20:11 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Oliver Upton, Catalin Marinas, Will Deacon
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, linux-kernel

A subsequent change to the page table walkers adds RCU protection for
walking stage-2 page tables. KVM uses a global lock to serialize hyp
stage-1 walks, meaning RCU protection is quite meaningless for
protecting hyp stage-1 walkers.

Add a new helper, kvm_pgtable_hyp_walk(), for use when walking hyp
stage-1 tables. Call directly into __kvm_pgtable_walk() as table
concatenation is not a supported feature at stage-1.

No functional change intended.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 arch/arm64/include/asm/kvm_pgtable.h | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c      |  2 +-
 arch/arm64/kvm/hyp/pgtable.c         | 18 +++++++++++++++---
 3 files changed, 40 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index a874ce0ce7b5..43b2f1882e11 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -596,6 +596,30 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
 int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		     struct kvm_pgtable_walker *walker);
 
+/**
+ * kvm_pgtable_hyp_walk() - Walk a hyp stage-1 page-table.
+ * @pgt:	Page-table structure initialized by kvm_pgtable_hyp_init().
+ * @addr:	Input address for the start of the walk.
+ * @size:	Size of the range to walk.
+ * @walker:	Walker callback description.
+ *
+ * The offset of @addr within a page is ignored and @size is rounded-up to
+ * the next page boundary.
+ *
+ * The walker will walk the page-table entries corresponding to the input
+ * address range specified, visiting entries according to the walker flags.
+ * Invalid entries are treated as leaf entries. Leaf entries are reloaded
+ * after invoking the walker callback, allowing the walker to descend into
+ * a newly installed table.
+ *
+ * Returning a negative error code from the walker callback function will
+ * terminate the walk immediately with the same error code.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_hyp_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
+			 struct kvm_pgtable_walker *walker);
+
 /**
  * kvm_pgtable_get_leaf() - Walk a page-table and retrieve the leaf entry
  *			    with its level.
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 1068338d77f3..55eeb3ed1891 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -246,7 +246,7 @@ static int finalize_host_mappings(void)
 		struct memblock_region *reg = &hyp_memory[i];
 		u64 start = (u64)hyp_phys_to_virt(reg->base);
 
-		ret = kvm_pgtable_walk(&pkvm_pgtable, start, reg->size, &walker);
+		ret = kvm_pgtable_hyp_walk(&pkvm_pgtable, start, reg->size, &walker);
 		if (ret)
 			return ret;
 	}
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 5bca9610d040..385fa1051b5d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -335,6 +335,18 @@ int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
 	return ret;
 }
 
+int kvm_pgtable_hyp_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
+			 struct kvm_pgtable_walker *walker)
+{
+	struct kvm_pgtable_walk_data data = {
+		.walker	= walker,
+		.addr	= ALIGN_DOWN(addr, PAGE_SIZE),
+		.end	= PAGE_ALIGN(addr + size),
+	};
+
+	return __kvm_pgtable_walk(&data, pgt->mm_ops, pgt->pgd, pgt->start_level);
+}
+
 struct hyp_map_data {
 	u64				phys;
 	kvm_pte_t			attr;
@@ -454,7 +466,7 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	if (ret)
 		return ret;
 
-	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+	ret = kvm_pgtable_hyp_walk(pgt, addr, size, &walker);
 	dsb(ishst);
 	isb();
 	return ret;
@@ -512,7 +524,7 @@ u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 	if (!pgt->mm_ops->page_count)
 		return 0;
 
-	kvm_pgtable_walk(pgt, addr, size, &walker);
+	kvm_pgtable_hyp_walk(pgt, addr, size, &walker);
 	return unmapped;
 }
 
@@ -557,7 +569,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
-	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
+	WARN_ON(kvm_pgtable_hyp_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
 	pgt->mm_ops->put_page(kvm_dereference_pteref(pgt->pgd, false));
 	pgt->pgd = NULL;
 }
-- 
2.38.1.431.g37b22c650d-goog


* Re: [PATCH 1/1] KVM: arm64: Use a separate function for hyp stage-1 walks
  2022-11-14 20:11   ` Oliver Upton
@ 2022-11-15 13:25     ` Will Deacon
  0 siblings, 0 replies; 12+ messages in thread
From: Will Deacon @ 2022-11-15 13:25 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, linux-arm-kernel, kvmarm, kvm, kvmarm,
	linux-kernel

On Mon, Nov 14, 2022 at 08:11:27PM +0000, Oliver Upton wrote:
> A subsequent change to the page table walkers adds RCU protection for
> walking stage-2 page tables. KVM uses a global lock to serialize hyp
> stage-1 walks, meaning RCU protection is quite meaningless for
> protecting hyp stage-1 walkers.
> 
> Add a new helper, kvm_pgtable_hyp_walk(), for use when walking hyp
> stage-1 tables. Call directly into __kvm_pgtable_walk() as table
> concatenation is not a supported feature at stage-1.
> 
> No functional change intended.
> 
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 24 ++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/nvhe/setup.c      |  2 +-
>  arch/arm64/kvm/hyp/pgtable.c         | 18 +++++++++++++++---
>  3 files changed, 40 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index a874ce0ce7b5..43b2f1882e11 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -596,6 +596,30 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
>  int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  		     struct kvm_pgtable_walker *walker);
>  
> +/**
> + * kvm_pgtable_hyp_walk() - Walk a hyp stage-1 page-table.
> + * @pgt:	Page-table structure initialized by kvm_pgtable_hyp_init().
> + * @addr:	Input address for the start of the walk.
> + * @size:	Size of the range to walk.
> + * @walker:	Walker callback description.
> + *
> + * The offset of @addr within a page is ignored and @size is rounded-up to
> + * the next page boundary.
> + *
> + * The walker will walk the page-table entries corresponding to the input
> + * address range specified, visiting entries according to the walker flags.
> + * Invalid entries are treated as leaf entries. Leaf entries are reloaded
> + * after invoking the walker callback, allowing the walker to descend into
> + * a newly installed table.
> + *
> + * Returning a negative error code from the walker callback function will
> + * terminate the walk immediately with the same error code.
> + *
> + * Return: 0 on success, negative error code on failure.
> + */
> +int kvm_pgtable_hyp_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
> +			 struct kvm_pgtable_walker *walker);

Hmm, this feels like slightly the wrong abstraction to me -- there's nothing
hyp-specific about the problem being solved, it's just that the only user
is for hyp walks.

Could we instead rework 'struct kvm_pgtable' slightly so that the existing
'flags' field is no longer stage-2 specific and includes a KVM_PGTABLE_LOCKED
flag which could be set by kvm_pgtable_hyp_init()?

That way the top-level API remains unchanged and the existing callers will
continue to work.
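
For illustration, a rough sketch of that direction (KVM_PGTABLE_LOCKED, its
bit value and the exact plumbing below are hypothetical, not part of the
posted patch):

	/* Hypothetical flag: walks are serialized by a lock held by the caller. */
	#define KVM_PGTABLE_LOCKED	BIT(4)

	int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
			     struct kvm_pgtable_walker *walker)
	{
		bool locked = pgt->flags & KVM_PGTABLE_LOCKED;
		int r;

		if (!locked)
			rcu_read_lock();

		r = ...;	/* existing walk body, unchanged */

		if (!locked)
			rcu_read_unlock();

		return r;
	}

kvm_pgtable_hyp_init() would then set KVM_PGTABLE_LOCKED unconditionally,
since hyp stage-1 is always walked under the hyp spinlock (pKVM) or mutex
(nVHE), and existing kvm_pgtable_walk() callers would keep working unmodified.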

Cheers,

Will

* Re: [PATCH 1/1] KVM: arm64: Use a separate function for hyp stage-1 walks
  2022-11-15 13:25     ` Will Deacon
@ 2022-11-15 17:23       ` Oliver Upton
  0 siblings, 0 replies; 12+ messages in thread
From: Oliver Upton @ 2022-11-15 17:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, linux-arm-kernel, kvmarm, kvm, kvmarm,
	linux-kernel

Hey Will,

On Tue, Nov 15, 2022 at 01:25:34PM +0000, Will Deacon wrote:

[...]

> On Mon, Nov 14, 2022 at 08:11:27PM +0000, Oliver Upton wrote:
> > +int kvm_pgtable_hyp_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
> > +			 struct kvm_pgtable_walker *walker);
> 
> Hmm, this feels like slightly the wrong abstraction to me -- there's nothing
> hyp-specific about the problem being solved, it's just that the only user
> is for hyp walks.
> 
> Could we instead rework 'struct kvm_pgtable' slightly so that the existing
> 'flags' field is no-longer stage-2 specific and includes a KVM_PGTABLE_LOCKED
> flag which could be set by kvm_pgtable_hyp_init()?
> 
> That way the top-level API remains unchanged and the existing callers will
> continue to work.

Thanks for the suggestion! Yeah, this should be described by the flags
instead.

We already have KVM_PGTABLE_WALK_SHARED, I could actually condition the
RCU lock/unlock on that one. That would make it an explicit opt-in
instead of requiring an opt out with callers passing KVM_PGTABLE_LOCKED.
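
For concreteness, a minimal sketch of that opt-in variant, assuming the
kvm_pgtable_walk_begin()/kvm_pgtable_walk_end() helpers from the RCU patch
grow a walker argument (illustrative only, not what was posted):

	static inline void kvm_pgtable_walk_begin(struct kvm_pgtable_walker *walker)
	{
		if (walker->flags & KVM_PGTABLE_WALK_SHARED)
			rcu_read_lock();
	}

	static inline void kvm_pgtable_walk_end(struct kvm_pgtable_walker *walker)
	{
		if (walker->flags & KVM_PGTABLE_WALK_SHARED)
			rcu_read_unlock();
	}

Hyp stage-1 walkers never pass KVM_PGTABLE_WALK_SHARED, so they would skip
the RCU read-side critical section without needing a separate entry point.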

Thoughts?

--
Thanks,
Oliver
