From: Vipin Sharma <vipinsh@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com,
	catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org,
	aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de,
	anup@brainfault.org, atishp@atishpatra.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com,
	dmatlack@google.com, ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Vipin Sharma <vipinsh@google.com>
Subject: [PATCH v2 10/16] KVM: arm64: Return -ENOENT if PTE is not valid in stage2_attr_walker
Date: Fri,  2 Jun 2023 09:09:08 -0700	[thread overview]
Message-ID: <20230602160914.4011728-11-vipinsh@google.com> (raw)
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Return -ENOENT from stage2_attr_walker() for an invalid PTE. Continue the
page table walk if a walker callback returns -ENOENT outside of the fault
handler path; otherwise, terminate the walk. In the fault handler path,
retry guest execution, similar to the -EAGAIN handling in user_mem_abort().

stage2_attr_walker() is used from multiple places, such as write
protection, MMU notifier callbacks, and relaxing permissions during vCPU
faults. This function currently returns -EAGAIN in two different cases:
1. When the PTE is not valid.
2. When cmpxchg() fails while setting the new SPTE.

For non-shared walkers, such as write protection and MMU notifiers, both
cases are simply ignored and the walker moves on to the next SPTE. Case 2
never happens for non-shared walkers, as they do not use cmpxchg() to
update SPTEs.

For shared walkers, such as the vCPU fault handler, both cases result in
walk termination.
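
The walker behavior above can be modeled as a small standalone sketch.
Note this is a simplified stand-in, not the kernel's real definitions: the
struct, the flag value, and the function body are placeholders mirroring
the post-patch logic of kvm_pgtable_walk_continue():

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Placeholder flag value, not the real kernel definition. */
#define KVM_PGTABLE_WALK_HANDLE_FAULT	(1UL << 0)

/* Simplified stand-in for struct kvm_pgtable_walker. */
struct walker {
	unsigned long flags;
};

/*
 * Post-patch decision: -EAGAIN (lost a race updating a PTE) and -ENOENT
 * (PTE is not valid) both mean "continue" for walkers outside a fault
 * handler, and "stop, retry guest execution" for fault-handler walkers.
 * Any other non-zero return code terminates the walk.
 */
static bool walk_continue(const struct walker *walker, int r)
{
	if (r == -EAGAIN || r == -ENOENT)
		return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);

	return !r;
}
```

With this change, an -ENOENT from a callback takes the same path as
-EAGAIN, so only the fault handler treats it as a reason to stop walking.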

In future commits, the clear-dirty-log walker will write protect SPTEs
under the MMU read lock and use a shared page table walker. This will
result in two types of shared page table walkers, the vCPU fault handler
and clear-dirty-log, competing with each other and sometimes causing
cmpxchg() failures. An -EAGAIN in the clear-dirty-log walker due to a
cmpxchg() failure must therefore be retried, whereas an -EAGAIN due to an
invalid SPTE must be ignored instead of terminating the walk, which is
what the current shared page table walker logic does. This distinction is
not needed for the vCPU fault handler, which also runs via a shared page
table walker and terminates the walk on getting -EAGAIN due to an invalid
SPTE.

To handle all these scenarios, stage2_attr_walker() must return different
error codes for invalid SPTEs and cmpxchg() failures. -ENOENT is chosen
for invalid SPTEs because it is not used by any other shared walker. When
clear-dirty-log is changed to use a shared page table walker, it will
then be possible to differentiate between retrying, continuing, and
terminating the walk for the shared fault handler and the shared
clear-dirty-log walker.
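
The resulting fixup at the end of user_mem_abort() can be sketched in
isolation. The helper name below is hypothetical; in the kernel this is
an inline expression on the return value, not a separate function:

```c
#include <assert.h>
#include <errno.h>

/*
 * Hypothetical helper modeling user_mem_abort()'s final return after this
 * patch: both -EAGAIN (cmpxchg() race) and -ENOENT (PTE went invalid) are
 * swallowed, reporting success so the guest simply retries the faulting
 * access; any other error is propagated to the caller.
 */
static int user_mem_abort_ret(int ret)
{
	return (ret != -EAGAIN && ret != -ENOENT) ? ret : 0;
}
```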

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h |  1 +
 arch/arm64/kvm/hyp/pgtable.c         | 19 ++++++++++++-------
 arch/arm64/kvm/mmu.c                 |  2 +-
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 957bc20dab00..23e7e7851f1d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -720,6 +720,7 @@ int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
  * -------------|------------------|--------------
  * Non-Shared   | 0                | Continue
  * Non-Shared   | -EAGAIN          | Continue
+ * Non-Shared   | -ENOENT          | Continue
  * Non-Shared   | Any other        | Exit
  * -------------|------------------|--------------
  * Shared       | 0                | Continue
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index a3a0812b2301..bc8c5c4ac1cf 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -186,14 +186,19 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
 	/*
 	 * Visitor callbacks return EAGAIN when the conditions that led to a
 	 * fault are no longer reflected in the page tables due to a race to
-	 * update a PTE. In the context of a fault handler this is interpreted
-	 * as a signal to retry guest execution.
+	 * update a PTE.
 	 *
-	 * Ignore the return code altogether for walkers outside a fault handler
-	 * (e.g. write protecting a range of memory) and chug along with the
-	 * page table walk.
+	 * Callbacks can also return ENOENT when PTE which is visited is not
+	 * valid.
+	 *
+	 * In the context of a fault handler interpret these as a signal
+	 * to retry guest execution.
+	 *
+	 * Ignore these return codes altogether for walkers outside a fault
+	 * handler (e.g. write protecting a range of memory) and chug along
+	 * with the page table walk.
 	 */
-	if (r == -EAGAIN)
+	if (r == -EAGAIN || r == -ENOENT)
 		return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
 
 	return !r;
@@ -1072,7 +1077,7 @@ static int stage2_attr_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
 
 	if (!kvm_pte_valid(ctx->old))
-		return -EAGAIN;
+		return -ENOENT;
 
 	data->level = ctx->level;
 	data->pte = pte;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1030921d89f8..356dc4131023 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1551,7 +1551,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	read_unlock(&kvm->mmu_lock);
 	kvm_set_pfn_accessed(pfn);
 	kvm_release_pfn_clean(pfn);
-	return ret != -EAGAIN ? ret : 0;
+	return (ret != -EAGAIN && ret != -ENOENT) ? ret : 0;
 }
 
 /* Resolve the access fault by making the page young again. */
-- 
2.41.0.rc0.172.g3f132b7071-goog


  parent reply	other threads:[~2023-06-02 16:10 UTC|newest]

Thread overview: 63+ messages
2023-06-02 16:08 [PATCH v2 00/16] Use MMU read lock for clear-dirty-log Vipin Sharma
2023-06-02 16:08 ` [PATCH v2 01/16] KVM: selftests: Clear dirty logs in user defined chunks sizes in dirty_log_perf_test Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 02/16] KVM: selftests: Add optional delay between consecutive clear-dirty-log calls Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 03/16] KVM: selftests: Pass the count of read and write accesses from guest to host Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 04/16] KVM: selftests: Print read-write progress by vCPUs in dirty_log_perf_test Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 05/16] KVM: selftests: Allow independent execution of " Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 06/16] KVM: arm64: Correct the kvm_pgtable_stage2_flush() documentation Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 07/16] KVM: mmu: Move mmu lock/unlock to arch code for clear dirty log Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 08/16] KMV: arm64: Pass page table walker flags to stage2_apply_range_*() Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 09/16] KVM: arm64: Document the page table walker actions based on the callback's return value Vipin Sharma
2023-06-05 14:35   ` Zhi Wang
2023-06-06 17:30     ` Vipin Sharma
2023-06-07 12:37       ` Zhi Wang
2023-06-08 20:17         ` Vipin Sharma
2023-06-02 16:09 ` Vipin Sharma [this message]
2023-06-02 16:09 ` [PATCH v2 11/16] KVM: arm64: Use KVM_PGTABLE_WALK_SHARED flag instead of KVM_PGTABLE_WALK_HANDLE_FAULT Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 12/16] KVM: arm64: Retry shared page table walks outside of fault handler Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 13/16] KVM: arm64: Run clear-dirty-log under MMU read lock Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 14/16] KVM: arm64: Pass page walker flags from callers of stage 2 split walker Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 15/16] KVM: arm64: Provide option to pass page walker flag for huge page splits Vipin Sharma
2023-06-02 16:09 ` [PATCH v2 16/16] KVM: arm64: Split huge pages during clear-dirty-log under MMU read lock Vipin Sharma
