* [PATCH 0/5] KVM: x86: Clean up rmap zap helpers
@ 2022-07-12 1:55 Sean Christopherson
2022-07-12 1:55 ` [PATCH 1/5] KVM: x86/mmu: Return a u64 (the old SPTE) from mmu_spte_clear_track_bits() Sean Christopherson
` (5 more replies)
0 siblings, 6 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-12 1:55 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Clean up the rmap helpers (mostly renames) to yield a more coherent set of
APIs, and to purge the irritating and inconsistent "rmapp" (p is for pointer)
nomenclature.
Patch 1 is a tangentially related fix for a benign bug.
Sean Christopherson (5):
KVM: x86/mmu: Return a u64 (the old SPTE) from
mmu_spte_clear_track_bits()
KVM: x86/mmu: Rename rmap zap helpers to better show relationships
KVM: x86/mmu: Remove underscores from __pte_list_remove()
KVM: x86/mmu: Use innermost rmap zap helper when recycling rmaps
KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers
arch/x86/kvm/mmu/mmu.c | 73 +++++++++++++++++++++---------------------
1 file changed, 36 insertions(+), 37 deletions(-)
base-commit: b9b71f43683ae9d76b0989249607bbe8c9eb6c5c
--
2.37.0.144.g8ac04bfd2-goog
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/5] KVM: x86/mmu: Return a u64 (the old SPTE) from mmu_spte_clear_track_bits()
2022-07-12 1:55 [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Sean Christopherson
@ 2022-07-12 1:55 ` Sean Christopherson
2022-07-12 1:55 ` [PATCH 2/5] KVM: x86/mmu: Rename rmap zap helpers to better show relationships Sean Christopherson
` (4 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-12 1:55 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Return a u64, not an int, from mmu_spte_clear_track_bits(). The return
value is the old SPTE value, which is very much a 64-bit value. The sole
caller that consumes the return value, drop_spte(), already uses a u64.
The only reason the truncation is not problematic is that drop_spte()
only queries the shadow-present bit, which is in the lower 32 bits.
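The truncation hazard described above can be sketched outside the kernel. This is an illustrative, self-contained demo, not kernel code; the bit positions (bit 51 as a "PFN" bit, bit 11 standing in for the shadow-present bit) are placeholders, not the real SPTE layout:

```c
#include <stdint.h>

/* What the old "int" return type did to a 64-bit SPTE. */
static int clear_bits_old(uint64_t *sptep)
{
	uint64_t old_spte = *sptep;

	*sptep = 0;
	return (int)old_spte;	/* upper 32 bits silently dropped */
}

/* What the fixed "u64" return type preserves. */
static uint64_t clear_bits_new(uint64_t *sptep)
{
	uint64_t old_spte = *sptep;

	*sptep = 0;
	return old_spte;	/* full 64-bit value preserved */
}
```

Truncation loses any bits above bit 31, but a flag held in the low 32 bits survives, which is why the bug was benign for the sole consumer.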
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f7fa4c31b7c5..2605d6ebc193 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -529,7 +529,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
* state bits, it is used to clear the last level sptep.
* Returns the old PTE.
*/
-static int mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
+static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
{
kvm_pfn_t pfn;
u64 old_spte = *sptep;
--
2.37.0.144.g8ac04bfd2-goog
* [PATCH 2/5] KVM: x86/mmu: Rename rmap zap helpers to better show relationships
2022-07-12 1:55 [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Sean Christopherson
2022-07-12 1:55 ` [PATCH 1/5] KVM: x86/mmu: Return a u64 (the old SPTE) from mmu_spte_clear_track_bits() Sean Christopherson
@ 2022-07-12 1:55 ` Sean Christopherson
2022-07-12 1:55 ` [PATCH 3/5] KVM: x86/mmu: Remove underscores from __pte_list_remove() Sean Christopherson
` (3 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-12 1:55 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Rename the helpers that zap rmaps to use consistent naming and better
show the relationships between the various helpers. E.g. rename
pte_list_remove() to kvm_zap_one_rmap(), use "zap" universally instead of
a mix of "zap" and "unmap", etc...
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 37 ++++++++++++++++++-------------------
1 file changed, 18 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2605d6ebc193..32f9427f3334 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -957,15 +957,15 @@ static void __pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
}
}
-static void pte_list_remove(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
- u64 *sptep)
+static void kvm_zap_one_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+ u64 *sptep)
{
mmu_spte_clear_track_bits(kvm, sptep);
__pte_list_remove(sptep, rmap_head);
}
-/* Return true if rmap existed, false otherwise */
-static bool pte_list_destroy(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+/* Return true if at least one rmap was zapped, false otherwise */
+static bool ____kvm_zap_rmaps(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
{
struct pte_list_desc *desc, *next;
int i;
@@ -1383,17 +1383,17 @@ static bool kvm_vcpu_write_protect_gfn(struct kvm_vcpu *vcpu, u64 gfn)
return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn, PG_LEVEL_4K);
}
-static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
- const struct kvm_memory_slot *slot)
+static bool __kvm_zap_rmaps(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+ const struct kvm_memory_slot *slot)
{
- return pte_list_destroy(kvm, rmap_head);
+ return ____kvm_zap_rmaps(kvm, rmap_head);
}
-static bool kvm_unmap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
- struct kvm_memory_slot *slot, gfn_t gfn, int level,
- pte_t unused)
+static bool kvm_zap_rmaps(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+ struct kvm_memory_slot *slot, gfn_t gfn, int level,
+ pte_t unused)
{
- return kvm_zap_rmapp(kvm, rmap_head, slot);
+ return __kvm_zap_rmaps(kvm, rmap_head, slot);
}
static bool kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
@@ -1417,7 +1417,7 @@ static bool kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
need_flush = true;
if (pte_write(pte)) {
- pte_list_remove(kvm, rmap_head, sptep);
+ kvm_zap_one_rmap(kvm, rmap_head, sptep);
goto restart;
} else {
new_spte = kvm_mmu_changed_pte_notifier_make_spte(
@@ -1529,7 +1529,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
bool flush = false;
if (kvm_memslots_have_rmaps(kvm))
- flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
+ flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmaps);
if (is_tdp_mmu_enabled(kvm))
flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
@@ -1596,7 +1596,7 @@ static void __rmap_add(struct kvm *kvm,
rmap_count = pte_list_add(cache, spte, rmap_head);
if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
- kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
+ kvm_zap_rmaps(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
kvm_flush_remote_tlbs_with_address(
kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
}
@@ -5977,7 +5977,7 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
mmu_free_vm_memory_caches(kvm);
}
-static bool __kvm_zap_rmaps(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+static bool __kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
{
const struct kvm_memory_slot *memslot;
struct kvm_memslots *slots;
@@ -5999,8 +5999,7 @@ static bool __kvm_zap_rmaps(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
if (WARN_ON_ONCE(start >= end))
continue;
- flush = slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-
+ flush = slot_handle_level_range(kvm, memslot, __kvm_zap_rmaps,
PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL,
start, end - 1, true, flush);
}
@@ -6025,7 +6024,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
kvm_inc_notifier_count(kvm, gfn_start, gfn_end);
- flush = __kvm_zap_rmaps(kvm, gfn_start, gfn_end);
+ flush = __kvm_zap_gfn_range(kvm, gfn_start, gfn_end);
if (is_tdp_mmu_enabled(kvm)) {
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
@@ -6401,7 +6400,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
if (sp->role.direct &&
sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
pfn, PG_LEVEL_NUM)) {
- pte_list_remove(kvm, rmap_head, sptep);
+ kvm_zap_one_rmap(kvm, rmap_head, sptep);
if (kvm_available_flush_tlb_with_range())
kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
--
2.37.0.144.g8ac04bfd2-goog
* [PATCH 3/5] KVM: x86/mmu: Remove underscores from __pte_list_remove()
2022-07-12 1:55 [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Sean Christopherson
2022-07-12 1:55 ` [PATCH 1/5] KVM: x86/mmu: Return a u64 (the old SPTE) from mmu_spte_clear_track_bits() Sean Christopherson
2022-07-12 1:55 ` [PATCH 2/5] KVM: x86/mmu: Rename rmap zap helpers to better show relationships Sean Christopherson
@ 2022-07-12 1:55 ` Sean Christopherson
2022-07-12 1:55 ` [PATCH 4/5] KVM: x86/mmu: Use innermost rmap zap helper when recycling rmaps Sean Christopherson
` (2 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-12 1:55 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Remove the underscores from __pte_list_remove(); the function formerly
known as pte_list_remove() is now named kvm_zap_one_rmap() to show that
it zaps rmaps/PTEs, i.e. doesn't just remove an entry from a list.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 32f9427f3334..fbe808bb0ce1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -921,7 +921,7 @@ pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
mmu_free_pte_list_desc(desc);
}
-static void __pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
+static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
{
struct pte_list_desc *desc;
struct pte_list_desc *prev_desc;
@@ -961,7 +961,7 @@ static void kvm_zap_one_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
u64 *sptep)
{
mmu_spte_clear_track_bits(kvm, sptep);
- __pte_list_remove(sptep, rmap_head);
+ pte_list_remove(sptep, rmap_head);
}
/* Return true if at least one rmap was zapped, false otherwise */
@@ -1050,7 +1050,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
slot = __gfn_to_memslot(slots, gfn);
rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
- __pte_list_remove(spte, rmap_head);
+ pte_list_remove(spte, rmap_head);
}
/*
@@ -1692,7 +1692,7 @@ static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache,
static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
u64 *parent_pte)
{
- __pte_list_remove(parent_pte, &sp->parent_ptes);
+ pte_list_remove(parent_pte, &sp->parent_ptes);
}
static void drop_parent_pte(struct kvm_mmu_page *sp,
--
2.37.0.144.g8ac04bfd2-goog
* [PATCH 4/5] KVM: x86/mmu: Use innermost rmap zap helper when recycling rmaps
2022-07-12 1:55 [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Sean Christopherson
` (2 preceding siblings ...)
2022-07-12 1:55 ` [PATCH 3/5] KVM: x86/mmu: Remove underscores from __pte_list_remove() Sean Christopherson
@ 2022-07-12 1:55 ` Sean Christopherson
2022-07-12 1:55 ` [PATCH 5/5] KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers Sean Christopherson
2022-07-14 16:45 ` [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Paolo Bonzini
5 siblings, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-12 1:55 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Use ____kvm_zap_rmaps() directly when recycling rmaps instead of bouncing
through kvm_zap_rmaps() and __kvm_zap_rmaps(). Calling kvm_zap_rmaps()
is unnecessary and odd as it requires passing dummy parameters; passing
NULL for @slot when __rmap_add() already has a valid slot is especially
weird and confusing.
No functional change intended.
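The awkwardness of bouncing through the outer helper can be seen in a simplified before/after sketch. The types, names, and signatures below are hypothetical stand-ins for illustration only; the real helpers operate on struct kvm, rmap heads, and memslots:

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the rmap list head. */
struct rmap_head { int nr_sptes; };

/* Innermost worker: zaps every SPTE on one rmap list. */
static bool zap_all_rmap_sptes(struct rmap_head *rmap_head)
{
	bool zapped = rmap_head->nr_sptes > 0;

	rmap_head->nr_sptes = 0;
	return zapped;
}

/* Callback-shaped wrapper: the slot/gfn/level/pte parameters exist only
 * to satisfy the gfn-range callback prototype and are ignored here. */
static bool zap_rmap_cb(struct rmap_head *rmap_head, void *slot,
			unsigned long gfn, int level, unsigned long pte)
{
	return zap_all_rmap_sptes(rmap_head);
}

/* Before: recycle by bouncing through the callback with dummy args,
 * including a NULL slot even though the caller holds a valid one. */
static bool recycle_via_wrapper(struct rmap_head *rmap_head)
{
	return zap_rmap_cb(rmap_head, NULL, 0, 0, 0);
}

/* After: call the innermost helper directly, no dummy parameters. */
static bool recycle_direct(struct rmap_head *rmap_head)
{
	return zap_all_rmap_sptes(rmap_head);
}
```

Both paths zap the same list; the direct call simply drops the misleading NULL-slot plumbing.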
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index fbe808bb0ce1..496672ffaf46 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1596,7 +1596,7 @@ static void __rmap_add(struct kvm *kvm,
rmap_count = pte_list_add(cache, spte, rmap_head);
if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
- kvm_zap_rmaps(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
+ ____kvm_zap_rmaps(kvm, rmap_head);
kvm_flush_remote_tlbs_with_address(
kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
}
--
2.37.0.144.g8ac04bfd2-goog
* [PATCH 5/5] KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers
2022-07-12 1:55 [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Sean Christopherson
` (3 preceding siblings ...)
2022-07-12 1:55 ` [PATCH 4/5] KVM: x86/mmu: Use innermost rmap zap helper when recycling rmaps Sean Christopherson
@ 2022-07-12 1:55 ` Sean Christopherson
2022-07-14 16:45 ` [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Paolo Bonzini
5 siblings, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-12 1:55 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Drop the trailing "p" from rmap helpers, i.e. rename functions to simply
be kvm_<action>_rmap(). Declaring that a function takes a pointer is
completely unnecessary and goes against kernel style.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 496672ffaf46..47e46c10731d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -403,7 +403,7 @@ static u64 __update_clear_spte_slow(u64 *sptep, u64 spte)
* The idea using the light way get the spte on x86_32 guest is from
* gup_get_pte (mm/gup.c).
*
- * An spte tlb flush may be pending, because kvm_set_pte_rmapp
+ * An spte tlb flush may be pending, because kvm_set_pte_rmap
* coalesces them and we are running out of the MMU lock. Therefore
* we need to protect against in-progress updates of the spte.
*
@@ -1396,9 +1396,9 @@ static bool kvm_zap_rmaps(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
return __kvm_zap_rmaps(kvm, rmap_head, slot);
}
-static bool kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
- struct kvm_memory_slot *slot, gfn_t gfn, int level,
- pte_t pte)
+static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+ struct kvm_memory_slot *slot, gfn_t gfn, int level,
+ pte_t pte)
{
u64 *sptep;
struct rmap_iterator iter;
@@ -1542,7 +1542,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
bool flush = false;
if (kvm_memslots_have_rmaps(kvm))
- flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
+ flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap);
if (is_tdp_mmu_enabled(kvm))
flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
@@ -1550,9 +1550,9 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
return flush;
}
-static bool kvm_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
- struct kvm_memory_slot *slot, gfn_t gfn, int level,
- pte_t unused)
+static bool kvm_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+ struct kvm_memory_slot *slot, gfn_t gfn, int level,
+ pte_t unused)
{
u64 *sptep;
struct rmap_iterator iter;
@@ -1564,9 +1564,9 @@ static bool kvm_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
return young;
}
-static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
- struct kvm_memory_slot *slot, gfn_t gfn,
- int level, pte_t unused)
+static bool kvm_test_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+ struct kvm_memory_slot *slot, gfn_t gfn,
+ int level, pte_t unused)
{
u64 *sptep;
struct rmap_iterator iter;
@@ -1615,7 +1615,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
bool young = false;
if (kvm_memslots_have_rmaps(kvm))
- young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
+ young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
if (is_tdp_mmu_enabled(kvm))
young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@@ -1628,7 +1628,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
bool young = false;
if (kvm_memslots_have_rmaps(kvm))
- young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
+ young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
if (is_tdp_mmu_enabled(kvm))
young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
--
2.37.0.144.g8ac04bfd2-goog
* Re: [PATCH 0/5] KVM: x86: Clean up rmap zap helpers
2022-07-12 1:55 [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Sean Christopherson
` (4 preceding siblings ...)
2022-07-12 1:55 ` [PATCH 5/5] KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers Sean Christopherson
@ 2022-07-14 16:45 ` Paolo Bonzini
2022-07-14 17:27 ` Sean Christopherson
5 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2022-07-14 16:45 UTC (permalink / raw)
To: Sean Christopherson; +Cc: kvm, linux-kernel
On 7/12/22 03:55, Sean Christopherson wrote:
> Clean up the rmap helpers (mostly renames) to yield a more coherent set of
> APIs, and to purge the irritating and inconsistent "rmapp" (p is for pointer)
> nomenclature.
>
> Patch 1 is a tangentially related fix for a benign bug.
>
> Sean Christopherson (5):
> KVM: x86/mmu: Return a u64 (the old SPTE) from
> mmu_spte_clear_track_bits()
> KVM: x86/mmu: Rename rmap zap helpers to better show relationships
> KVM: x86/mmu: Remove underscores from __pte_list_remove()
> KVM: x86/mmu: Use innermost rmap zap helper when recycling rmaps
> KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers
>
> arch/x86/kvm/mmu/mmu.c | 73 +++++++++++++++++++++---------------------
> 1 file changed, 36 insertions(+), 37 deletions(-)
>
>
> base-commit: b9b71f43683ae9d76b0989249607bbe8c9eb6c5c
I'm not sure I dig the ____, I'll take a closer look tomorrow or next
week since it's dinner time here.
I'm pushing what you and I collected over the past 3 weeks, for now I
only checked that it compiles.
Paolo
* Re: [PATCH 0/5] KVM: x86: Clean up rmap zap helpers
2022-07-14 16:45 ` [PATCH 0/5] KVM: x86: Clean up rmap zap helpers Paolo Bonzini
@ 2022-07-14 17:27 ` Sean Christopherson
2022-07-14 17:33 ` Paolo Bonzini
2022-07-14 17:34 ` Sean Christopherson
0 siblings, 2 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-14 17:27 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm, linux-kernel
On Thu, Jul 14, 2022, Paolo Bonzini wrote:
> On 7/12/22 03:55, Sean Christopherson wrote:
> > Clean up the rmap helpers (mostly renames) to yield a more coherent set of
> > APIs, and to purge the irritating and inconsistent "rmapp" (p is for pointer)
> > nomenclature.
> >
> > Patch 1 is a tangentially related fix for a benign bug.
> >
> > Sean Christopherson (5):
> > KVM: x86/mmu: Return a u64 (the old SPTE) from
> > mmu_spte_clear_track_bits()
> > KVM: x86/mmu: Rename rmap zap helpers to better show relationships
> > KVM: x86/mmu: Remove underscores from __pte_list_remove()
> > KVM: x86/mmu: Use innermost rmap zap helper when recycling rmaps
> > KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers
> >
> > arch/x86/kvm/mmu/mmu.c | 73 +++++++++++++++++++++---------------------
> > 1 file changed, 36 insertions(+), 37 deletions(-)
> >
> >
> > base-commit: b9b71f43683ae9d76b0989249607bbe8c9eb6c5c
>
> I'm not sure I dig the ____, I'll take a closer look tomorrow or next week
> since it's dinner time here.
Yeah, I'm not a fan of it either. And rereading things, my proposed names also
create an inconsistency; the zap path is the only user of kvm_handle_gfn_range()
that uses a plural "rmaps".
$ git grep kvm_handle_gfn_range
arch/x86/kvm/mmu/mmu.c:static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
arch/x86/kvm/mmu/mmu.c: flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmaps);
arch/x86/kvm/mmu/mmu.c: flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap);
arch/x86/kvm/mmu/mmu.c: young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
arch/x86/kvm/mmu/mmu.c: young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
Making "rmaps" plural is probably a mistake. The helper zaps multiple SPTEs for a
given rmap list, but from a certain point of view it's just a single "rmap".
What about:
kvm_zap_rmapp => kvm_zap_rmap // to align with kvm_handle_gfn_range() usage
kvm_zap_rmap => __kvm_zap_rmap // to pair with kvm_zap_rmap()
and
pte_list_remove => kvm_zap_one_rmap_spte
pte_list_destroy => kvm_zap_all_rmap_sptes
That will yield a better series too, as I can move patch 5 to be patch 2, then
split what was patch 2 (the rename) into separate patches to first align kvm_zap_rmap()
and __kvm_zap_rmap(), and then rename the pte_list_remove/destroy helpers.
* Re: [PATCH 0/5] KVM: x86: Clean up rmap zap helpers
2022-07-14 17:27 ` Sean Christopherson
@ 2022-07-14 17:33 ` Paolo Bonzini
2022-07-14 17:34 ` Sean Christopherson
1 sibling, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2022-07-14 17:33 UTC (permalink / raw)
To: Sean Christopherson; +Cc: kvm, linux-kernel
On 7/14/22 19:27, Sean Christopherson wrote:
>
> pte_list_remove => kvm_zap_one_rmap_spte
> pte_list_destroy => kvm_zap_all_rmap_sptes
>
> That will yield a better series too, as I can move patch 5 to be patch 2, then
> split what was patch 2 (the rename) into separate patches to first align kvm_zap_rmap()
> and __kvm_zap_rmap(), and then rename the pte_list_remove/destroy helpers.
Yeah, sounds good (I also was looking into moving patch 5 and possibly
even patch 4 more towards the beginning).
Paolo
* Re: [PATCH 0/5] KVM: x86: Clean up rmap zap helpers
2022-07-14 17:27 ` Sean Christopherson
2022-07-14 17:33 ` Paolo Bonzini
@ 2022-07-14 17:34 ` Sean Christopherson
1 sibling, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2022-07-14 17:34 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm, linux-kernel
On Thu, Jul 14, 2022, Sean Christopherson wrote:
> On Thu, Jul 14, 2022, Paolo Bonzini wrote:
> > On 7/12/22 03:55, Sean Christopherson wrote:
> > > Clean up the rmap helpers (mostly renames) to yield a more coherent set of
> > > APIs, and to purge the irritating and inconsistent "rmapp" (p is for pointer)
> > > nomenclature.
> > >
> > > Patch 1 is a tangentially related fix for a benign bug.
> > >
> > > Sean Christopherson (5):
> > > KVM: x86/mmu: Return a u64 (the old SPTE) from
> > > mmu_spte_clear_track_bits()
> > > KVM: x86/mmu: Rename rmap zap helpers to better show relationships
> > > KVM: x86/mmu: Remove underscores from __pte_list_remove()
> > > KVM: x86/mmu: Use innermost rmap zap helper when recycling rmaps
> > > KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers
> > >
> > > arch/x86/kvm/mmu/mmu.c | 73 +++++++++++++++++++++---------------------
> > > 1 file changed, 36 insertions(+), 37 deletions(-)
> > >
> > >
> > > base-commit: b9b71f43683ae9d76b0989249607bbe8c9eb6c5c
> >
> > I'm not sure I dig the ____, I'll take a closer look tomorrow or next week
> > since it's dinner time here.
>
> Yeah, I'm not a fan of it either. And rereading things, my proposed names also
> create an inconsistency; the zap path is the only user of kvm_handle_gfn_range()
> that uses a plural "rmaps".
>
> $ git grep kvm_handle_gfn_range
> arch/x86/kvm/mmu/mmu.c:static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
> arch/x86/kvm/mmu/mmu.c: flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmaps);
> arch/x86/kvm/mmu/mmu.c: flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap);
> arch/x86/kvm/mmu/mmu.c: young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
> arch/x86/kvm/mmu/mmu.c: young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
>
> Make "rmaps" plural is probably a mistake. The helper zaps multiple SPTEs for a
> given rmap list, but from a certain point of view it's just a single "rmap".
>
> What about:
>
> kvm_zap_rmapp => kvm_zap_rmap // to align with kvm_handle_gfn_range() usage
> kvm_zap_rmap => __kvm_zap_rmap // to pair with kvm_zap_rmap()
>
> and
>
> pte_list_remove => kvm_zap_one_rmap_spte
> pte_list_destroy => kvm_zap_all_rmap_sptes
>
> That will yield a better series too, as I can move patch 5 to be patch 2, then
> split what was patch 2 (the rename) into separate patches to first align kvm_zap_rmap()
> and __kvm_zap_rmap(), and then rename the pte_list_remove/destroy helpers.
And also:
__kvm_zap_rmaps => kvm_rmap_zap_gfn_range
instead of renaming it to __kvm_zap_gfn_range() to make it clear that it zaps only
rmap-based MMUs, to align with kvm_rmap_zap_collapsible_sptes(), and to avoid the
plural "rmaps".