* [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask
@ 2020-10-27 21:42 Sean Christopherson
  2020-10-27 21:42 ` [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing " Sean Christopherson
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Sean Christopherson @ 2020-10-27 21:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Ben Gardon

Add a macro, which is probably long overdue, to replace open coded
variants of "~(KVM_PAGES_PER_HPAGE(level) - 1)".  The straw that broke the
camel's back is the TDP MMU's round_gfn_for_level(), which goes straight
for the optimized approach of using NEG instead of SUB+NOT (gcc uses NEG
for both).  The use of '-(...)' made me do a double take (more like a
quadruple take) when reading the TDP MMU code as my eyes/brain have been
heavily trained to look for the more common '~(... - 1)'.
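
For anyone who similarly has to do a double take on the negation form, the
snippet below is a minimal standalone sketch (plain C, not KVM code; the page
count and GFN are made-up illustrative values) showing that the two spellings
yield the identical mask whenever the page count is a power of two:

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	uint64_t pages_per_hpage = 512;	/* e.g. a 2MB page's worth of 4KB pages */
  	uint64_t gfn = 0x123456;	/* arbitrary example GFN */

  	/* The common spelling: subtract one, then invert. */
  	uint64_t mask_sub_not = ~(pages_per_hpage - 1);
  	/* round_gfn_for_level()'s spelling: two's complement negation. */
  	uint64_t mask_neg = -pages_per_hpage;

  	assert(mask_sub_not == mask_neg);
  	printf("base gfn = %#llx\n", (unsigned long long)(gfn & mask_neg));
  	return 0;
  }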

Sean Christopherson (3):
  KVM: x86/mmu: Add helper macro for computing hugepage GFN mask
  KVM: x86/mmu: Open code GFN "rounding" in TDP MMU
  KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask

 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu/mmu.c          |  4 ++--
 arch/x86/kvm/mmu/mmutrace.h     |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 ++--
 arch/x86/kvm/mmu/tdp_iter.c     | 11 +++--------
 arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
 arch/x86/kvm/x86.c              |  6 +++---
 7 files changed, 13 insertions(+), 17 deletions(-)

-- 
2.28.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing hugepage GFN mask
  2020-10-27 21:42 [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Sean Christopherson
@ 2020-10-27 21:42 ` Sean Christopherson
  2020-10-27 22:17   ` Ben Gardon
  2020-10-27 21:42 ` [PATCH 2/3] KVM: x86/mmu: Open code GFN "rounding" in TDP MMU Sean Christopherson
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2020-10-27 21:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Ben Gardon

Add a helper to compute the GFN mask given a hugepage level; KVM is
accumulating quite a few users with the addition of the TDP MMU.

Note, gcc is clever enough to use a single NEG instruction instead of
SUB+NOT, i.e. use the more common "~(level - 1)" pattern instead of
round_gfn_for_level()'s direct two's complement trickery.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
 arch/x86/kvm/mmu/tdp_iter.c     | 2 +-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d44858b69353..6ea046415f29 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -119,6 +119,7 @@
 #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
 #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
 #define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)
+#define KVM_HPAGE_GFN_MASK(x)	(~(KVM_PAGES_PER_HPAGE(x) - 1))
 
 static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 17587f496ec7..3bfc7ee44e51 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2886,7 +2886,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 			disallowed_hugepage_adjust(*it.sptep, gfn, it.level,
 						   &pfn, &level);
 
-		base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
+		base_gfn = gfn & KVM_HPAGE_GFN_MASK(it.level);
 		if (it.level == level)
 			break;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 50e268eb8e1a..76ee36f2afd2 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -698,7 +698,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			disallowed_hugepage_adjust(*it.sptep, gw->gfn, it.level,
 						   &pfn, &level);
 
-		base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
+		base_gfn = gw->gfn & KVM_HPAGE_GFN_MASK(it.level);
 		if (it.level == level)
 			break;
 
@@ -751,7 +751,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
 			      bool *write_fault_to_shadow_pgtable)
 {
 	int level;
-	gfn_t mask = ~(KVM_PAGES_PER_HPAGE(walker->level) - 1);
+	gfn_t mask = KVM_HPAGE_GFN_MASK(walker->level);
 	bool self_changed = false;
 
 	if (!(walker->pte_access & ACC_WRITE_MASK ||
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index 87b7e16911db..c6e914c96641 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -17,7 +17,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
 
 static gfn_t round_gfn_for_level(gfn_t gfn, int level)
 {
-	return gfn & -KVM_PAGES_PER_HPAGE(level);
+	return gfn & KVM_HPAGE_GFN_MASK(level);
 }
 
 /*
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/3] KVM: x86/mmu: Open code GFN "rounding" in TDP MMU
  2020-10-27 21:42 [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Sean Christopherson
  2020-10-27 21:42 ` [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing " Sean Christopherson
@ 2020-10-27 21:42 ` Sean Christopherson
  2020-10-27 22:13   ` Ben Gardon
  2020-10-27 21:43 ` [PATCH 3/3] KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask Sean Christopherson
  2020-10-28 15:01 ` [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Paolo Bonzini
  3 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2020-10-27 21:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Ben Gardon

Drop round_gfn_for_level() and directly use the recently introduced
KVM_HPAGE_GFN_MASK() macro.  Hiding the masking in a "rounding" function
adds an extra "what does this do?" lookup, whereas the concept and usage
of PFN/GFN masks is common enough that it's easy to read the open coded
version without thinking too hard.

No functional change intended.

Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/tdp_iter.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index c6e914c96641..4175947dc401 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -15,11 +15,6 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
 	iter->old_spte = READ_ONCE(*iter->sptep);
 }
 
-static gfn_t round_gfn_for_level(gfn_t gfn, int level)
-{
-	return gfn & KVM_HPAGE_GFN_MASK(level);
-}
-
 /*
  * Sets a TDP iterator to walk a pre-order traversal of the paging structure
  * rooted at root_pt, starting with the walk to translate goal_gfn.
@@ -36,7 +31,7 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
 	iter->level = root_level;
 	iter->pt_path[iter->level - 1] = root_pt;
 
-	iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+	iter->gfn = iter->goal_gfn & KVM_HPAGE_GFN_MASK(iter->level);
 	tdp_iter_refresh_sptep(iter);
 
 	iter->valid = true;
@@ -82,7 +77,7 @@ static bool try_step_down(struct tdp_iter *iter)
 
 	iter->level--;
 	iter->pt_path[iter->level - 1] = child_pt;
-	iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+	iter->gfn = iter->goal_gfn & KVM_HPAGE_GFN_MASK(iter->level);
 	tdp_iter_refresh_sptep(iter);
 
 	return true;
@@ -124,7 +119,7 @@ static bool try_step_up(struct tdp_iter *iter)
 		return false;
 
 	iter->level++;
-	iter->gfn = round_gfn_for_level(iter->gfn, iter->level);
+	iter->gfn &= KVM_HPAGE_GFN_MASK(iter->level);
 	tdp_iter_refresh_sptep(iter);
 
 	return true;
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 3/3] KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask
  2020-10-27 21:42 [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Sean Christopherson
  2020-10-27 21:42 ` [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing " Sean Christopherson
  2020-10-27 21:42 ` [PATCH 2/3] KVM: x86/mmu: Open code GFN "rounding" in TDP MMU Sean Christopherson
@ 2020-10-27 21:43 ` Sean Christopherson
  2020-10-27 22:09   ` Ben Gardon
  2020-10-28 15:01 ` [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Paolo Bonzini
  3 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2020-10-27 21:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Ben Gardon

Use the bitwise NOT of KVM_HPAGE_GFN_MASK() to compute the GFN offset
mask instead of open coding the equivalent in a variety of locations.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c      | 2 +-
 arch/x86/kvm/mmu/mmutrace.h | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.c  | 2 +-
 arch/x86/kvm/x86.c          | 6 +++---
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3bfc7ee44e51..9fb50c666ec5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2827,7 +2827,7 @@ int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	 * mmu_notifier_retry() was successful and mmu_lock is held, so
 	 * the pmd can't be split from under us.
 	 */
-	mask = KVM_PAGES_PER_HPAGE(level) - 1;
+	mask = ~KVM_HPAGE_GFN_MASK(level);
 	VM_BUG_ON((gfn & mask) != (pfn & mask));
 	*pfnp = pfn & ~mask;
 
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 213699b27b44..4432ca3c7e4e 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -372,7 +372,7 @@ TRACE_EVENT(
 
 	TP_fast_assign(
 		__entry->gfn = addr >> PAGE_SHIFT;
-		__entry->pfn = pfn | (__entry->gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
+		__entry->pfn = pfn | (__entry->gfn & ~KVM_HPAGE_GFN_MASK(level));
 		__entry->level = level;
 	),
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 27e381c9da6c..681686608c0b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -209,7 +209,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 
 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
 	WARN_ON(level < PG_LEVEL_4K);
-	WARN_ON(gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
+	WARN_ON(gfn & ~KVM_HPAGE_GFN_MASK(level));
 
 	/*
 	 * If this warning were to trigger it would indicate that there was a
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 397f599b20e5..faf4c4ddde94 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10451,16 +10451,16 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 
 		slot->arch.lpage_info[i - 1] = linfo;
 
-		if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
+		if (slot->base_gfn & ~KVM_HPAGE_GFN_MASK(level))
 			linfo[0].disallow_lpage = 1;
-		if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
+		if ((slot->base_gfn + npages) & ~KVM_HPAGE_GFN_MASK(level))
 			linfo[lpages - 1].disallow_lpage = 1;
 		ugfn = slot->userspace_addr >> PAGE_SHIFT;
 		/*
 		 * If the gfn and userspace address are not aligned wrt each
 		 * other, disable large page support for this slot.
 		 */
-		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
+		if ((slot->base_gfn ^ ugfn) & ~KVM_HPAGE_GFN_MASK(level)) {
 			unsigned long j;
 
 			for (j = 0; j < lpages; ++j)
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/3] KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask
  2020-10-27 21:43 ` [PATCH 3/3] KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask Sean Christopherson
@ 2020-10-27 22:09   ` Ben Gardon
  2020-10-27 22:15     ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Ben Gardon @ 2020-10-27 22:09 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, LKML

On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Use the bitwise NOT of KVM_HPAGE_GFN_MASK() to compute the GFN offset
> mask instead of open coding the equivalent in a variety of locations.

I don't see a "no functional change expected" note on this patch as
was on the previous one, but I don't think this represents any
functional change.

>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/mmu.c      | 2 +-
>  arch/x86/kvm/mmu/mmutrace.h | 2 +-
>  arch/x86/kvm/mmu/tdp_mmu.c  | 2 +-
>  arch/x86/kvm/x86.c          | 6 +++---
>  4 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 3bfc7ee44e51..9fb50c666ec5 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2827,7 +2827,7 @@ int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
>          * mmu_notifier_retry() was successful and mmu_lock is held, so
>          * the pmd can't be split from under us.
>          */
> -       mask = KVM_PAGES_PER_HPAGE(level) - 1;
> +       mask = ~KVM_HPAGE_GFN_MASK(level);
>         VM_BUG_ON((gfn & mask) != (pfn & mask));
>         *pfnp = pfn & ~mask;
>
> diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
> index 213699b27b44..4432ca3c7e4e 100644
> --- a/arch/x86/kvm/mmu/mmutrace.h
> +++ b/arch/x86/kvm/mmu/mmutrace.h
> @@ -372,7 +372,7 @@ TRACE_EVENT(
>
>         TP_fast_assign(
>                 __entry->gfn = addr >> PAGE_SHIFT;
> -               __entry->pfn = pfn | (__entry->gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
> +               __entry->pfn = pfn | (__entry->gfn & ~KVM_HPAGE_GFN_MASK(level));
>                 __entry->level = level;
>         ),
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 27e381c9da6c..681686608c0b 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -209,7 +209,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>
>         WARN_ON(level > PT64_ROOT_MAX_LEVEL);
>         WARN_ON(level < PG_LEVEL_4K);
> -       WARN_ON(gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
> +       WARN_ON(gfn & ~KVM_HPAGE_GFN_MASK(level));
>
>         /*
>          * If this warning were to trigger it would indicate that there was a
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 397f599b20e5..faf4c4ddde94 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10451,16 +10451,16 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
>
>                 slot->arch.lpage_info[i - 1] = linfo;
>
> -               if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
> +               if (slot->base_gfn & ~KVM_HPAGE_GFN_MASK(level))
>                         linfo[0].disallow_lpage = 1;
> -               if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
> +               if ((slot->base_gfn + npages) & ~KVM_HPAGE_GFN_MASK(level))
>                         linfo[lpages - 1].disallow_lpage = 1;
>                 ugfn = slot->userspace_addr >> PAGE_SHIFT;
>                 /*
>                  * If the gfn and userspace address are not aligned wrt each
>                  * other, disable large page support for this slot.
>                  */
> -               if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
> +               if ((slot->base_gfn ^ ugfn) & ~KVM_HPAGE_GFN_MASK(level)) {
>                         unsigned long j;
>
>                         for (j = 0; j < lpages; ++j)
> --
> 2.28.0
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] KVM: x86/mmu: Open code GFN "rounding" in TDP MMU
  2020-10-27 21:42 ` [PATCH 2/3] KVM: x86/mmu: Open code GFN "rounding" in TDP MMU Sean Christopherson
@ 2020-10-27 22:13   ` Ben Gardon
  0 siblings, 0 replies; 12+ messages in thread
From: Ben Gardon @ 2020-10-27 22:13 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, LKML

On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Drop round_gfn_for_level() and directly use the recently introduced
> KVM_HPAGE_GFN_MASK() macro.  Hiding the masking in a "rounding" function
> adds an extra "what does this do?" lookup, whereas the concept and usage
> of PFN/GFN masks is common enough that it's easy to read the open coded
> version without thinking too hard.
>
> No functional change intended.
>
> Cc: Ben Gardon <bgardon@google.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/tdp_iter.c | 11 +++--------
>  1 file changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index c6e914c96641..4175947dc401 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -15,11 +15,6 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
>         iter->old_spte = READ_ONCE(*iter->sptep);
>  }
>
> -static gfn_t round_gfn_for_level(gfn_t gfn, int level)
> -{
> -       return gfn & KVM_HPAGE_GFN_MASK(level);
> -}
> -
>  /*
>   * Sets a TDP iterator to walk a pre-order traversal of the paging structure
>   * rooted at root_pt, starting with the walk to translate goal_gfn.
> @@ -36,7 +31,7 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
>         iter->level = root_level;
>         iter->pt_path[iter->level - 1] = root_pt;
>
> -       iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
> +       iter->gfn = iter->goal_gfn & KVM_HPAGE_GFN_MASK(iter->level);
>         tdp_iter_refresh_sptep(iter);
>
>         iter->valid = true;
> @@ -82,7 +77,7 @@ static bool try_step_down(struct tdp_iter *iter)
>
>         iter->level--;
>         iter->pt_path[iter->level - 1] = child_pt;
> -       iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
> +       iter->gfn = iter->goal_gfn & KVM_HPAGE_GFN_MASK(iter->level);
>         tdp_iter_refresh_sptep(iter);
>
>         return true;
> @@ -124,7 +119,7 @@ static bool try_step_up(struct tdp_iter *iter)
>                 return false;
>
>         iter->level++;
> -       iter->gfn = round_gfn_for_level(iter->gfn, iter->level);
> +       iter->gfn &= KVM_HPAGE_GFN_MASK(iter->level);
>         tdp_iter_refresh_sptep(iter);
>
>         return true;
> --
> 2.28.0
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/3] KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask
  2020-10-27 22:09   ` Ben Gardon
@ 2020-10-27 22:15     ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2020-10-27 22:15 UTC (permalink / raw)
  To: Ben Gardon
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, LKML

On Tue, Oct 27, 2020 at 03:09:11PM -0700, Ben Gardon wrote:
> On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > Use the bitwise NOT of KVM_HPAGE_GFN_MASK() to compute the GFN offset
> > mask instead of open coding the equivalent in a variety of locations.
> 
> I don't see a "no functional change expected" note on this patch as
> was on the previous one, but I don't think this represents any
> functional change.

Ah, yeah, I meant to call out in the cover letter that nothing in this series
generates a functional difference, e.g. objdump of kvm/kvm-intel is identical
from start to finish.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing hugepage GFN mask
  2020-10-27 21:42 ` [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing " Sean Christopherson
@ 2020-10-27 22:17   ` Ben Gardon
  2020-10-27 22:46     ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Ben Gardon @ 2020-10-27 22:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, LKML

On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Add a helper to compute the GFN mask given a hugepage level; KVM is
> accumulating quite a few users with the addition of the TDP MMU.
>
> Note, gcc is clever enough to use a single NEG instruction instead of
> SUB+NOT, i.e. use the more common "~(level - 1)" pattern instead of
> round_gfn_for_level()'s direct two's complement trickery.

As far as I can tell this patch has no functional changes intended.
Please correct me if that's not correct.

>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/mmu/mmu.c          | 2 +-
>  arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
>  arch/x86/kvm/mmu/tdp_iter.c     | 2 +-
>  4 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d44858b69353..6ea046415f29 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -119,6 +119,7 @@
>  #define KVM_HPAGE_SIZE(x)      (1UL << KVM_HPAGE_SHIFT(x))
>  #define KVM_HPAGE_MASK(x)      (~(KVM_HPAGE_SIZE(x) - 1))
>  #define KVM_PAGES_PER_HPAGE(x) (KVM_HPAGE_SIZE(x) / PAGE_SIZE)
> +#define KVM_HPAGE_GFN_MASK(x)  (~(KVM_PAGES_PER_HPAGE(x) - 1))

NIT: I know x follows the convention on adjacent macros, but this
would be clearer to me if x was changed to level. (Probably for all
the macros in this block)

>
>  static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
>  {
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 17587f496ec7..3bfc7ee44e51 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2886,7 +2886,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>                         disallowed_hugepage_adjust(*it.sptep, gfn, it.level,
>                                                    &pfn, &level);
>
> -               base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
> +               base_gfn = gfn & KVM_HPAGE_GFN_MASK(it.level);
>                 if (it.level == level)
>                         break;
>
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 50e268eb8e1a..76ee36f2afd2 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -698,7 +698,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
>                         disallowed_hugepage_adjust(*it.sptep, gw->gfn, it.level,
>                                                    &pfn, &level);
>
> -               base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
> +               base_gfn = gw->gfn & KVM_HPAGE_GFN_MASK(it.level);
>                 if (it.level == level)
>                         break;
>
> @@ -751,7 +751,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
>                               bool *write_fault_to_shadow_pgtable)
>  {
>         int level;
> -       gfn_t mask = ~(KVM_PAGES_PER_HPAGE(walker->level) - 1);
> +       gfn_t mask = KVM_HPAGE_GFN_MASK(walker->level);
>         bool self_changed = false;
>
>         if (!(walker->pte_access & ACC_WRITE_MASK ||
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index 87b7e16911db..c6e914c96641 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -17,7 +17,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
>
>  static gfn_t round_gfn_for_level(gfn_t gfn, int level)
>  {
> -       return gfn & -KVM_PAGES_PER_HPAGE(level);
> +       return gfn & KVM_HPAGE_GFN_MASK(level);
>  }
>
>  /*
> --
> 2.28.0
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing hugepage GFN mask
  2020-10-27 22:17   ` Ben Gardon
@ 2020-10-27 22:46     ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2020-10-27 22:46 UTC (permalink / raw)
  To: Ben Gardon
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, LKML

On Tue, Oct 27, 2020 at 03:17:40PM -0700, Ben Gardon wrote:
> On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > Add a helper to compute the GFN mask given a hugepage level; KVM is
> > accumulating quite a few users with the addition of the TDP MMU.
> >
> > Note, gcc is clever enough to use a single NEG instruction instead of
> > SUB+NOT, i.e. use the more common "~(level - 1)" pattern instead of
> > round_gfn_for_level()'s direct two's complement trickery.
> 
> As far as I can tell this patch has no functional changes intended.
> Please correct me if that's not correct.

Correct. :-)

> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> Reviewed-by: Ben Gardon <bgardon@google.com>
> 
> > ---
> >  arch/x86/include/asm/kvm_host.h | 1 +
> >  arch/x86/kvm/mmu/mmu.c          | 2 +-
> >  arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
> >  arch/x86/kvm/mmu/tdp_iter.c     | 2 +-
> >  4 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index d44858b69353..6ea046415f29 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -119,6 +119,7 @@
> >  #define KVM_HPAGE_SIZE(x)      (1UL << KVM_HPAGE_SHIFT(x))
> >  #define KVM_HPAGE_MASK(x)      (~(KVM_HPAGE_SIZE(x) - 1))
> >  #define KVM_PAGES_PER_HPAGE(x) (KVM_HPAGE_SIZE(x) / PAGE_SIZE)
> > +#define KVM_HPAGE_GFN_MASK(x)  (~(KVM_PAGES_PER_HPAGE(x) - 1))
> 
> NIT: I know x follows the convention on adjacent macros, but this
> would be clearer to me if x was changed to level. (Probably for all
> the macros in this block)

Agreed.  I'll spin a v2 and opportunistically change them all to "level"
in this patch.  I'll also add "No functional change intended™." to patches
1 and 3.

> >  static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
> >  {

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask
  2020-10-27 21:42 [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Sean Christopherson
                   ` (2 preceding siblings ...)
  2020-10-27 21:43 ` [PATCH 3/3] KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask Sean Christopherson
@ 2020-10-28 15:01 ` Paolo Bonzini
       [not found]   ` <20201028152948.GA7584@linux.intel.com>
  3 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2020-10-28 15:01 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Ben Gardon

On 27/10/20 22:42, Sean Christopherson wrote:
> Add a macro, which is probably long overdue, to replace open coded
> variants of "~(KVM_PAGES_PER_HPAGE(level) - 1)".  The straw that broke the
> camel's back is the TDP MMU's round_gfn_for_level(), which goes straight
> for the optimized approach of using NEG instead of SUB+NOT (gcc uses NEG
> for both).  The use of '-(...)' made me do a double take (more like a
> quadruple take) when reading the TDP MMU code as my eyes/brain have been
> heavily trained to look for the more common '~(... - 1)'.

The upside is that a "how many" macro such as KVM_PAGES_PER_HPAGE will
have only one definition that makes sense.  With a "mask" macro you
never know if the 1s are to the left or to the right.  That is, does
"gfn & KVM_HPAGE_GFN_MASK(x)" return the first gfn within the huge page
or the index of the page within the huge page?

This is actually a problem with this series; see this line in patch 3:

-	mask = KVM_PAGES_PER_HPAGE(level) - 1;
+	mask = ~KVM_HPAGE_GFN_MASK(level);
        ^^^^                  ^^^^

So it's a mask, but not _that_ mask, _another_ mask. :)  That's why I
don't really like "mask" macros; now the other question is how to
express bit masking with "how many" macros.
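
To make the ambiguity concrete, here is a small sketch (the macro names and
values are made up purely for illustration, not proposed KVM code) of the two
complementary masks that "gfn & mask" could plausibly mean at a given level:

  /* For a 2MB page, the GFN-granular page count is 512 (0x200). */
  #define PAGES_PER_HPAGE		512ULL

  /* 1s on the left: clears the offset, yields the first GFN of the huge page. */
  #define HPAGE_GFN_BASE_MASK	(~(PAGES_PER_HPAGE - 1))	/* ...fffffe00 */

  /* 1s on the right: keeps the offset, yields the index within the huge page. */
  #define HPAGE_GFN_OFFSET_MASK	(PAGES_PER_HPAGE - 1)		/* 0x1ff */

A bare "gfn & KVM_HPAGE_GFN_MASK(level)" doesn't say which of the two is
intended without going back to the definition.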

First of all, I think we all agree that when reading "x & how_many()" we
assume (or we check first thing of all) that the right side is a power
of two.  I like "x & -y" because---even if your eyes have trouble
distinguishing "-" from "~"---it's clearly not "x & (y-1)" and therefore
the only sensible operation that the AND can do "clear everything to the
right".

Now I realize that maybe my obsession for bit twiddling tricks is not
shared with everyone, and of course if you're debugging it you have to
look closer and check if it's really "x & -y" or "x & ~y", but at least
in normal cursory code reading that's how it works for me.


Paolo


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask
       [not found]   ` <20201028152948.GA7584@linux.intel.com>
@ 2020-10-29  7:08     ` Paolo Bonzini
  2020-11-05  0:44       ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2020-10-29  7:08 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Ben Gardon

On 28/10/20 16:29, Sean Christopherson wrote:
> The naming and usage also aligns with the kernel, which defines PAGE, PMD and
> PUD masks, and has near identical usage patterns.
> 
>   #define PAGE_SIZE               (_AC(1,UL) << PAGE_SHIFT)
>   #define PAGE_MASK               (~(PAGE_SIZE-1))
> 
>   #define PMD_PAGE_SIZE           (_AC(1, UL) << PMD_SHIFT)
>   #define PMD_PAGE_MASK           (~(PMD_PAGE_SIZE-1))
> 
>   #define PUD_PAGE_SIZE           (_AC(1, UL) << PUD_SHIFT)
>   #define PUD_PAGE_MASK           (~(PUD_PAGE_SIZE-1))

Well, PAGE_MASK is also one of my pet peeves for Linux.  At least I am
consistent. :)

>> and of course if you're debugging it you have to
>> look closer and check if it's really "x & -y" or "x & ~y", but at least
>> in normal cursory code reading that's how it works for me.
> 
> IMO, "x & -y" has a higher barrier to entry, especially when the kernel's page
masks use "x & ~(y - 1)".  But, my opinion is definitely colored by my
> inability to read two's-complement on the fly.

Fair enough.  What about having instead

#define KVM_HPAGE_GFN_BASE(gfn, level)  \
   ((gfn) & ~(KVM_PAGES_PER_HPAGE(level) - 1))
#define KVM_HPAGE_GFN_INDEX(gfn, level)  \
   ((gfn) & (KVM_PAGES_PER_HPAGE(level) - 1))

?

Paolo


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask
  2020-10-29  7:08     ` Paolo Bonzini
@ 2020-11-05  0:44       ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2020-11-05  0:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Ben Gardon

On Thu, Oct 29, 2020 at 08:08:48AM +0100, Paolo Bonzini wrote:
> On 28/10/20 16:29, Sean Christopherson wrote:
> > The naming and usage also aligns with the kernel, which defines PAGE, PMD and
> > PUD masks, and has near identical usage patterns.
> > 
> >   #define PAGE_SIZE               (_AC(1,UL) << PAGE_SHIFT)
> >   #define PAGE_MASK               (~(PAGE_SIZE-1))
> > 
> >   #define PMD_PAGE_SIZE           (_AC(1, UL) << PMD_SHIFT)
> >   #define PMD_PAGE_MASK           (~(PMD_PAGE_SIZE-1))
> > 
> >   #define PUD_PAGE_SIZE           (_AC(1, UL) << PUD_SHIFT)
> >   #define PUD_PAGE_MASK           (~(PUD_PAGE_SIZE-1))
> 
> Well, PAGE_MASK is also one of my pet peeves for Linux.  At least I am
> consistent. :)
> 
> >> and of course if you're debugging it you have to
> >> look closer and check if it's really "x & -y" or "x & ~y", but at least
> >> in normal cursory code reading that's how it works for me.
> > 
> > IMO, "x & -y" has a higher barrier to entry, especially when the kernel's page
> > masks use "x & ~(y - 1)".  But, my opinion is definitely colored by my
> > inability to read two's-complement on the fly.
> 
> Fair enough.  What about having instead
> 
> #define KVM_HPAGE_GFN_BASE(gfn, level)  \
>    ((gfn) & ~(KVM_PAGES_PER_HPAGE(level) - 1))
> #define KVM_HPAGE_GFN_INDEX(gfn, level)  \
>    ((gfn) & (KVM_PAGES_PER_HPAGE(level) - 1))
> 
> ?

Hmm, not awful?  What about OFFSET instead of INDEX, to pair with page offset?
I don't particularly love either one, but I can't think of anything better.
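
Purely as a sketch of how the OFFSET spelling could read (not what was
ultimately merged), the pair of helpers and a patch-3 style call site might
look like:

  #define KVM_HPAGE_GFN_BASE(gfn, level)	\
  	((gfn) & ~(KVM_PAGES_PER_HPAGE(level) - 1))
  #define KVM_HPAGE_GFN_OFFSET(gfn, level)	\
  	((gfn) & (KVM_PAGES_PER_HPAGE(level) - 1))

  /* e.g. the alignment WARN_ON() in __handle_changed_spte() would become: */
  WARN_ON(KVM_HPAGE_GFN_OFFSET(gfn, level));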

^ permalink raw reply	[flat|nested] 12+ messages in thread
