* [PATCH] KVM: x86: fix 32 bit build
@ 2021-06-16 15:50 Maxim Levitsky
From: Maxim Levitsky @ 2021-06-16 15:50 UTC (permalink / raw)
To: kvm
Cc: Joerg Roedel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Borislav Petkov, Vitaly Kuznetsov,
maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
Jim Mattson, Wanpeng Li, Paolo Bonzini, Sean Christopherson,
Maxim Levitsky
Now that kvm->stat.nx_lpage_splits is a 64-bit value, use DIV_ROUND_UP_ULL
when dividing it; plain DIV_ROUND_UP on a u64 operand breaks the 32-bit
build with an undefined reference to the 64-bit division helper.
Fixes: 7ee093d4f3f5 ("KVM: switch per-VM stats to u64")
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 720ceb0a1f5c..97372225f183 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6054,7 +6054,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
write_lock(&kvm->mmu_lock);
ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
- to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
+ to_zap = ratio ? DIV_ROUND_UP_ULL(kvm->stat.nx_lpage_splits, ratio) : 0;
for ( ; to_zap; --to_zap) {
if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
break;
--
2.26.3
* Re: [PATCH] KVM: x86: fix 32 bit build
From: Sean Christopherson @ 2021-06-16 15:59 UTC (permalink / raw)
To: Maxim Levitsky
Cc: kvm, Joerg Roedel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Borislav Petkov, Vitaly Kuznetsov,
maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
Jim Mattson, Wanpeng Li, Paolo Bonzini
On Wed, Jun 16, 2021, Maxim Levitsky wrote:
> Now that kvm->stat.nx_lpage_splits is a 64-bit value, use DIV_ROUND_UP_ULL
> when dividing it; plain DIV_ROUND_UP on a u64 operand breaks the 32-bit
> build with an undefined reference to the 64-bit division helper.
I went the "cast to an unsigned long" route. I prefer the cast approach because
to_zap is also an unsigned long, i.e. using DIV_ROUND_UP_ULL() could look like a
truncation bug. In practice, nx_lpage_splits can't exceed an unsigned long,
so it's largely a moot point; I just like the more explicit "this is doing
something odd".
https://lkml.kernel.org/r/20210615162905.2132937-1-seanjc@google.com
> Fixes: 7ee093d4f3f5 ("KVM: switch per-VM stats to u64")
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 720ceb0a1f5c..97372225f183 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6054,7 +6054,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
> write_lock(&kvm->mmu_lock);
>
> ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
> - to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
> + to_zap = ratio ? DIV_ROUND_UP_ULL(kvm->stat.nx_lpage_splits, ratio) : 0;
> for ( ; to_zap; --to_zap) {
> if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
> break;
> --
> 2.26.3
>
* Re: [PATCH] KVM: x86: fix 32 bit build
From: Maxim Levitsky @ 2021-06-16 18:59 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Joerg Roedel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Borislav Petkov, Vitaly Kuznetsov,
maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
Jim Mattson, Wanpeng Li, Paolo Bonzini
On Wed, 2021-06-16 at 15:59 +0000, Sean Christopherson wrote:
> On Wed, Jun 16, 2021, Maxim Levitsky wrote:
> > Now that kvm->stat.nx_lpage_splits is a 64-bit value, use DIV_ROUND_UP_ULL
> > when dividing it; plain DIV_ROUND_UP on a u64 operand breaks the 32-bit
> > build with an undefined reference to the 64-bit division helper.
>
> I went the "cast to an unsigned long" route. I prefer the cast approach because
> to_zap is also an unsigned long, i.e. using DIV_ROUND_UP_ULL() could look like a
> truncation bug. In practice, nx_lpage_splits can't exceed an unsigned long,
> so it's largely a moot point; I just like the more explicit "this is doing
> something odd".
>
> https://lkml.kernel.org/r/20210615162905.2132937-1-seanjc@google.com
>
> > Fixes: 7ee093d4f3f5 ("KVM: switch per-VM stats to u64")
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> > arch/x86/kvm/mmu/mmu.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 720ceb0a1f5c..97372225f183 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -6054,7 +6054,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
> > write_lock(&kvm->mmu_lock);
> >
> > ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
> > - to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
> > + to_zap = ratio ? DIV_ROUND_UP_ULL(kvm->stat.nx_lpage_splits, ratio) : 0;
> > for ( ; to_zap; --to_zap) {
> > if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
> > break;
> > --
> > 2.26.3
> >
Cool, makes sense.

I didn't notice your patch (I did look at the list, but since the subject
didn't mention the build breakage I missed it). I sent this patch just to
save someone else the time of figuring it out.

Thanks!

Best regards,
Maxim Levitsky