From: Lai Jiangshan <jiangshanlai+lkml@gmail.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	Venkatesh Srinivas <venkateshs@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>
Subject: Re: [PATCH 2/2] KVM: Guard cpusmask NULL check with CONFIG_CPUMASK_OFFSTACK
Date: Wed, 25 Aug 2021 12:05:45 +0800	[thread overview]
Message-ID: <CAJhGHyB1RjBLRLtaS80XQSTb0g35smxnBQPjEp-BwieKu1cwXw@mail.gmail.com> (raw)
In-Reply-To: <20210821000501.375978-3-seanjc@google.com>

On Sat, Aug 21, 2021 at 8:09 AM Sean Christopherson <seanjc@google.com> wrote:
>
> Check for a NULL cpumask_var_t when kicking multiple vCPUs if and only if
> cpumasks are configured to be allocated off-stack.  This is a meaningless
> optimization, e.g. avoids a TEST+Jcc and TEST+CMOV on x86, but more
> importantly helps document that the NULL check is necessary even though
> all callers pass in a local variable.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  virt/kvm/kvm_main.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 786b914db98f..82c5280dd5ce 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -247,7 +247,7 @@ static void ack_flush(void *_completed)
>
>  static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
>  {
> -       if (unlikely(!cpus))
> +       if (IS_ENABLED(CONFIG_CPUMASK_OFFSTACK) && unlikely(!cpus))
>                 cpus = cpu_online_mask;
>
>         if (cpumask_empty(cpus))
> @@ -277,6 +277,14 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
>                 if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
>                         continue;
>
> +               /*
> +                * tmp can be NULL if cpumasks are allocated off stack, as
> +                * allocation of the mask is deliberately not fatal and is
> +                * handled by falling back to kicking all online CPUs.
> +                */
> +               if (IS_ENABLED(CONFIG_CPUMASK_OFFSTACK) && !tmp)
> +                       continue;
> +

Hello, Sean

I don't think it is a good idea to reinvent cpumask_available().
You could rework the patch along the lines of the diff below, if
cpumask_available() works for you.

Thanks
Lai

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3e67c93ca403..ca043ec7ed74 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -245,9 +245,11 @@ static void ack_flush(void *_completed)
 {
 }

-static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
+static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
 {
-       if (unlikely(!cpus))
+       const struct cpumask *cpus = tmp;
+
+       if (unlikely(!cpumask_available(tmp)))
                cpus = cpu_online_mask;

        if (cpumask_empty(cpus))
@@ -278,7 +280,7 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
                if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
                        continue;

-               if (tmp != NULL && cpu != -1 && cpu != me &&
+               if (cpumask_available(tmp) && cpu != -1 && cpu != me &&
                    kvm_request_needs_ipi(vcpu, req))
                        __cpumask_set_cpu(cpu, tmp);
        }

>                 /*
>                  * Note, the vCPU could get migrated to a different pCPU at any
>                  * point after kvm_request_needs_ipi(), which could result in
> @@ -288,7 +296,7 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
>                  * were reading SPTEs _before_ any changes were finalized.  See
>                  * kvm_vcpu_kick() for more details on handling requests.
>                  */
> -               if (tmp != NULL && kvm_request_needs_ipi(vcpu, req)) {
> +               if (kvm_request_needs_ipi(vcpu, req)) {
>                         cpu = READ_ONCE(vcpu->cpu);
>                         if (cpu != -1 && cpu != me)
>                                 __cpumask_set_cpu(cpu, tmp);
> --
> 2.33.0.rc2.250.ged5fa647cd-goog
>
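
For readers without the tree handy: the fallback mentioned in the comment
above lives in the caller, which allocates the temporary cpumask with
zalloc_cpumask_var() and deliberately ignores the return value, so with
CONFIG_CPUMASK_OFFSTACK=y the mask passed in can legitimately be NULL.
A condensed sketch (modeled on kvm_make_all_cpus_request_except() in
virt/kvm/kvm_main.c; the exact argument list may differ slightly from the
tree this discussion is based on):

bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
                                      struct kvm_vcpu *except)
{
        cpumask_var_t cpus;
        bool called;

        /*
         * Allocation failure is tolerated: it can only happen with
         * CONFIG_CPUMASK_OFFSTACK=y, and a NULL mask simply makes
         * kvm_kick_many_cpus() fall back to cpu_online_mask.
         */
        zalloc_cpumask_var(&cpus, GFP_ATOMIC);

        called = kvm_make_vcpus_request_mask(kvm, req, except,
                                             /* vcpu_bitmap = */ NULL, cpus);

        free_cpumask_var(cpus);
        return called;
}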
