From: "Radim Krčmář" <rkrcmar@redhat.com>
To: Wanpeng Li <kernellwp@gmail.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Paolo Bonzini <pbonzini@redhat.com>,
Liran Alon <liran.alon@oracle.com>
Subject: Re: [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall
Date: Mon, 10 Jun 2019 16:17:17 +0200 [thread overview]
Message-ID: <20190610141717.GA6604@flask> (raw)
In-Reply-To: <1559178307-6835-3-git-send-email-wanpengli@tencent.com>
2019-05-30 09:05+0800, Wanpeng Li:
> From: Wanpeng Li <wanpengli@tencent.com>
>
> The target vCPUs are in runnable state after vcpu_kick and suitable
> as a yield target. This patch implements the sched yield hypercall.
>
> A 17% performance increase in the ebizzy benchmark can be observed in
> an over-subscribed environment (w/ kvm-pv-tlb disabled, testing the
> TLB-flush call-function IPI-many path, since call-function is not
> easily triggered by userspace workloads).
>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Liran Alon <liran.alon@oracle.com>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> ---
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
> kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
> }
>
> +static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> +{
> + struct kvm_vcpu *target = NULL;
> + struct kvm_apic_map *map = NULL;
> +
> + rcu_read_lock();
> + map = rcu_dereference(kvm->arch.apic_map);
> +
> + if (unlikely(!map) || dest_id > map->max_apic_id)
> + goto out;
> +
> + if (map->phys_map[dest_id]->vcpu) {
This should first check that map->phys_map[dest_id] is non-NULL;
otherwise a yield to an unpopulated APIC ID dereferences a NULL pointer.
> + target = map->phys_map[dest_id]->vcpu;
> + rcu_read_unlock();
> + kvm_vcpu_yield_to(target);
> + }
> +
> +out:
> + if (!target)
> + rcu_read_unlock();
Also, I find the following logic clearer:

	{
		struct kvm_vcpu *target = NULL;
		struct kvm_apic_map *map;

		rcu_read_lock();
		map = rcu_dereference(kvm->arch.apic_map);

		if (likely(map) && dest_id <= map->max_apic_id &&
		    map->phys_map[dest_id])
			target = map->phys_map[dest_id]->vcpu;

		rcu_read_unlock();

		if (target)
			kvm_vcpu_yield_to(target);
	}
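The key point both versions converge on is the extra NULL check on the
phys_map entry itself: phys_map[] is sparse, so an in-range dest_id can
still map to a NULL slot. A minimal userspace sketch of that lookup
pattern (hypothetical struct names, not the kernel's kvm_apic_map, and
without the RCU locking) illustrates the checks in order:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel types; phys_map[] is sparse,
 * so unpopulated APIC IDs hold NULL pointers. */
struct vcpu {
	int id;
};

struct apic_map {
	unsigned long max_apic_id;
	struct vcpu *phys_map[8];
};

/* Mirrors the checks in the suggested version above: map present,
 * dest_id in range, and the entry itself non-NULL before it is
 * dereferenced. */
static struct vcpu *lookup_yield_target(struct apic_map *map,
					unsigned long dest_id)
{
	struct vcpu *target = NULL;

	if (map && dest_id <= map->max_apic_id && map->phys_map[dest_id])
		target = map->phys_map[dest_id];

	return target;
}
```

Dropping the third condition reproduces the bug under review: a guest
yielding to an APIC ID that is in range but unpopulated would crash on
the NULL entry.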
thanks.
Thread overview: 18+ messages
2019-05-30 1:05 [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
2019-05-30 1:05 ` [PATCH v3 1/3] KVM: X86: " Wanpeng Li
2019-05-30 1:05 ` [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall Wanpeng Li
2019-06-10 14:17 ` Radim Krčmář [this message]
2019-06-11 8:47 ` Wanpeng Li
2019-05-30 1:05 ` [PATCH v3 3/3] KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest Wanpeng Li
2019-06-10 5:58 ` [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
2019-06-10 14:34 ` Radim Krčmář
2019-06-11 1:11 ` Sean Christopherson
2019-06-11 1:45 ` Wanpeng Li
2019-06-11 1:48 ` Nadav Amit
2019-06-11 10:02 ` Wanpeng Li
2019-06-11 16:57 ` Nadav Amit
2019-06-12 1:18 ` Wanpeng Li
2019-06-12 1:37 ` Nadav Amit
2019-06-28 9:12 ` Wanpeng Li
2019-06-28 9:18 ` Wanpeng Li
2019-06-11 10:26 ` Wanpeng Li