From: Paolo Bonzini <pbonzini@redhat.com>
To: "Roman Kagan" <rkagan@virtuozzo.com>,
	"Vitaly Kuznetsov" <vkuznets@redhat.com>,
	kvm@vger.kernel.org, "Radim Krčmář" <rkrcmar@redhat.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	"Haiyang Zhang" <haiyangz@microsoft.com>,
	"Stephen Hemminger" <sthemmin@microsoft.com>,
	"Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>,
	"Mohammed Gamal" <mmorsy@redhat.com>,
	"Cathy Avery" <cavery@redhat.com>,
	"Wanpeng Li" <wanpeng.li@hotmail.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 7/7] KVM: x86: hyperv: implement PV IPI send hypercalls
Date: Mon, 1 Oct 2018 18:01:19 +0200	[thread overview]
Message-ID: <51ff55e0-9d8d-73be-e0e7-f8580bc0206e@redhat.com> (raw)
In-Reply-To: <20180927110711.GE4186@rkaganb.sw.ru>

On 27/09/2018 13:07, Roman Kagan wrote:
> On Wed, Sep 26, 2018 at 07:02:59PM +0200, Vitaly Kuznetsov wrote:
>> Using a hypercall for sending IPIs is faster because it allows specifying
>> any number of vCPUs (even > 64 with a sparse CPU set) while the whole
>> procedure takes only one VMEXIT.
>>
>> The current Hyper-V TLFS (v5.0b) claims that the HvCallSendSyntheticClusterIpi
>> hypercall can't be 'fast' (passing parameters through registers), but
>> apparently this is not true: Windows always uses it as 'fast', so we need
>> to support that.
>>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>>  Documentation/virtual/kvm/api.txt |   7 ++
>>  arch/x86/kvm/hyperv.c             | 115 ++++++++++++++++++++++++++++++
>>  arch/x86/kvm/trace.h              |  42 +++++++++++
>>  arch/x86/kvm/x86.c                |   1 +
>>  include/uapi/linux/kvm.h          |   1 +
>>  5 files changed, 166 insertions(+)
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 647f94128a85..1659b75d577d 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -4772,3 +4772,10 @@ CPU when the exception is taken. If this virtual SError is taken to EL1 using
>>  AArch64, this value will be reported in the ISS field of ESR_ELx.
>>  
>>  See KVM_CAP_VCPU_EVENTS for more details.
>> +8.20 KVM_CAP_HYPERV_SEND_IPI
>> +
>> +Architectures: x86
>> +
>> +This capability indicates that KVM supports paravirtualized Hyper-V IPI send
>> +hypercalls:
>> +HvCallSendSyntheticClusterIpi, HvCallSendSyntheticClusterIpiEx.
>> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
>> index cc0535a078f7..4b4a6d015ade 100644
>> --- a/arch/x86/kvm/hyperv.c
>> +++ b/arch/x86/kvm/hyperv.c
>> @@ -1405,6 +1405,107 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa,
>>  		((u64)rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET);
>>  }
>>  
>> +static u64 kvm_hv_send_ipi(struct kvm_vcpu *current_vcpu, u64 ingpa, u64 outgpa,
>> +			   bool ex, bool fast)
>> +{
>> +	struct kvm *kvm = current_vcpu->kvm;
>> +	struct kvm_hv *hv = &kvm->arch.hyperv;
>> +	struct hv_send_ipi_ex send_ipi_ex;
>> +	struct hv_send_ipi send_ipi;
>> +	struct kvm_vcpu *vcpu;
>> +	unsigned long valid_bank_mask;
>> +	u64 sparse_banks[64];
>> +	int sparse_banks_len, bank, i, sbank;
>> +	struct kvm_lapic_irq irq = {.delivery_mode = APIC_DM_FIXED};
>> +	bool all_cpus;
>> +
>> +	if (!ex) {
>> +		if (!fast) {
>> +			if (unlikely(kvm_read_guest(kvm, ingpa, &send_ipi,
>> +						    sizeof(send_ipi))))
>> +				return HV_STATUS_INVALID_HYPERCALL_INPUT;
>> +			sparse_banks[0] = send_ipi.cpu_mask;
>> +			irq.vector = send_ipi.vector;
>> +		} else {
>> +			/* 'reserved' part of hv_send_ipi should be 0 */
>> +			if (unlikely(ingpa >> 32 != 0))
>> +				return HV_STATUS_INVALID_HYPERCALL_INPUT;
>> +			sparse_banks[0] = outgpa;
>> +			irq.vector = (u32)ingpa;
>> +		}
>> +		all_cpus = false;
>> +		valid_bank_mask = BIT_ULL(0);
>> +
>> +		trace_kvm_hv_send_ipi(irq.vector, sparse_banks[0]);
>> +	} else {
>> +		if (unlikely(kvm_read_guest(kvm, ingpa, &send_ipi_ex,
>> +					    sizeof(send_ipi_ex))))
>> +			return HV_STATUS_INVALID_HYPERCALL_INPUT;
>> +
>> +		trace_kvm_hv_send_ipi_ex(send_ipi_ex.vector,
>> +					 send_ipi_ex.vp_set.format,
>> +					 send_ipi_ex.vp_set.valid_bank_mask);
>> +
>> +		irq.vector = send_ipi_ex.vector;
>> +		valid_bank_mask = send_ipi_ex.vp_set.valid_bank_mask;
>> +		sparse_banks_len = bitmap_weight(&valid_bank_mask, 64) *
>> +			sizeof(sparse_banks[0]);
>> +
>> +		all_cpus = send_ipi_ex.vp_set.format == HV_GENERIC_SET_ALL;
>> +
>> +		if (!sparse_banks_len)
>> +			goto ret_success;
>> +
>> +		if (!all_cpus &&
>> +		    kvm_read_guest(kvm,
>> +				   ingpa + offsetof(struct hv_send_ipi_ex,
>> +						    vp_set.bank_contents),
>> +				   sparse_banks,
>> +				   sparse_banks_len))
>> +			return HV_STATUS_INVALID_HYPERCALL_INPUT;
>> +	}
>> +
>> +	if ((irq.vector < HV_IPI_LOW_VECTOR) ||
>> +	    (irq.vector > HV_IPI_HIGH_VECTOR))
>> +		return HV_STATUS_INVALID_HYPERCALL_INPUT;
>> +
>> +	if (all_cpus || atomic_read(&hv->num_mismatched_vp_indexes)) {
>> +		kvm_for_each_vcpu(i, vcpu, kvm) {
>> +			if (all_cpus || hv_vcpu_in_sparse_set(
>> +				    &vcpu->arch.hyperv, sparse_banks,
>> +				    valid_bank_mask)) {
>> +				/* We fail only when APIC is disabled */
>> +				kvm_apic_set_irq(vcpu, &irq, NULL);
>> +			}
>> +		}
>> +		goto ret_success;
>> +	}
>> +
>> +	/*
>> +	 * num_mismatched_vp_indexes is zero so every vcpu has
>> +	 * vp_index == vcpu_idx.
>> +	 */
>> +	sbank = 0;
>> +	for_each_set_bit(bank, (unsigned long *)&valid_bank_mask, 64) {
>> +		for_each_set_bit(i, (unsigned long *)&sparse_banks[sbank], 64) {
>> +			u32 vp_index = bank * 64 + i;
>> +			struct kvm_vcpu *vcpu =
>> +				get_vcpu_by_vpidx(kvm, vp_index);
>> +
>> +			/* Unknown vCPU specified */
>> +			if (!vcpu)
>> +				continue;
>> +
>> +			/* We fail only when APIC is disabled */
>> +			kvm_apic_set_irq(vcpu, &irq, NULL);
>> +		}
>> +		sbank++;
>> +	}
>> +
>> +ret_success:
>> +	return HV_STATUS_SUCCESS;
>> +}
>> +
> 
> I must say that now it looks even more tempting to follow the same
> pattern as your kvm_hv_flush_tlb: define a function that would call
> kvm_apic_set_irq() on all vcpus in a mask (optimizing the all-set case
> with a NULL mask), and make kvm_hv_send_ipi perform the same hv_vp_set
> -> vcpu_mask transformation followed by calling into that function.


It would perhaps be cleaner, but really kvm_apic_set_irq is as efficient
as it can be, since it takes the destination vcpu directly.
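
For reference, the helper Roman describes might look roughly like the
sketch below (illustrative only: the function name and signature are made
up here, and the caller would first convert the hv_vp_set into a
vcpu_mask, with NULL meaning "all vCPUs"):

static void kvm_hv_send_ipi_to_many(struct kvm *kvm, u32 vector,
				    unsigned long *vcpu_mask)
{
	struct kvm_lapic_irq irq = {
		.delivery_mode = APIC_DM_FIXED,
		.vector = vector,
	};
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		/* NULL mask means "send to every vCPU" */
		if (vcpu_mask && !test_bit(i, vcpu_mask))
			continue;

		/* We fail only when APIC is disabled */
		kvm_apic_set_irq(vcpu, &irq, NULL);
	}
}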

The code duplication for walking the sparse set is a bit ugly; perhaps
that could be changed to use an iterator macro.
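
For illustration, such an iterator macro could look roughly like the
sketch below (nothing like it exists in the tree; the name and shape are
made up, and the arguments are evaluated more than once):

#define for_each_sparse_bank(bank, sbank, valid_bank_mask)		\
	for ((sbank) = -1, (bank) = 0; (bank) < 64; (bank)++)		\
		if (!((valid_bank_mask) & BIT_ULL(bank)))		\
			continue;					\
		else if (((sbank)++, 1))

It keeps 'sbank' (the index into sparse_banks[]) in step with the set
bits of valid_bank_mask, so kvm_hv_flush_tlb() and kvm_hv_send_ipi()
could share the outer walk, e.g.:

	for_each_sparse_bank(bank, sbank, valid_bank_mask) {
		for_each_set_bit(i, (unsigned long *)&sparse_banks[sbank], 64) {
			struct kvm_vcpu *vcpu =
				get_vcpu_by_vpidx(kvm, bank * 64 + i);

			/* Unknown vCPU specified */
			if (!vcpu)
				continue;

			/* We fail only when APIC is disabled */
			kvm_apic_set_irq(vcpu, &irq, NULL);
		}
	}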

Paolo

Thread overview: 21+ messages

2018-09-26 17:02 [PATCH v6 0/7] KVM: x86: hyperv: PV IPI support for Windows guests Vitaly Kuznetsov
2018-09-26 17:02 ` [PATCH v6 1/7] KVM: x86: hyperv: enforce vp_index < KVM_MAX_VCPUS Vitaly Kuznetsov
2018-09-26 17:02 ` [PATCH v6 2/7] KVM: x86: hyperv: optimize 'all cpus' case in kvm_hv_flush_tlb() Vitaly Kuznetsov
2018-09-26 17:02 ` [PATCH v6 3/7] KVM: x86: hyperv: consistently use 'hv_vcpu' for 'struct kvm_vcpu_hv' variables Vitaly Kuznetsov
2018-09-27  7:49   ` Roman Kagan
2018-09-26 17:02 ` [PATCH v6 4/7] KVM: x86: hyperv: keep track of mismatched VP indexes Vitaly Kuznetsov
2018-09-27  7:59   ` Roman Kagan
2018-09-27  9:17     ` Vitaly Kuznetsov
2018-10-01 15:48       ` Paolo Bonzini
2018-10-01 15:54         ` Roman Kagan
2018-10-01 15:57           ` Roman Kagan
2018-09-26 17:02 ` [PATCH v6 5/7] KVM: x86: hyperv: valid_bank_mask should be 'u64' Vitaly Kuznetsov
2018-09-27  8:01   ` Roman Kagan
2018-09-26 17:02 ` [PATCH v6 6/7] KVM: x86: hyperv: optimize kvm_hv_flush_tlb() for vp_index == vcpu_idx case Vitaly Kuznetsov
2018-09-27  9:42   ` Roman Kagan
2018-09-26 17:02 ` [PATCH v6 7/7] KVM: x86: hyperv: implement PV IPI send hypercalls Vitaly Kuznetsov
2018-09-27 11:07   ` Roman Kagan
2018-10-01 16:01     ` Paolo Bonzini [this message]
2018-10-01 16:20       ` Vitaly Kuznetsov
2018-10-01 16:21         ` Paolo Bonzini
2018-10-01 16:41           ` Vitaly Kuznetsov
