linux-kernel.vger.kernel.org archive mirror
From: Tianyu Lan <Tianyu.Lan@microsoft.com>
To: "Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>,
	Tianyu Lan <Tianyu.Lan@microsoft.com>
Cc: KY Srinivasan <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	"rkrcmar@redhat.com" <rkrcmar@redhat.com>,
	"devel@linuxdriverproject.org" <devel@linuxdriverproject.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>
Subject: Re: [PATCH V2 1/5] X86/Hyper-V: Add flush HvFlushGuestPhysicalAddressSpace hypercall support
Date: Wed, 11 Jul 2018 06:01:16 +0000
Message-ID: <c41ce53f-cf6a-2b0e-4a9c-da01839094c1@microsoft.com>
In-Reply-To: <SN6PR2101MB11202EAE0D5C623EC0CF8273DC5B0@SN6PR2101MB1120.namprd21.prod.outlook.com>

Hi Michael:
	Thanks for your review.

On 7/11/2018 5:29 AM, Michael Kelley (EOSG) wrote:
> From: Tianyu Lan <Tianyu.Lan@microsoft.com> Monday, July 9, 2018 2:03 AM
>> Hyper-V provides the paravirtualized hypercall
>> HvFlushGuestPhysicalAddressSpace to flush nested VM address space
>> mappings in the L1 hypervisor, reducing the overhead of flushing the
>> EPT TLB across vCPUs. This patch implements it.
>>
>> Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
>> ---
>>   arch/x86/hyperv/Makefile           |  2 +-
>>   arch/x86/hyperv/nested.c           | 64 ++++++++++++++++++++++++++++++++++++++
>>   arch/x86/include/asm/hyperv-tlfs.h |  8 +++++
>>   arch/x86/include/asm/mshyperv.h    |  2 ++
>>   4 files changed, 75 insertions(+), 1 deletion(-)
>>   create mode 100644 arch/x86/hyperv/nested.c
>> +#include <linux/types.h>
>> +#include <asm/hyperv-tlfs.h>
>> +#include <asm/mshyperv.h>
>> +#include <asm/tlbflush.h>
>> +
>> +int hyperv_flush_guest_mapping(u64 as)
>> +{
>> +	struct hv_guest_mapping_flush **flush_pcpu;
>> +	struct hv_guest_mapping_flush *flush;
>> +	u64 status;
>> +	unsigned long flags;
>> +	int ret = -EFAULT;
>> +
>> +	if (!hv_hypercall_pg)
>> +		goto fault;
>> +
>> +	local_irq_save(flags);
>> +
>> +	flush_pcpu = (struct hv_guest_mapping_flush **)
>> +		this_cpu_ptr(hyperv_pcpu_input_arg);
>> +
>> +	flush = *flush_pcpu;
>> +
>> +	if (unlikely(!flush)) {
>> +		local_irq_restore(flags);
>> +		goto fault;
>> +	}
>> +
>> +	flush->address_space = as;
>> +	flush->flags = 0;
>> +
>> +	status = hv_do_hypercall(HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE,
>> +				 flush, NULL);
> 
> Did you consider using a "fast" hypercall?  Unless there's some reason I'm
> not aware of, a "fast" hypercall would be perfect here as there are 16 bytes
> of input and no output. Vitaly recently added hv_do_fast_hypercall16()
> in the linux-next tree. See __send_ipi_mask() in hv_apic.c in linux-next
> for an example of usage.  With a fast hypercall, you don't need the code for
> getting the per-cpu input arg or the code for local irq save/restore, so the
> code that is left is a lot faster and simpler.
> 
> Michael
> 

Good suggestion. However, the "fast" hypercall is not yet available in 
the kvm-next branch; it currently exists only in the x86 tip repo. We 
may rework this to use the "fast" hypercall in the next kernel 
development cycle if this patchset is accepted for 4.19.
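
For reference, a rough sketch of what the rework might look like once
hv_do_fast_hypercall16() is available here. The signature is assumed
from its use in __send_ipi_mask() in linux-next (control code plus the
16 bytes of input split across two u64 register arguments); this is a
sketch under those assumptions, not the final patch:

	/* Sketch only: assumes hv_do_fast_hypercall16() from the x86 tip tree. */
	int hyperv_flush_guest_mapping(u64 as)
	{
		u64 status;

		if (!hv_hypercall_pg)
			return -EFAULT;

		/*
		 * The 16 bytes of input (address_space, flags) are passed
		 * in registers, so no per-cpu input page and no local irq
		 * save/restore are needed.
		 */
		status = hv_do_fast_hypercall16(
				HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE,
				as, 0 /* flags */);

		return (status & HV_HYPERCALL_RESULT_MASK) ? -EFAULT : 0;
	}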

>> +	local_irq_restore(flags);
>> +
>> +	if (!(status & HV_HYPERCALL_RESULT_MASK))
>> +		ret = 0;
>> +
>> +fault:
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
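
For context, the helper exported above is consumed later in this series
(patch 4/5 wires it into KVM's remote TLB flush path). A minimal,
purely hypothetical caller, just to show the intended use; the hook
name and parameter here are illustrative, not taken from that patch:

	/*
	 * Illustrative only: the EPT pointer identifies the guest
	 * physical address space whose mappings should be flushed.
	 */
	static int hv_remote_flush_tlb(u64 ept_pointer)
	{
		return hyperv_flush_guest_mapping(ept_pointer);
	}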


Thread overview: 13+ messages
2018-07-09  9:02 [PATCH V2 0/5] KVM/x86/hyper-V: Introduce PV guest address space mapping flush support Tianyu Lan
2018-07-09  9:02 ` [PATCH V2 1/5] X86/Hyper-V: Add flush HvFlushGuestPhysicalAddressSpace hypercall support Tianyu Lan
2018-07-10 21:29   ` Michael Kelley (EOSG)
2018-07-11  6:01     ` Tianyu Lan [this message]
2018-07-09  9:02 ` [PATCH V2 2/5] KVM: Add tlb remote flush callback in kvm_x86_ops Tianyu Lan
2018-07-18 11:57   ` Paolo Bonzini
2018-07-18 12:01   ` Paolo Bonzini
2018-07-18 13:25     ` Tianyu Lan
2018-07-09  9:02 ` [PATCH V2 3/5] KVM/VMX: Add identical ept table pointer check Tianyu Lan
2018-07-18 11:59   ` Paolo Bonzini
2018-07-18 13:38     ` Tianyu Lan
2018-07-09  9:02 ` [PATCH V2 4/5] KVM/x86: Add tlb_remote_flush callback support for vmx Tianyu Lan
2018-07-09  9:02 ` [PATCH V2 5/5] X86/Hyper-V: Add hyperv_nested_flush_guest_mapping ftrace support Tianyu Lan
