From: Michael Kelley <mikelley@microsoft.com>
To: vkuznets <vkuznets@redhat.com>, Roman Kagan <rkagan@virtuozzo.com>
Cc: "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"x86@kernel.org" <x86@kernel.org>,
	KY Srinivasan <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Sasha Levin <sashal@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: RE: [PATCH] x86/hyper-v: micro-optimize send_ipi_one case
Date: Fri, 25 Oct 2019 16:23:22 +0000
Message-ID: <DM5PR21MB01378544207EB796E178B2CAD7650@DM5PR21MB0137.namprd21.prod.outlook.com>
In-Reply-To: <87r231xfyg.fsf@vitty.brq.redhat.com>

From: Vitaly Kuznetsov <vkuznets@redhat.com> Sent: Friday, October 25, 2019 3:44 AM
> 
> Roman Kagan <rkagan@virtuozzo.com> writes:
> 
> > On Thu, Oct 24, 2019 at 05:21:52PM +0200, Vitaly Kuznetsov wrote:
> >> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> >> With a 2 CPU guest on WS2019 I'm seeing a minor (about 3%, 8043 -> 7761 CPU
> >> cycles) improvement with the smp_call_function_single() loop benchmark. The
> >> optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> >> important for the PV spinlock kick.
> >>
> >> I was also wondering if it would make sense to switch to using the regular
> >> APIC IPI send for the CPU > 64 case but no, it is twice as expensive (12650 CPU
> >> cycles for the __send_ipi_mask_ex() call, 26000 for orig_apic.send_IPI(cpu,
> >> vector)).
> >
> > Is it with APICv or emulated apic?
> 
> That's actually a good question. Yesterday I was testing this on WS2019
> host with Xeon e5-2420 v2 (Ivy Bridge EN) which I *think* should already
> support APICv - but I'm not sure and ark.intel.com is not
> helpful. Today, I decided to re-test on something more modern and I got
> WS2016 host with E5-2667 v4 (Broadwell) and the results are:
> 
> 'Ex' hypercall: 18000 cycles
> orig_apic.send_IPI(): 46000 cycles
> 
> I'm, however, just assuming that Hyper-V uses APICv when it's available
> and have no idea how to check from within the guest. I'm also not sure
> if WS2019 is so much faster or if there are other differences on these
> hosts which matter.
> 

On Hyper-V 2016 and 2019 (not sure about 2012 R2), and when the guest is
using xAPIC (not x2APIC), you can tell within the guest whether Intel APICv is
enabled based on the setting of HV_X64_APIC_ACCESS_RECOMMENDED. If
this flag is set, then APICv is not present, because Hyper-V only recommends
using the synthetic MSRs when APICv is not present. Conversely, if the flag is
not set, then APICv is present.
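
If it helps, here is a minimal user-space sketch of that check. It is untested
and assumes the guest is running on Hyper-V with the hypervisor CPUID leaves
exposed, and that HV_X64_APIC_ACCESS_RECOMMENDED is bit 3 of EAX in leaf
0x40000004 (as in the kernel's hyperv-tlfs.h); the constants below are just
local copies of those values:

#include <stdio.h>
#include <cpuid.h>

/* Hyper-V CPUID leaves; names mirror the kernel's, values from the TLFS */
#define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS	0x40000000
#define HYPERV_CPUID_ENLIGHTMENT_INFO		0x40000004
#define HV_X64_APIC_ACCESS_RECOMMENDED		(1u << 3)

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* EAX of leaf 0x40000000 reports the highest hypervisor leaf */
	__cpuid(HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS, eax, ebx, ecx, edx);
	if (eax < HYPERV_CPUID_ENLIGHTMENT_INFO) {
		printf("Hyper-V enlightenment info leaf not available\n");
		return 1;
	}

	__cpuid(HYPERV_CPUID_ENLIGHTMENT_INFO, eax, ebx, ecx, edx);
	if (eax & HV_X64_APIC_ACCESS_RECOMMENDED)
		printf("APIC access MSRs recommended: APICv likely not in use\n");
	else
		printf("APIC access MSRs not recommended: APICv likely in use\n");

	return 0;
}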

FWIW, when APICv is present in the hardware, you can disable its use in a
particular VM by using PowerShell in the host to set compatibility mode on
the VM:

Set-VMProcessor <vmname> -CompatibilityForMigrationEnabled $true

Then when the VM is booted, HV_X64_APIC_ACCESS_RECOMMENDED
will show as set even if the hardware has APICv. This is useful for testing
the older code paths on such hardware.
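
For reference, the "older code paths" here are the synthetic-MSR APIC
accessors, which the guest only installs when the flag is set. A simplified,
paraphrased sketch of that logic (loosely modeled on arch/x86/hyperv/hv_apic.c
from memory, so the exact names may not match the current source):

	if (ms_hyperv.hints & HV_X64_APIC_ACCESS_RECOMMENDED) {
		/* No APICv: fall back to the synthetic-MSR accessors */
		pr_info("Hyper-V: Using MSR based APIC access\n");
		apic_set_eoi_write(hv_apic_eoi_write);
		apic->read      = hv_apic_read;
		apic->write     = hv_apic_write;
		apic->icr_write = hv_apic_icr_write;
		apic->icr_read  = hv_apic_icr_read;
	}

After booting the VM with compatibility mode set, the user-space check above
should also report the flag as set, even on APICv-capable hardware.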

Michael

Thread overview: 7+ messages
2019-10-24 15:21 [PATCH] x86/hyper-v: micro-optimize send_ipi_one case Vitaly Kuznetsov
2019-10-24 16:32 ` Roman Kagan
2019-10-24 22:38   ` Thomas Gleixner
2019-10-25  9:41     ` Roman Kagan
2019-10-25 10:44   ` Vitaly Kuznetsov
2019-10-25 16:23     ` Michael Kelley [this message]
2019-10-24 16:47 ` Joe Perches
