From: Jon Kohler <jon@nutanix.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Jon Kohler <jon@nutanix.com>, Babu Moger <babu.moger@amd.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	X86 ML <x86@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Yu-cheng Yu <yu-cheng.yu@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	Tony Luck <tony.luck@intel.com>, Uros Bizjak <ubizjak@gmail.com>,
	Petteri Aimonen <jpa@git.mail.kapsi.fi>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Kan Liang <kan.liang@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>, Fan Yang <Fan_Yang@sjtu.edu.cn>,
	Juergen Gross <jgross@suse.com>,
	Benjamin Thiel <b.thiel@posteo.de>,
	Dave Jiang <dave.jiang@intel.com>,
	Ricardo Neri <ricardo.neri-calderon@linux.intel.com>,
	Arvind Sankar <nivedita@alum.mit.edu>,
	LKML <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [PATCH v3] KVM: x86: use wrpkru directly in kvm_load_{guest|host}_xsave_state
Date: Mon, 17 May 2021 02:58:25 +0000	[thread overview]
Message-ID: <2FD095E7-5C74-4B58-953F-3195BA97ABEF@nutanix.com> (raw)
In-Reply-To: <1bde6f22-166b-8552-e7f3-5731508182ea@kernel.org>



> On May 14, 2021, at 12:46 AM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> 
> 
> On Wed, May 12, 2021, at 11:33 AM, Dave Hansen wrote:
>> On 5/12/21 12:41 AM, Peter Zijlstra wrote:
>> > On Tue, May 11, 2021 at 01:05:02PM -0400, Jon Kohler wrote:
>> >> diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
>> >> index 8d33ad80704f..5bc4df3a4c27 100644
>> >> --- a/arch/x86/include/asm/fpu/internal.h
>> >> +++ b/arch/x86/include/asm/fpu/internal.h
>> >> @@ -583,7 +583,13 @@ static inline void switch_fpu_finish(struct fpu *new_fpu)
>> >>  		if (pk)
>> >>  			pkru_val = pk->pkru;
>> >>  	}
>> >> -	__write_pkru(pkru_val);
>> >> +
>> >> +	/*
>> >> +	 * WRPKRU is relatively expensive compared to RDPKRU.
>> >> +	 * Avoid WRPKRU when it would not change the value.
>> >> +	 */
>> >> +	if (pkru_val != rdpkru())
>> >> +		wrpkru(pkru_val);
>> > Just wondering; why aren't we having that in a per-cpu variable? The
>> > usual per-cpu MSR shadow approach avoids issuing any 'special' ops
>> > entirely.
>> 
>> It could be a per-cpu variable.  When I wrote this originally I figured
>> that a rdpkru would be cheaper than a load from memory (even per-cpu
>> memory).
>> 
>> But, now that I think about it, assuming that 'pkru_val' is in %rdi, doing:
>> 
>> cmp %gs:0x1234, %rdi
>> 
>> might end up being cheaper than clobbering a *pair* of GPRs with rdpkru:
>> 
>> xor    %ecx,%ecx
>> rdpkru
>> cmp %rax, %rdi
>> 
>> I'm too lazy to go figure out what would be faster in practice, though.
>> Does anyone care?

Strictly from a profiling perspective, my observation is that the rdpkru
is pretty quick; it's the wrpkru that seems heavier under the covers, so
any speedup in rdpkru would likely go unnoticed by comparison. That
said, if this per-cpu variable would somehow get rid of the underlying
instruction and just emulate the whole thing, that might be interesting.

From an incremental change perspective, though, this patch puts
us in a better spot. Happy to take a look at future work if y’all have
some tips on top of this.
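
To make that concrete, here is a rough, untested sketch of how I read
Peter's per-cpu shadow suggestion; the variable and helper names below
are made up for illustration and don't exist in the tree (the sketch
assumes the usual percpu macros and the wrpkru() helper from
asm/special_insns.h):

static DEFINE_PER_CPU(u32, pkru_shadow);

static inline void write_pkru_cached(u32 pkru_val)
{
	/* Skip the (expensive) WRPKRU when the value is unchanged. */
	if (this_cpu_read(pkru_shadow) == pkru_val)
		return;

	this_cpu_write(pkru_shadow, pkru_val);
	wrpkru(pkru_val);
}

The obvious catch would be keeping the shadow honest when something
else writes PKRU behind its back (e.g. XRSTOR restoring PKRU state),
which is where comparing against rdpkru() has the edge.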

> 
> RDPKRU gets bonus points for being impossible to get out of sync.


Thread overview: 5+ messages
2021-05-11 17:05 [PATCH v3] KVM: x86: use wrpkru directly in kvm_load_{guest|host}_xsave_state Jon Kohler
2021-05-11 19:08 ` Dave Hansen
2021-05-12  7:41 ` Peter Zijlstra
2021-05-12 18:33   ` Dave Hansen
     [not found]     ` <1bde6f22-166b-8552-e7f3-5731508182ea@kernel.org>
2021-05-17  2:58       ` Jon Kohler [this message]
