linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/61] KVM: x86: Introduce KVM cpu caps
@ 2020-02-01 18:51 Sean Christopherson
  2020-02-01 18:51 ` [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries Sean Christopherson
                   ` (61 more replies)
  0 siblings, 62 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Introduce what is effectively a KVM-specific copy of the x86_capabilities
array in boot_cpu_data, kvm_cpu_caps.  kvm_cpu_caps is initialized by
copying boot_cpu_data.x86_capabilities before ->hardware_setup().  It is
then updated by KVM's CPUID logic (both common x86 and VMX/SVM specific)
to adjust the caps to reflect the CPU that KVM will expose to the guest.
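
For reference, a rough sketch of the shape this ends up taking (the names
kvm_cpu_caps, kvm_set_cpu_caps() and kvm_cpu_cap_has() come from this
series, but treat the snippet as illustrative rather than the exact code
in the patches):

  /* KVM's view of the host capabilities, seeded from boot_cpu_data. */
  u32 kvm_cpu_caps[NCAPINTS] __read_mostly;

  void kvm_set_cpu_caps(void)
  {
          BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
                       sizeof(boot_cpu_data.x86_capability));
          memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
                 sizeof(kvm_cpu_caps));
          /* Common x86 and VMX/SVM code then masks bits off as needed. */
  }

  static __always_inline bool kvm_cpu_cap_has(unsigned int x86_feature)
  {
          /*
           * X86_FEATURE_* encodes (word * 32 + bit), so this is a plain
           * array lookup instead of a retpoline-guarded kvm_x86_ops call.
           */
          return kvm_cpu_caps[x86_feature / 32] & BIT(x86_feature % 32);
  }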

Super cool things:
  - Kills off 8 kvm_x86_ops hooks.
  - Eliminates a retpoline from pretty much every page fault, and more
    retpolines throughout KVM.
  - Automagically handles selecting the appropriate eax/ebx/ecx/edx
    registers when updating CPUID feature bits.
  - Adds an auditing capability to double check that the function and
    index of a CPUID entry are correct during reverse CPUID lookup.

This is sort of a v2 of "KVM: x86: Purge kvm_x86_ops->*_supported()"[*],
but only a handful of the 26 patches from that series are carried forward
as is, and this series is obviously much more ambitious in scope.  And
unlike that series, there isn't a single patch in here that makes me go
"eww", and the end result is pretty awesome :-)

Quick synopsis:
  1. Refactor the KVM_GET_SUPPORTED_CPUID stack to consolidate code,
     remove crustiness, and set the stage for introducing kvm_cpu_caps.

  2. Introduce cpuid_entry_*() accessors/mutators to automatically
     handle retrieving the correct reg from a CPUID entry, and to audit
     that the entry matches the reverse CPUID lookup entry.  The
     cpuid_entry_*() helpers make moving the code from common x86 to
     vendor code much less risky (a rough sketch follows the list below).

  3. Move CPUID adjustments to vendor code in preparation for kvm_cpu_caps,
     which will be initialized at load time before the kvm_x86_ops hooks
     are ready to be used, i.e. before ->hardware_setup().

  4. Introduce kvm_cpu_caps and move all the CPUID code over to kvm_cpu_caps.

  5. Use kvm_cpu_cap_has() to kill off a bunch of ->*_supported() hooks.

  6. Additional cleanup in tangentially related areas to kill off even more
     ->*_supported() hooks.

  7. Profit!
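
As a rough illustration of (2), the accessors key off the existing
X86_FEATURE_* encoding plus KVM's reverse-CPUID table, so callers never
spell out eax/ebx/ecx/edx by hand.  The snippet below is a sketch in the
spirit of the series, not the verbatim helpers from the patches:

  static __always_inline u32 *cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
                                                  unsigned int x86_feature)
  {
          const struct cpuid_reg cpuid = reverse_cpuid[x86_feature / 32];

          /*
           * Kconfig-gated audit: the caller must hand in the entry that
           * the feature word actually maps to.
           */
          WARN_ON_ONCE(entry->function != cpuid.function ||
                       entry->index != cpuid.index);

          switch (cpuid.reg) {
          case CPUID_EAX: return &entry->eax;
          case CPUID_EBX: return &entry->ebx;
          case CPUID_ECX: return &entry->ecx;
          default:        return &entry->edx;
          }
  }

  static __always_inline bool cpuid_entry_has(struct kvm_cpuid_entry2 *entry,
                                              unsigned int x86_feature)
  {
          return *cpuid_entry_get_reg(entry, x86_feature) &
                 BIT(x86_feature % 32);
  }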

Some of (6) could maybe be moved to a different series, but there would
likely be a number of minor conflicts.  I dropped as many arbitrary cleanup
patches as I could without letting any of the ->*_supported() hooks live,
and without losing confidence in the correctness of the refactoring.

Tested by verifying the output of KVM_GET_SUPPORTED_CPUID is identical
before and after on every patch on a Haswell and Coffee Lake.  Verified
correctness when hiding features via Qemu (running this version of KVM
in L1), e.g. that UMIP is correctly emulated for L2 when it's hidden from
L1, on relevant patches.

Boot tested and ran kvm-unit-tests at key points, e.g. large page handling.

The big untested pieces are PKU, XSAVES and PT on Intel, and everything AMD.

[*] https://lkml.kernel.org/r/20200129234640.8147-1-sean.j.christopherson@intel.com

Sean Christopherson (61):
  KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries
  KVM: x86: Refactor loop around do_cpuid_func() to separate helper
  KVM: x86: Simplify handling of Centaur CPUID leafs
  KVM: x86: Clean up error handling in kvm_dev_ioctl_get_cpuid()
  KVM: x86: Check userspace CPUID array size after validating sub-leaf
  KVM: x86: Move CPUID 0xD.1 handling out of the index>0 loop
  KVM: x86: Check for CPUID 0xD.N support before validating array size
  KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf
  KVM: x86: Refactor CPUID 0xD.N sub-leaf entry creation
  KVM: x86: Clean up CPUID 0x7 sub-leaf loop
  KVM: x86: Drop the explicit @index from do_cpuid_7_mask()
  KVM: x86: Drop redundant boot cpu checks on SSBD feature bits
  KVM: x86: Consolidate CPUID array max num entries checking
  KVM: x86: Hoist loop counter and terminator to top of
    __do_cpuid_func()
  KVM: x86: Refactor CPUID 0x4 and 0x8000001d handling
  KVM: x86: Encapsulate CPUID entries and metadata in struct
  KVM: x86: Drop redundant array size check
  KVM: x86: Use common loop iterator when handling CPUID 0xD.N
  KVM: VMX: Add helpers to query Intel PT mode
  KVM: x86: Calculate the supported xcr0 mask at load time
  KVM: x86: Use supported_xcr0 to detect MPX support
  KVM: x86: Make kvm_mpx_supported() an inline function
  KVM: x86: Clear output regs for CPUID 0x14 if PT isn't exposed to
    guest
  KVM: x86: Drop explicit @func param from ->set_supported_cpuid()
  KVM: x86: Use u32 for holding CPUID register value in helpers
  KVM: x86: Introduce cpuid_entry_{get,has}() accessors
  KVM: x86: Introduce cpuid_entry_{change,set,clear}() mutators
  KVM: x86: Refactor cpuid_mask() to auto-retrieve the register
  KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups
  KVM: x86: Handle MPX CPUID adjustment in VMX code
  KVM: x86: Handle INVPCID CPUID adjustment in VMX code
  KVM: x86: Handle UMIP emulation CPUID adjustment in VMX code
  KVM: x86: Handle PKU CPUID adjustment in VMX code
  KVM: x86: Handle RDTSCP CPUID adjustment in VMX code
  KVM: x86: Handle Intel PT CPUID adjustment in VMX code
  KVM: x86: Handle GBPAGE CPUID adjustment for EPT in VMX code
  KVM: x86: Refactor handling of XSAVES CPUID adjustment
  KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking
  KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  KVM: VMX: Convert feature updates from CPUID to KVM cpu caps
  KVM: x86: Move XSAVES CPUID adjust to VMX's KVM cpu cap update
  KVM: x86: Add a helper to check kernel support when setting cpu cap
  KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  KVM: x86: Use KVM cpu caps to track UMIP emulation
  KVM: x86: Fold CPUID 0x7 masking back into __do_cpuid_func()
  KVM: x86: Remove the unnecessary loop on CPUID 0x7 sub-leafs
  KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs
  KVM: x86: Do host CPUID at load time to mask KVM cpu caps
  KVM: x86: Override host CPUID results with kvm_cpu_caps
  KVM: x86: Set emulated/transmuted feature bits via kvm_cpu_caps
  KVM: x86: Use kvm_cpu_caps to detect Intel PT support
  KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support
  KVM: VMX: Directly use VMX capabilities helper to detect RDTSCP
    support
  KVM: x86: Check for Intel PT MSR virtualization using KVM cpu caps
  KVM: VMX: Directly query Intel PT mode when refreshing PMUs
  KVM: SVM: Refactor logging of NPT enabled/disabled
  KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function
  KVM: x86/mmu: Configure max page level during hardware setup
  KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage
  KVM: Drop largepages_enabled and its accessor/mutator
  KVM: x86: Move VMX's host_efer to common x86 code

 arch/x86/include/asm/kvm_host.h |  15 +-
 arch/x86/kvm/Kconfig            |  10 +
 arch/x86/kvm/cpuid.c            | 771 +++++++++++++++-----------------
 arch/x86/kvm/cpuid.h            | 123 ++++-
 arch/x86/kvm/mmu/mmu.c          |  22 +-
 arch/x86/kvm/svm.c              | 117 ++---
 arch/x86/kvm/vmx/capabilities.h |  25 +-
 arch/x86/kvm/vmx/nested.c       |   2 +-
 arch/x86/kvm/vmx/pmu_intel.c    |   2 +-
 arch/x86/kvm/vmx/vmx.c          | 125 +++---
 arch/x86/kvm/vmx/vmx.h          |   5 +-
 arch/x86/kvm/x86.c              |  48 +-
 arch/x86/kvm/x86.h              |  10 +-
 include/linux/kvm_host.h        |   2 -
 virt/kvm/kvm_main.c             |  13 -
 15 files changed, 662 insertions(+), 628 deletions(-)

-- 
2.24.1


^ permalink raw reply	[flat|nested] 168+ messages in thread

* [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-03 12:55   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper Sean Christopherson
                   ` (60 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Fix a long-standing bug that causes KVM to return 0 instead of -E2BIG
when userspace's array is insufficiently sized.

Note, while the Fixes: tag is accurate with respect to the immediate
bug, it's likely that similar bugs in KVM_GET_SUPPORTED_CPUID existed
prior to the refactoring, e.g. Qemu contains a workaround for the broken
KVM_GET_SUPPORTED_CPUID behavior that predates the buggy commit by over
two years.  The Qemu workaround is also likely the main reason the bug
has gone unreported for so long.

Qemu hack:
  commit 76ae317f7c16aec6b469604b1764094870a75470
  Author: Mark McLoughlin <markmc@redhat.com>
  Date:   Tue May 19 18:55:21 2009 +0100

    kvm: work around supported cpuid ioctl() brokenness

    KVM_GET_SUPPORTED_CPUID has been known to fail to return -E2BIG
    when it runs out of entries. Detect this by always trying again
    with a bigger table if the ioctl() fills the table.
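
For context, the userspace side of that workaround boils down to a
grow-and-retry loop along these lines (an illustrative sketch only, not
Qemu's actual code; kvm_fd is assumed to be an open /dev/kvm descriptor):

  #include <err.h>
  #include <errno.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static struct kvm_cpuid2 *get_supported_cpuid(int kvm_fd)
  {
          struct kvm_cpuid2 *cpuid;
          int nent = 64, r;

          for (;;) {
                  cpuid = calloc(1, sizeof(*cpuid) +
                                    nent * sizeof(struct kvm_cpuid_entry2));
                  if (!cpuid)
                          err(1, "calloc");
                  cpuid->nent = nent;

                  r = ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid);
                  if (!r && cpuid->nent < nent)
                          return cpuid;   /* table not full, trust it */
                  if (r && errno != E2BIG)
                          err(1, "KVM_GET_SUPPORTED_CPUID");

                  /* -E2BIG, or a full table from a broken kernel: retry. */
                  free(cpuid);
                  nent *= 2;
          }
  }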

Fixes: 831bf664e9c1f ("KVM: Refactor and simplify kvm_dev_ioctl_get_supported_cpuid")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b1c469446b07..47ce04762c20 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -908,9 +908,14 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 			goto out_free;
 
 		limit = cpuid_entries[nent - 1].eax;
-		for (func = ent->func + 1; func <= limit && nent < cpuid->nent && r == 0; ++func)
+		for (func = ent->func + 1; func <= limit && r == 0; ++func) {
+			if (nent >= cpuid->nent) {
+				r = -E2BIG;
+				goto out_free;
+			}
 			r = do_cpuid_func(&cpuid_entries[nent], func,
 				          &nent, cpuid->nent, type);
+		}
 
 		if (r)
 			goto out_free;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
  2020-02-01 18:51 ` [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-06 14:59   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 03/61] KVM: x86: Simplify handling of Centaur CPUID leafs Sean Christopherson
                   ` (59 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the guts of kvm_dev_ioctl_get_cpuid()'s CPUID func loop to a
separate helper to improve code readability and pave the way for future
cleanup.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 45 ++++++++++++++++++++++++++------------------
 1 file changed, 27 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 47ce04762c20..f49fdd06f511 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -839,6 +839,29 @@ static bool is_centaur_cpu(const struct kvm_cpuid_param *param)
 	return boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
 }
 
+static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
+			  int *nent, int maxnent, unsigned int type)
+{
+	u32 limit;
+	int r;
+
+	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
+	if (r)
+		return r;
+
+	limit = entries[*nent - 1].eax;
+	for (func = func + 1; func <= limit; ++func) {
+		if (*nent >= maxnent)
+			return -E2BIG;
+
+		r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
+		if (r)
+			break;
+	}
+
+	return r;
+}
+
 static bool sanity_check_entries(struct kvm_cpuid_entry2 __user *entries,
 				 __u32 num_entries, unsigned int ioctl_type)
 {
@@ -871,8 +894,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 			    unsigned int type)
 {
 	struct kvm_cpuid_entry2 *cpuid_entries;
-	int limit, nent = 0, r = -E2BIG, i;
-	u32 func;
+	int nent = 0, r = -E2BIG, i;
+
 	static const struct kvm_cpuid_param param[] = {
 		{ .func = 0 },
 		{ .func = 0x80000000 },
@@ -901,22 +924,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 		if (ent->qualifier && !ent->qualifier(ent))
 			continue;
 
-		r = do_cpuid_func(&cpuid_entries[nent], ent->func,
-				  &nent, cpuid->nent, type);
-
-		if (r)
-			goto out_free;
-
-		limit = cpuid_entries[nent - 1].eax;
-		for (func = ent->func + 1; func <= limit && r == 0; ++func) {
-			if (nent >= cpuid->nent) {
-				r = -E2BIG;
-				goto out_free;
-			}
-			r = do_cpuid_func(&cpuid_entries[nent], func,
-				          &nent, cpuid->nent, type);
-		}
-
+		r = get_cpuid_func(cpuid_entries, ent->func, &nent,
+				   cpuid->nent, type);
 		if (r)
 			goto out_free;
 	}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 03/61] KVM: x86: Simplify handling of Centaur CPUID leafs
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
  2020-02-01 18:51 ` [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries Sean Christopherson
  2020-02-01 18:51 ` [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-06 15:05   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 04/61] KVM: x86: Clean up error handling in kvm_dev_ioctl_get_cpuid() Sean Christopherson
                   ` (58 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Refactor the handling of the Centaur-only CPUID leaf to detect the leaf
via a runtime query instead of adding a one-off callback in the static
array.  When the callback was introduced, there were additional fields
in the array's structs, and more importantly, retpoline wasn't a thing.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 32 ++++++++++----------------------
 1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f49fdd06f511..de52cbb46171 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -829,15 +829,7 @@ static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
 	return __do_cpuid_func(entry, func, nent, maxnent);
 }
 
-struct kvm_cpuid_param {
-	u32 func;
-	bool (*qualifier)(const struct kvm_cpuid_param *param);
-};
-
-static bool is_centaur_cpu(const struct kvm_cpuid_param *param)
-{
-	return boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
-}
+#define CENTAUR_CPUID_SIGNATURE 0xC0000000
 
 static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
 			  int *nent, int maxnent, unsigned int type)
@@ -845,6 +837,10 @@ static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
 	u32 limit;
 	int r;
 
+	if (func == CENTAUR_CPUID_SIGNATURE &&
+	    boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
+		return 0;
+
 	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
 	if (r)
 		return r;
@@ -896,11 +892,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 	struct kvm_cpuid_entry2 *cpuid_entries;
 	int nent = 0, r = -E2BIG, i;
 
-	static const struct kvm_cpuid_param param[] = {
-		{ .func = 0 },
-		{ .func = 0x80000000 },
-		{ .func = 0xC0000000, .qualifier = is_centaur_cpu },
-		{ .func = KVM_CPUID_SIGNATURE },
+	static const u32 funcs[] = {
+		0, 0x80000000, CENTAUR_CPUID_SIGNATURE, KVM_CPUID_SIGNATURE,
 	};
 
 	if (cpuid->nent < 1)
@@ -918,14 +911,9 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 		goto out;
 
 	r = 0;
-	for (i = 0; i < ARRAY_SIZE(param); i++) {
-		const struct kvm_cpuid_param *ent = &param[i];
-
-		if (ent->qualifier && !ent->qualifier(ent))
-			continue;
-
-		r = get_cpuid_func(cpuid_entries, ent->func, &nent,
-				   cpuid->nent, type);
+	for (i = 0; i < ARRAY_SIZE(funcs); i++) {
+		r = get_cpuid_func(cpuid_entries, funcs[i], &nent, cpuid->nent,
+				   type);
 		if (r)
 			goto out_free;
 	}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 04/61] KVM: x86: Clean up error handling in kvm_dev_ioctl_get_cpuid()
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (2 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 03/61] KVM: x86: Simplify handling of Centaur CPUID leafs Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-06 15:09   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 05/61] KVM: x86: Check userspace CPUID array size after validating sub-leaf Sean Christopherson
                   ` (57 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Clean up the error handling in kvm_dev_ioctl_get_cpuid(), which has
gotten a bit crusty as the function has evolved over the years.

Opportunistically hoist the static @funcs declaration to the top of the
function to make it more obvious that it's a "static const".

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index de52cbb46171..11d5f311ef10 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -889,45 +889,40 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 			    struct kvm_cpuid_entry2 __user *entries,
 			    unsigned int type)
 {
-	struct kvm_cpuid_entry2 *cpuid_entries;
-	int nent = 0, r = -E2BIG, i;
-
 	static const u32 funcs[] = {
 		0, 0x80000000, CENTAUR_CPUID_SIGNATURE, KVM_CPUID_SIGNATURE,
 	};
 
+	struct kvm_cpuid_entry2 *cpuid_entries;
+	int nent = 0, r, i;
+
 	if (cpuid->nent < 1)
-		goto out;
+		return -E2BIG;
 	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
 		cpuid->nent = KVM_MAX_CPUID_ENTRIES;
 
 	if (sanity_check_entries(entries, cpuid->nent, type))
 		return -EINVAL;
 
-	r = -ENOMEM;
 	cpuid_entries = vzalloc(array_size(sizeof(struct kvm_cpuid_entry2),
 					   cpuid->nent));
 	if (!cpuid_entries)
-		goto out;
+		return -ENOMEM;
 
-	r = 0;
 	for (i = 0; i < ARRAY_SIZE(funcs); i++) {
 		r = get_cpuid_func(cpuid_entries, funcs[i], &nent, cpuid->nent,
 				   type);
 		if (r)
 			goto out_free;
 	}
+	cpuid->nent = nent;
 
-	r = -EFAULT;
 	if (copy_to_user(entries, cpuid_entries,
 			 nent * sizeof(struct kvm_cpuid_entry2)))
-		goto out_free;
-	cpuid->nent = nent;
-	r = 0;
+		r = -EFAULT;
 
 out_free:
 	vfree(cpuid_entries);
-out:
 	return r;
 }
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 05/61] KVM: x86: Check userspace CPUID array size after validating sub-leaf
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (3 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 04/61] KVM: x86: Clean up error handling in kvm_dev_ioctl_get_cpuid() Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-06 15:24   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 06/61] KVM: x86: Move CPUID 0xD.1 handling out of the index>0 loop Sean Christopherson
                   ` (56 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Verify that the next sub-leaf of CPUID 0x4 (or 0x8000001d) is valid
before rejecting the entire KVM_GET_SUPPORTED_CPUID due to insufficient
space in the userspace array.

Note, although this is technically a bug, it's not visible to userspace
as KVM_GET_SUPPORTED_CPUID is guaranteed to fail on KVM_CPUID_SIGNATURE,
which is hardcoded to be added after the affected leafs.  The real
motivation for the change is to tightly couple the nent/maxnent and
do_host_cpuid() sequences in preparation for future cleanup.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 11d5f311ef10..e5cf1e0cf84a 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -552,12 +552,12 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 
 		/* read more entries until cache_type is zero */
 		for (i = 1; ; ++i) {
-			if (*nent >= maxnent)
-				goto out;
-
 			cache_type = entry[i - 1].eax & 0x1f;
 			if (!cache_type)
 				break;
+
+			if (*nent >= maxnent)
+				goto out;
 			do_host_cpuid(&entry[i], function, i);
 			++*nent;
 		}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 06/61] KVM: x86: Move CPUID 0xD.1 handling out of the index>0 loop
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (4 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 05/61] KVM: x86: Check userspace CPUID array size after validating sub-leaf Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-07 15:38   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 07/61] KVM: x86: Check for CPUID 0xD.N support before validating array size Sean Christopherson
                   ` (55 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the sub-leaf 1 handling for CPUID 0xD out of the index>0 loop so
that the loop only handles sub-leafs 2 and up.  Sub-leafs 2+ have
identical semantics, whereas sub-leaf 1 is effectively a feature
sub-leaf.

Moving sub-leaf 1 out of the loop does duplicate a bit of code, but
the nent/maxnent code will be consolidated in a future patch, and
duplicating the clear of ECX/EDX is arguably a good thing as the reasons
for clearing said registers are completely different.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e5cf1e0cf84a..fc8540596386 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -653,26 +653,33 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		if (!supported)
 			break;
 
-		for (idx = 1, i = 1; idx < 64; ++idx) {
+		if (*nent >= maxnent)
+			goto out;
+
+		do_host_cpuid(&entry[1], function, 1);
+		++*nent;
+
+		entry[1].eax &= kvm_cpuid_D_1_eax_x86_features;
+		cpuid_mask(&entry[1].eax, CPUID_D_1_EAX);
+		if (entry[1].eax & (F(XSAVES)|F(XSAVEC)))
+			entry[1].ebx = xstate_required_size(supported, true);
+		else
+			entry[1].ebx = 0;
+		/* Saving XSS controlled state via XSAVES isn't supported. */
+		entry[1].ecx = 0;
+		entry[1].edx = 0;
+
+		for (idx = 2, i = 2; idx < 64; ++idx) {
 			u64 mask = ((u64)1 << idx);
+
 			if (*nent >= maxnent)
 				goto out;
 
 			do_host_cpuid(&entry[i], function, idx);
-			if (idx == 1) {
-				entry[i].eax &= kvm_cpuid_D_1_eax_x86_features;
-				cpuid_mask(&entry[i].eax, CPUID_D_1_EAX);
-				entry[i].ebx = 0;
-				if (entry[i].eax & (F(XSAVES)|F(XSAVEC)))
-					entry[i].ebx =
-						xstate_required_size(supported,
-								     true);
-			} else {
-				if (entry[i].eax == 0 || !(supported & mask))
-					continue;
-				if (WARN_ON_ONCE(entry[i].ecx & 1))
-					continue;
-			}
+			if (entry[i].eax == 0 || !(supported & mask))
+				continue;
+			if (WARN_ON_ONCE(entry[i].ecx & 1))
+				continue;
 			entry[i].ecx = 0;
 			entry[i].edx = 0;
 			++*nent;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 07/61] KVM: x86: Check for CPUID 0xD.N support before validating array size
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (5 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 06/61] KVM: x86: Move CPUID 0xD.1 handling out of the index>0 loop Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-07 15:48   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 08/61] KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf Sean Christopherson
                   ` (54 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Now that sub-leaf 1 is handled separately, verify the next sub-leaf is
needed before rejecting KVM_GET_SUPPORTED_CPUID due to an insufficiently
sized userspace array.

Note, although this is technically a bug, it's not visible to userspace
as KVM_GET_SUPPORTED_CPUID is guaranteed to fail on KVM_CPUID_SIGNATURE,
which is hardcoded to be added after leaf 0xD.  The real motivation for
the change is to tightly couple the nent/maxnent and do_host_cpuid()
sequences in preparation for future cleanup.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index fc8540596386..fd9b29aa7abc 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -670,13 +670,14 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		entry[1].edx = 0;
 
 		for (idx = 2, i = 2; idx < 64; ++idx) {
-			u64 mask = ((u64)1 << idx);
+			if (!(supported & BIT_ULL(idx)))
+				continue;
 
 			if (*nent >= maxnent)
 				goto out;
 
 			do_host_cpuid(&entry[i], function, idx);
-			if (entry[i].eax == 0 || !(supported & mask))
+			if (entry[i].eax == 0)
 				continue;
 			if (WARN_ON_ONCE(entry[i].ecx & 1))
 				continue;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 08/61] KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (6 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 07/61] KVM: x86: Check for CPUID 0xD.N support before validating array size Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-07 15:54   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 09/61] KVM: x86: Refactor CPUID 0xD.N sub-leaf entry creation Sean Christopherson
                   ` (53 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

WARN if the save state size for a valid XCR0-managed sub-leaf is zero,
which would indicate a KVM or CPU bug.  Add a comment to explain why KVM
WARNs so the reader doesn't have to tease out the relevant bits from
Intel's SDM and KVM's XCR0/XSS code.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index fd9b29aa7abc..424dde41cb5d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -677,10 +677,17 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 				goto out;
 
 			do_host_cpuid(&entry[i], function, idx);
-			if (entry[i].eax == 0)
-				continue;
-			if (WARN_ON_ONCE(entry[i].ecx & 1))
+
+			/*
+			 * The @supported check above should have filtered out
+			 * invalid sub-leafs as well as sub-leafs managed by
+			 * IA32_XSS MSR.  Only XCR0-managed sub-leafs should
+			 * reach this point, and they should have a non-zero
+			 * save state size.
+			 */
+			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1)))
 				continue;
+
 			entry[i].ecx = 0;
 			entry[i].edx = 0;
 			++*nent;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 09/61] KVM: x86: Refactor CPUID 0xD.N sub-leaf entry creation
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (7 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 08/61] KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-07 15:56   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 10/61] KVM: x86: Clean up CPUID 0x7 sub-leaf loop Sean Christopherson
                   ` (52 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Increment the number of CPUID entries immediately after do_host_cpuid()
in preparation for moving the logic into do_host_cpuid().  Handle the
rare/impossible case of encountering a bogus sub-leaf by decrementing
the number of entries on failure.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 424dde41cb5d..6e1685a16cca 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -677,6 +677,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 				goto out;
 
 			do_host_cpuid(&entry[i], function, idx);
+			++*nent;
 
 			/*
 			 * The @supported check above should have filtered out
@@ -685,12 +686,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			 * reach this point, and they should have a non-zero
 			 * save state size.
 			 */
-			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1)))
+			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1))) {
+				--*nent;
 				continue;
+			}
 
 			entry[i].ecx = 0;
 			entry[i].edx = 0;
-			++*nent;
 			++i;
 		}
 		break;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 10/61] KVM: x86: Clean up CPUID 0x7 sub-leaf loop
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (8 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 09/61] KVM: x86: Refactor CPUID 0xD.N sub-leaf entry creation Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 14:20   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 11/61] KVM: x86: Drop the explicit @index from do_cpuid_7_mask() Sean Christopherson
                   ` (51 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Refactor the sub-leaf loop for CPUID 0x7 to move the main leaf out of
said loop.  The emitted code savings is basically a mirage, as the
handling of the main leaf can easily be split to its own helper to avoid
code bloat.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6e1685a16cca..b626893a11d5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -573,16 +573,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	case 7: {
 		int i;
 
-		for (i = 0; ; ) {
+		do_cpuid_7_mask(entry, 0);
+
+		for (i = 1; i <= entry->eax; i++) {
+			if (*nent >= maxnent)
+				goto out;
+
+			do_host_cpuid(&entry[i], function, i);
+			++*nent;
+
 			do_cpuid_7_mask(&entry[i], i);
-			if (i == entry->eax)
-				break;
-			if (*nent >= maxnent)
-				goto out;
-
-			++i;
-			do_host_cpuid(&entry[i], function, i);
-			++*nent;
 		}
 		break;
 	}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 11/61] KVM: x86: Drop the explicit @index from do_cpuid_7_mask()
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (9 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 10/61] KVM: x86: Clean up CPUID 0x7 sub-leaf loop Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 14:22   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 12/61] KVM: x86: Drop redundant boot cpu checks on SSBD feature bits Sean Christopherson
                   ` (50 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Drop the index param from do_cpuid_7_mask() and instead switch on the
entry's index, which is guaranteed to be set by do_host_cpuid().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b626893a11d5..fd04f17d1836 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -346,7 +346,7 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
 	return 0;
 }
 
-static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
+static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 {
 	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
 	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
@@ -380,7 +380,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 	const u32 kvm_cpuid_7_1_eax_x86_features =
 		F(AVX512_BF16);
 
-	switch (index) {
+	switch (entry->index) {
 	case 0:
 		entry->eax = min(entry->eax, 1u);
 		entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
@@ -573,7 +573,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	case 7: {
 		int i;
 
-		do_cpuid_7_mask(entry, 0);
+		do_cpuid_7_mask(entry);
 
 		for (i = 1; i <= entry->eax; i++) {
 			if (*nent >= maxnent)
@@ -582,7 +582,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			do_host_cpuid(&entry[i], function, i);
 			++*nent;
 
-			do_cpuid_7_mask(&entry[i], i);
+			do_cpuid_7_mask(&entry[i]);
 		}
 		break;
 	}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 12/61] KVM: x86: Drop redundant boot cpu checks on SSBD feature bits
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (10 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 11/61] KVM: x86: Drop the explicit @index from do_cpuid_7_mask() Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-01 18:51 ` [PATCH 13/61] KVM: x86: Consolidate CPUID array max num entries checking Sean Christopherson
                   ` (49 subsequent siblings)
  61 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Drop redundant checks when "emulating" the SSBD feature across vendors,
i.e. advertising the AMD variant when running on an Intel CPU and vice
versa.  Both SPEC_CTRL_SSBD and AMD_SSBD are already defined in the
leaf-specific feature masks and are *not* forcefully set by the kernel,
i.e. will already be set in the entry when supported by the host.

Functionally, this changes nothing, but the redundant check is
confusing, especially when considering future patches that will further
differentiate between "real" and "emulated" feature bits.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index fd04f17d1836..52f0af4e10d5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -405,8 +405,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 			entry->edx |= F(SPEC_CTRL);
 		if (boot_cpu_has(X86_FEATURE_STIBP))
 			entry->edx |= F(INTEL_STIBP);
-		if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
-		    boot_cpu_has(X86_FEATURE_AMD_SSBD))
+		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			entry->edx |= F(SPEC_CTRL_SSBD);
 		/*
 		 * We emulate ARCH_CAPABILITIES in software even
@@ -780,8 +779,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			entry->ebx |= F(AMD_IBRS);
 		if (boot_cpu_has(X86_FEATURE_STIBP))
 			entry->ebx |= F(AMD_STIBP);
-		if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
-		    boot_cpu_has(X86_FEATURE_AMD_SSBD))
+		if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
 			entry->ebx |= F(AMD_SSBD);
 		if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
 			entry->ebx |= F(AMD_SSB_NO);
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 13/61] KVM: x86: Consolidate CPUID array max num entries checking
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (11 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 12/61] KVM: x86: Drop redundant boot cpu checks on SSBD feature bits Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-01 18:51 ` [PATCH 14/61] KVM: x86: Hoist loop counter and terminator to top of __do_cpuid_func() Sean Christopherson
                   ` (48 subsequent siblings)
  61 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the nent vs. maxnent check and nent increment into do_host_cpuid()
to consolidate what is now identical code.  To signal success vs.
failure, return the entry and NULL respectively.  A future patch will
build on this to also move the entry retrieval into do_host_cpuid().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 49 +++++++++++++++-----------------------------
 1 file changed, 17 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 52f0af4e10d5..1ae3b2502333 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -287,9 +287,14 @@ static __always_inline void cpuid_mask(u32 *word, int wordnum)
 	*word &= boot_cpu_data.x86_capability[wordnum];
 }
 
-static void do_host_cpuid(struct kvm_cpuid_entry2 *entry, u32 function,
-			   u32 index)
+static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_entry2 *entry,
+					      int *nent, int maxnent,
+					      u32 function, u32 index)
 {
+	if (*nent >= maxnent)
+		return NULL;
+	++*nent;
+
 	entry->function = function;
 	entry->index = index;
 	entry->flags = 0;
@@ -316,6 +321,8 @@ static void do_host_cpuid(struct kvm_cpuid_entry2 *entry, u32 function,
 		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
 		break;
 	}
+
+	return entry;
 }
 
 static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
@@ -507,12 +514,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	r = -E2BIG;
 
-	if (WARN_ON(*nent >= maxnent))
+	if (WARN_ON(!do_host_cpuid(entry, nent, maxnent, function, 0)))
 		goto out;
 
-	do_host_cpuid(entry, function, 0);
-	++*nent;
-
 	switch (function) {
 	case 0:
 		/* Limited to the highest leaf implemented in KVM. */
@@ -536,11 +540,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 
 		entry->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
 		for (t = 1; t < times; ++t) {
-			if (*nent >= maxnent)
+			if (!do_host_cpuid(&entry[t], nent, maxnent, function, 0))
 				goto out;
-
-			do_host_cpuid(&entry[t], function, 0);
-			++*nent;
 		}
 		break;
 	}
@@ -555,10 +556,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			if (!cache_type)
 				break;
 
-			if (*nent >= maxnent)
+			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
 				goto out;
-			do_host_cpuid(&entry[i], function, i);
-			++*nent;
 		}
 		break;
 	}
@@ -575,12 +574,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		do_cpuid_7_mask(entry);
 
 		for (i = 1; i <= entry->eax; i++) {
-			if (*nent >= maxnent)
+			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
 				goto out;
 
-			do_host_cpuid(&entry[i], function, i);
-			++*nent;
-
 			do_cpuid_7_mask(&entry[i]);
 		}
 		break;
@@ -633,11 +629,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		 * added entry is zero.
 		 */
 		for (i = 1; entry[i - 1].ecx & 0xff00; ++i) {
-			if (*nent >= maxnent)
+			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
 				goto out;
-
-			do_host_cpuid(&entry[i], function, i);
-			++*nent;
 		}
 		break;
 	}
@@ -652,12 +645,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		if (!supported)
 			break;
 
-		if (*nent >= maxnent)
+		if (!do_host_cpuid(&entry[1], nent, maxnent, function, 1))
 			goto out;
 
-		do_host_cpuid(&entry[1], function, 1);
-		++*nent;
-
 		entry[1].eax &= kvm_cpuid_D_1_eax_x86_features;
 		cpuid_mask(&entry[1].eax, CPUID_D_1_EAX);
 		if (entry[1].eax & (F(XSAVES)|F(XSAVEC)))
@@ -672,12 +662,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			if (!(supported & BIT_ULL(idx)))
 				continue;
 
-			if (*nent >= maxnent)
+			if (!do_host_cpuid(&entry[i], nent, maxnent, function, idx))
 				goto out;
 
-			do_host_cpuid(&entry[i], function, idx);
-			++*nent;
-
 			/*
 			 * The @supported check above should have filtered out
 			 * invalid sub-leafs as well as sub-leafs managed by
@@ -704,10 +691,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			break;
 
 		for (t = 1; t <= times; ++t) {
-			if (*nent >= maxnent)
+			if (!do_host_cpuid(&entry[t], nent, maxnent, function, t))
 				goto out;
-			do_host_cpuid(&entry[t], function, t);
-			++*nent;
 		}
 		break;
 	}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 14/61] KVM: x86: Hoist loop counter and terminator to top of __do_cpuid_func()
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (12 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 13/61] KVM: x86: Consolidate CPUID array max num entries checking Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-01 18:51 ` [PATCH 15/61] KVM: x86: Refactor CPUID 0x4 and 0x8000001d handling Sean Christopherson
                   ` (47 subsequent siblings)
  61 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Declare "i" and "max_idx" at the top of __do_cpuid_func() to consolidate
a handful of declarations in various case statements.

More importantly, establish the pattern of using max_idx instead of e.g.
entry->eax as the loop terminator in preparation for refactoring how
entry is handled in __do_cpuid_func().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 37 +++++++++++++------------------------
 1 file changed, 13 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 1ae3b2502333..5044a595799f 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -439,7 +439,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 				  int *nent, int maxnent)
 {
-	int r;
+	int r, i, max_idx;
 	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
 #ifdef CONFIG_X86_64
 	unsigned f_gbpages = (kvm_x86_ops->get_lpage_level() == PT_PDPE_LEVEL)
@@ -535,20 +535,18 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	 * may return different values. This forces us to get_cpu() before
 	 * issuing the first command, and also to emulate this annoying behavior
 	 * in kvm_emulate_cpuid() using KVM_CPUID_FLAG_STATE_READ_NEXT */
-	case 2: {
-		int t, times = entry->eax & 0xff;
-
+	case 2:
 		entry->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
-		for (t = 1; t < times; ++t) {
-			if (!do_host_cpuid(&entry[t], nent, maxnent, function, 0))
+
+		for (i = 1, max_idx = entry->eax & 0xff; i < max_idx; ++i) {
+			if (!do_host_cpuid(&entry[i], nent, maxnent, function, 0))
 				goto out;
 		}
 		break;
-	}
 	/* functions 4 and 0x8000001d have additional index. */
 	case 4:
 	case 0x8000001d: {
-		int i, cache_type;
+		int cache_type;
 
 		/* read more entries until cache_type is zero */
 		for (i = 1; ; ++i) {
@@ -568,19 +566,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		entry->edx = 0;
 		break;
 	/* function 7 has additional index. */
-	case 7: {
-		int i;
-
+	case 7:
 		do_cpuid_7_mask(entry);
 
-		for (i = 1; i <= entry->eax; i++) {
+		for (i = 1, max_idx = entry->eax; i <= max_idx; i++) {
 			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
 				goto out;
 
 			do_cpuid_7_mask(&entry[i]);
 		}
 		break;
-	}
 	case 9:
 		break;
 	case 0xa: { /* Architectural Performance Monitoring */
@@ -617,9 +612,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	 * thus they can be handled by common code.
 	 */
 	case 0x1f:
-	case 0xb: {
-		int i;
-
+	case 0xb:
 		/*
 		 * We filled in entry[0] for CPUID(EAX=<function>,
 		 * ECX=00H) above.  If its level type (ECX[15:8]) is
@@ -633,9 +626,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 				goto out;
 		}
 		break;
-	}
 	case 0xd: {
-		int idx, i;
+		int idx;
 		u64 supported = kvm_supported_xcr0();
 
 		entry->eax &= supported;
@@ -684,18 +676,15 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		break;
 	}
 	/* Intel PT */
-	case 0x14: {
-		int t, times = entry->eax;
-
+	case 0x14:
 		if (!f_intel_pt)
 			break;
 
-		for (t = 1; t <= times; ++t) {
-			if (!do_host_cpuid(&entry[t], nent, maxnent, function, t))
+		for (i = 1, max_idx = entry->eax; i <= max_idx; ++i) {
+			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
 				goto out;
 		}
 		break;
-	}
 	case KVM_CPUID_SIGNATURE: {
 		static const char signature[12] = "KVMKVMKVM\0\0";
 		const u32 *sigptr = (const u32 *)signature;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 15/61] KVM: x86: Refactor CPUID 0x4 and 0x8000001d handling
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (13 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 14/61] KVM: x86: Hoist loop counter and terminator to top of __do_cpuid_func() Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 14:40   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct Sean Christopherson
                   ` (46 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Refactor the sub-leaf handling for CPUID 0x4/0x8000001d to eliminate
a one-off variable and its associated brackets.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 5044a595799f..d75d539da759 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -545,20 +545,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		break;
 	/* functions 4 and 0x8000001d have additional index. */
 	case 4:
-	case 0x8000001d: {
-		int cache_type;
-
-		/* read more entries until cache_type is zero */
-		for (i = 1; ; ++i) {
-			cache_type = entry[i - 1].eax & 0x1f;
-			if (!cache_type)
-				break;
-
+	case 0x8000001d:
+		/*
+		 * Read entries until the cache type in the previous entry is
+		 * zero, i.e. indicates an invalid entry.
+		 */
+		for (i = 1; entry[i - 1].eax & 0x1f; ++i) {
 			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
 				goto out;
 		}
 		break;
-	}
 	case 6: /* Thermal management */
 		entry->eax = 0x4; /* allow ARAT */
 		entry->ebx = 0;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (14 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 15/61] KVM: x86: Refactor CPUID 0x4 and 0x8000001d handling Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 14:58   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 17/61] KVM: x86: Drop redundant array size check Sean Christopherson
                   ` (45 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Add a struct to hold the array of CPUID entries and its associated
metadata when handling KVM_GET_SUPPORTED_CPUID.  Lookup and provide
the correct entry in do_host_cpuid(), which eliminates the majority of
array indexing shenanigans, e.g. entries[i - 1], and generally makes the
code more readable.  The last array indexing holdout is kvm_get_cpuid(),
which can't really be avoided without throwing the baby out with the
bathwater.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 138 ++++++++++++++++++++++++-------------------
 1 file changed, 76 insertions(+), 62 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index d75d539da759..f9cfc69199f0 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -287,13 +287,21 @@ static __always_inline void cpuid_mask(u32 *word, int wordnum)
 	*word &= boot_cpu_data.x86_capability[wordnum];
 }
 
-static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_entry2 *entry,
-					      int *nent, int maxnent,
+struct kvm_cpuid_array {
+	struct kvm_cpuid_entry2 *entries;
+	const int maxnent;
+	int nent;
+};
+
+static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
 					      u32 function, u32 index)
 {
-	if (*nent >= maxnent)
+	struct kvm_cpuid_entry2 *entry;
+
+	if (array->nent >= array->maxnent)
 		return NULL;
-	++*nent;
+
+	entry = &array->entries[array->nent++];
 
 	entry->function = function;
 	entry->index = index;
@@ -325,9 +333,10 @@ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_entry2 *entry,
 	return entry;
 }
 
-static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
-				    u32 func, int *nent, int maxnent)
+static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 {
+	struct kvm_cpuid_entry2 *entry = &array->entries[array->nent];
+
 	entry->function = func;
 	entry->index = 0;
 	entry->flags = 0;
@@ -335,17 +344,17 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
 	switch (func) {
 	case 0:
 		entry->eax = 7;
-		++*nent;
+		++array->nent;
 		break;
 	case 1:
 		entry->ecx = F(MOVBE);
-		++*nent;
+		++array->nent;
 		break;
 	case 7:
 		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
 		entry->eax = 0;
 		entry->ecx = F(RDPID);
-		++*nent;
+		++array->nent;
 	default:
 		break;
 	}
@@ -436,9 +445,9 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 	}
 }
 
-static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
-				  int *nent, int maxnent)
+static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 {
+	struct kvm_cpuid_entry2 *entry;
 	int r, i, max_idx;
 	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
 #ifdef CONFIG_X86_64
@@ -514,7 +523,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	r = -E2BIG;
 
-	if (WARN_ON(!do_host_cpuid(entry, nent, maxnent, function, 0)))
+	entry = do_host_cpuid(array, function, 0);
+	if (WARN_ON(!entry))
 		goto out;
 
 	switch (function) {
@@ -539,7 +549,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		entry->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
 
 		for (i = 1, max_idx = entry->eax & 0xff; i < max_idx; ++i) {
-			if (!do_host_cpuid(&entry[i], nent, maxnent, function, 0))
+			entry = do_host_cpuid(array, 2, 0);
+			if (!entry)
 				goto out;
 		}
 		break;
@@ -550,8 +561,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		 * Read entries until the cache type in the previous entry is
 		 * zero, i.e. indicates an invalid entry.
 		 */
-		for (i = 1; entry[i - 1].eax & 0x1f; ++i) {
-			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
+		for (i = 1; entry->eax & 0x1f; ++i) {
+			entry = do_host_cpuid(array, function, i);
+			if (!entry)
 				goto out;
 		}
 		break;
@@ -566,10 +578,11 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		do_cpuid_7_mask(entry);
 
 		for (i = 1, max_idx = entry->eax; i <= max_idx; i++) {
-			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
+			entry = do_host_cpuid(array, function, i);
+			if (!entry)
 				goto out;
 
-			do_cpuid_7_mask(&entry[i]);
+			do_cpuid_7_mask(entry);
 		}
 		break;
 	case 9:
@@ -610,15 +623,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	case 0x1f:
 	case 0xb:
 		/*
-		 * We filled in entry[0] for CPUID(EAX=<function>,
-		 * ECX=00H) above.  If its level type (ECX[15:8]) is
-		 * zero, then the leaf is unimplemented, and we're
-		 * done.  Otherwise, continue to populate entries
-		 * until the level type (ECX[15:8]) of the previously
-		 * added entry is zero.
+		 * Populate entries until the level type (ECX[15:8]) of the
+		 * previous entry is zero.  Note, CPUID EAX.{0x1f,0xb}.0 is
+		 * the starting entry, filled by the primary do_host_cpuid().
 		 */
-		for (i = 1; entry[i - 1].ecx & 0xff00; ++i) {
-			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
+		for (i = 1; entry->ecx & 0xff00; ++i) {
+			entry = do_host_cpuid(array, function, i);
+			if (!entry)
 				goto out;
 		}
 		break;
@@ -633,24 +644,26 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		if (!supported)
 			break;
 
-		if (!do_host_cpuid(&entry[1], nent, maxnent, function, 1))
+		entry = do_host_cpuid(array, function, 1);
+		if (!entry)
 			goto out;
 
-		entry[1].eax &= kvm_cpuid_D_1_eax_x86_features;
-		cpuid_mask(&entry[1].eax, CPUID_D_1_EAX);
-		if (entry[1].eax & (F(XSAVES)|F(XSAVEC)))
-			entry[1].ebx = xstate_required_size(supported, true);
+		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
+		cpuid_mask(&entry->eax, CPUID_D_1_EAX);
+		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
+			entry->ebx = xstate_required_size(supported, true);
 		else
-			entry[1].ebx = 0;
+			entry->ebx = 0;
 		/* Saving XSS controlled state via XSAVES isn't supported. */
-		entry[1].ecx = 0;
-		entry[1].edx = 0;
+		entry->ecx = 0;
+		entry->edx = 0;
 
-		for (idx = 2, i = 2; idx < 64; ++idx) {
+		for (idx = 2; idx < 64; ++idx) {
 			if (!(supported & BIT_ULL(idx)))
 				continue;
 
-			if (!do_host_cpuid(&entry[i], nent, maxnent, function, idx))
+			entry = do_host_cpuid(array, function, idx);
+			if (!entry)
 				goto out;
 
 			/*
@@ -660,14 +673,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			 * reach this point, and they should have a non-zero
 			 * save state size.
 			 */
-			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1))) {
-				--*nent;
+			if (WARN_ON_ONCE(!entry->eax || (entry->ecx & 1))) {
+				--array->nent;
 				continue;
 			}
 
-			entry[i].ecx = 0;
-			entry[i].edx = 0;
-			++i;
+			entry->ecx = 0;
+			entry->edx = 0;
 		}
 		break;
 	}
@@ -677,7 +689,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 			break;
 
 		for (i = 1, max_idx = entry->eax; i <= max_idx; ++i) {
-			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
+			if (!do_host_cpuid(array, function, i))
 				goto out;
 		}
 		break;
@@ -802,22 +814,22 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	return r;
 }
 
-static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
-			 int *nent, int maxnent, unsigned int type)
+static int do_cpuid_func(struct kvm_cpuid_array *array, u32 func,
+			 unsigned int type)
 {
-	if (*nent >= maxnent)
+	if (array->nent >= array->maxnent)
 		return -E2BIG;
 
 	if (type == KVM_GET_EMULATED_CPUID)
-		return __do_cpuid_func_emulated(entry, func, nent, maxnent);
+		return __do_cpuid_func_emulated(array, func);
 
-	return __do_cpuid_func(entry, func, nent, maxnent);
+	return __do_cpuid_func(array, func);
 }
 
 #define CENTAUR_CPUID_SIGNATURE 0xC0000000
 
-static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
-			  int *nent, int maxnent, unsigned int type)
+static int get_cpuid_func(struct kvm_cpuid_array *array, u32 func,
+			  unsigned int type)
 {
 	u32 limit;
 	int r;
@@ -826,16 +838,16 @@ static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
 	    boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
 		return 0;
 
-	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
+	r = do_cpuid_func(array, func, type);
 	if (r)
 		return r;
 
-	limit = entries[*nent - 1].eax;
+	limit = array->entries[array->nent - 1].eax;
 	for (func = func + 1; func <= limit; ++func) {
-		if (*nent >= maxnent)
+		if (array->nent >= array->maxnent)
 			return -E2BIG;
 
-		r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
+		r = do_cpuid_func(array, func, type);
 		if (r)
 			break;
 	}
@@ -878,8 +890,11 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 		0, 0x80000000, CENTAUR_CPUID_SIGNATURE, KVM_CPUID_SIGNATURE,
 	};
 
-	struct kvm_cpuid_entry2 *cpuid_entries;
-	int nent = 0, r, i;
+	struct kvm_cpuid_array array = {
+		.nent = 0,
+		.maxnent = cpuid->nent,
+	};
+	int r, i;
 
 	if (cpuid->nent < 1)
 		return -E2BIG;
@@ -889,25 +904,24 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 	if (sanity_check_entries(entries, cpuid->nent, type))
 		return -EINVAL;
 
-	cpuid_entries = vzalloc(array_size(sizeof(struct kvm_cpuid_entry2),
+	array.entries = vzalloc(array_size(sizeof(struct kvm_cpuid_entry2),
 					   cpuid->nent));
-	if (!cpuid_entries)
+	if (!array.entries)
 		return -ENOMEM;
 
 	for (i = 0; i < ARRAY_SIZE(funcs); i++) {
-		r = get_cpuid_func(cpuid_entries, funcs[i], &nent, cpuid->nent,
-				   type);
+		r = get_cpuid_func(&array, funcs[i], type);
 		if (r)
 			goto out_free;
 	}
-	cpuid->nent = nent;
+	cpuid->nent = array.nent;
 
-	if (copy_to_user(entries, cpuid_entries,
-			 nent * sizeof(struct kvm_cpuid_entry2)))
+	if (copy_to_user(entries, array.entries,
+			 array.nent * sizeof(struct kvm_cpuid_entry2)))
 		r = -EFAULT;
 
 out_free:
-	vfree(cpuid_entries);
+	vfree(array.entries);
 	return r;
 }
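
For readers of this excerpt: the hunks above lean on the kvm_cpuid_array
struct and the reworked do_host_cpuid() introduced earlier in the patch.
Their shape, inferred from the usage above (the authoritative definitions
are in the full diff and are not reproduced here), is roughly:

struct kvm_cpuid_array {
	struct kvm_cpuid_entry2 *entries;	/* output buffer */
	int maxnent;				/* capacity of @entries */
	int nent;				/* entries filled so far */
};

/*
 * Grabs the next free entry, fills it for CPUID.<function>.<index>,
 * bumps array->nent and returns the entry, or returns NULL if the
 * array is already full.
 */
static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
					      u32 function, u32 index);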
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 17/61] KVM: x86: Drop redundant array size check
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (15 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-01 18:51 ` [PATCH 18/61] KVM: x86: Use common loop iterator when handling CPUID 0xD.N Sean Christopherson
                   ` (44 subsequent siblings)
  61 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Drop a "nent >= maxnent" check in kvm_get_cpuid() that's fully redundant
now that kvm_get_cpuid() isn't indexing the array to pass an entry to
do_cpuid_func().
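
For reference, the bound is still enforced once per function at the top of
do_cpuid_func() (as converted to the array struct in the previous patch),
which is why the per-iteration check removed below is dead weight:

static int do_cpuid_func(struct kvm_cpuid_array *array, u32 func,
			 unsigned int type)
{
	if (array->nent >= array->maxnent)
		return -E2BIG;
	...
}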

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f9cfc69199f0..6516fec361c1 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -844,9 +844,6 @@ static int get_cpuid_func(struct kvm_cpuid_array *array, u32 func,
 
 	limit = array->entries[array->nent - 1].eax;
 	for (func = func + 1; func <= limit; ++func) {
-		if (array->nent >= array->maxnent)
-			return -E2BIG;
-
 		r = do_cpuid_func(array, func, type);
 		if (r)
 			break;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 18/61] KVM: x86: Use common loop iterator when handling CPUID 0xD.N
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (16 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 17/61] KVM: x86: Drop redundant array size check Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 15:04   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 19/61] KVM: VMX: Add helpers to query Intel PT mode Sean Christopherson
                   ` (43 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use __do_cpuid_func()'s common loop iterator, "i", when enumerating the
sub-leafs for CPUID 0xD now that the CPUID 0xD loop doesn't need to
manually maintain separate counts for the entries index and CPUID index.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6516fec361c1..bfd8304a8437 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -634,7 +634,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		}
 		break;
 	case 0xd: {
-		int idx;
 		u64 supported = kvm_supported_xcr0();
 
 		entry->eax &= supported;
@@ -658,11 +657,11 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->ecx = 0;
 		entry->edx = 0;
 
-		for (idx = 2; idx < 64; ++idx) {
-			if (!(supported & BIT_ULL(idx)))
+		for (i = 2; i < 64; ++i) {
+			if (!(supported & BIT_ULL(i)))
 				continue;
 
-			entry = do_host_cpuid(array, function, idx);
+			entry = do_host_cpuid(array, function, i);
 			if (!entry)
 				goto out;
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 19/61] KVM: VMX: Add helpers to query Intel PT mode
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (17 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 18/61] KVM: x86: Use common loop iterator when handling CPUID 0xD.N Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
       [not found]   ` <87pne8q8c0.fsf@vitty.brq.redhat.com>
  2020-02-01 18:51 ` [PATCH 20/61] KVM: x86: Calculate the supported xcr0 mask at load time Sean Christopherson
                   ` (42 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Add helpers to query which of the (two) supported PT modes is active.
The primary motivation is to help document that there is a third PT mode
(host-only) that's currently not supported by KVM.  As is, it's not
obvious that PT_MODE_SYSTEM != !PT_MODE_HOST_GUEST and vice versa, e.g.
that "pt_mode == PT_MODE_SYSTEM" and "pt_mode != PT_MODE_HOST_GUEST" are
two distinct checks.
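
To make the distinction concrete: if KVM ever grew support for the host-only
mode, e.g. via a hypothetical PT_MODE_HOST_ONLY value (made up here purely
for illustration, no such value exists in the kernel), the two open-coded
checks would diverge:

	if (pt_mode == PT_MODE_SYSTEM)		/* false for host-only */
		...
	if (pt_mode != PT_MODE_HOST_GUEST)	/* true for host-only */
		...

Routing call sites through vmx_pt_mode_is_system() and
vmx_pt_mode_is_host_guest() documents which of the two questions each site
is actually asking.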

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/capabilities.h | 18 ++++++++++++++++++
 arch/x86/kvm/vmx/nested.c       |  2 +-
 arch/x86/kvm/vmx/vmx.c          | 26 +++++++++++++-------------
 arch/x86/kvm/vmx/vmx.h          |  4 ++--
 4 files changed, 34 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 283bdb7071af..1a6a99382e94 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -353,4 +353,22 @@ static inline bool cpu_has_vmx_intel_pt(void)
 		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_RTIT_CTL);
 }
 
+/*
+ * Processor Trace can operate in one of three modes:
+ *  a. system-wide: trace both host/guest and output to host buffer
+ *  b. host-only:   only trace host and output to host buffer
+ *  c. host-guest:  trace host and guest simultaneously and output to their
+ *                  respective buffer
+ *
+ * KVM currently only supports (a) and (c).
+ */
+static inline bool vmx_pt_mode_is_system(void)
+{
+	return pt_mode == PT_MODE_SYSTEM;
+}
+static inline bool vmx_pt_mode_is_host_guest(void)
+{
+	return pt_mode == PT_MODE_HOST_GUEST;
+}
+
 #endif /* __KVM_X86_VMX_CAPS_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 7608924ee8c1..e3c29cf0ffaf 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4543,7 +4543,7 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
 	vmx->nested.vmcs02_initialized = false;
 	vmx->nested.vmxon = true;
 
-	if (pt_mode == PT_MODE_HOST_GUEST) {
+	if (vmx_pt_mode_is_host_guest()) {
 		vmx->pt_desc.guest.ctl = 0;
 		pt_update_intercept_for_msr(vmx);
 	}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1419c53aed16..588aa5e4164e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1059,7 +1059,7 @@ static unsigned long segment_base(u16 selector)
 
 static inline bool pt_can_write_msr(struct vcpu_vmx *vmx)
 {
-	return (pt_mode == PT_MODE_HOST_GUEST) &&
+	return vmx_pt_mode_is_host_guest() &&
 	       !(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN);
 }
 
@@ -1093,7 +1093,7 @@ static inline void pt_save_msr(struct pt_ctx *ctx, u32 addr_range)
 
 static void pt_guest_enter(struct vcpu_vmx *vmx)
 {
-	if (pt_mode == PT_MODE_SYSTEM)
+	if (vmx_pt_mode_is_system())
 		return;
 
 	/*
@@ -1110,7 +1110,7 @@ static void pt_guest_enter(struct vcpu_vmx *vmx)
 
 static void pt_guest_exit(struct vcpu_vmx *vmx)
 {
-	if (pt_mode == PT_MODE_SYSTEM)
+	if (vmx_pt_mode_is_system())
 		return;
 
 	if (vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) {
@@ -1856,24 +1856,24 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				       &msr_info->data);
 	case MSR_IA32_RTIT_CTL:
-		if (pt_mode != PT_MODE_HOST_GUEST)
+		if (!vmx_pt_mode_is_host_guest())
 			return 1;
 		msr_info->data = vmx->pt_desc.guest.ctl;
 		break;
 	case MSR_IA32_RTIT_STATUS:
-		if (pt_mode != PT_MODE_HOST_GUEST)
+		if (!vmx_pt_mode_is_host_guest())
 			return 1;
 		msr_info->data = vmx->pt_desc.guest.status;
 		break;
 	case MSR_IA32_RTIT_CR3_MATCH:
-		if ((pt_mode != PT_MODE_HOST_GUEST) ||
+		if (!vmx_pt_mode_is_host_guest() ||
 			!intel_pt_validate_cap(vmx->pt_desc.caps,
 						PT_CAP_cr3_filtering))
 			return 1;
 		msr_info->data = vmx->pt_desc.guest.cr3_match;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_BASE:
-		if ((pt_mode != PT_MODE_HOST_GUEST) ||
+		if (!vmx_pt_mode_is_host_guest() ||
 			(!intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_topa_output) &&
 			 !intel_pt_validate_cap(vmx->pt_desc.caps,
@@ -1882,7 +1882,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		msr_info->data = vmx->pt_desc.guest.output_base;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_MASK:
-		if ((pt_mode != PT_MODE_HOST_GUEST) ||
+		if (!vmx_pt_mode_is_host_guest() ||
 			(!intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_topa_output) &&
 			 !intel_pt_validate_cap(vmx->pt_desc.caps,
@@ -1892,7 +1892,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		break;
 	case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
 		index = msr_info->index - MSR_IA32_RTIT_ADDR0_A;
-		if ((pt_mode != PT_MODE_HOST_GUEST) ||
+		if (!vmx_pt_mode_is_host_guest() ||
 			(index >= 2 * intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_num_address_ranges)))
 			return 1;
@@ -2098,7 +2098,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
 	case MSR_IA32_RTIT_CTL:
-		if ((pt_mode != PT_MODE_HOST_GUEST) ||
+		if (!vmx_pt_mode_is_host_guest() ||
 			vmx_rtit_ctl_check(vcpu, data) ||
 			vmx->nested.vmxon)
 			return 1;
@@ -4001,7 +4001,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 
 	u32 exec_control = vmcs_config.cpu_based_2nd_exec_ctrl;
 
-	if (pt_mode == PT_MODE_SYSTEM)
+	if (vmx_pt_mode_is_system())
 		exec_control &= ~(SECONDARY_EXEC_PT_USE_GPA | SECONDARY_EXEC_PT_CONCEAL_VMX);
 	if (!cpu_need_virtualize_apic_accesses(vcpu))
 		exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
@@ -4242,7 +4242,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	if (cpu_has_vmx_encls_vmexit())
 		vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
 
-	if (pt_mode == PT_MODE_HOST_GUEST) {
+	if (vmx_pt_mode_is_host_guest()) {
 		memset(&vmx->pt_desc, 0, sizeof(vmx->pt_desc));
 		/* Bit[6~0] are forced to 1, writes are ignored. */
 		vmx->pt_desc.guest.output_mask = 0x7F;
@@ -6295,7 +6295,7 @@ static bool vmx_has_emulated_msr(int index)
 
 static bool vmx_pt_supported(void)
 {
-	return pt_mode == PT_MODE_HOST_GUEST;
+	return vmx_pt_mode_is_host_guest();
 }
 
 static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a4f7f737c5d4..70eafa88876a 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -449,7 +449,7 @@ static inline void vmx_segment_cache_clear(struct vcpu_vmx *vmx)
 static inline u32 vmx_vmentry_ctrl(void)
 {
 	u32 vmentry_ctrl = vmcs_config.vmentry_ctrl;
-	if (pt_mode == PT_MODE_SYSTEM)
+	if (vmx_pt_mode_is_system())
 		vmentry_ctrl &= ~(VM_ENTRY_PT_CONCEAL_PIP |
 				  VM_ENTRY_LOAD_IA32_RTIT_CTL);
 	/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
@@ -460,7 +460,7 @@ static inline u32 vmx_vmentry_ctrl(void)
 static inline u32 vmx_vmexit_ctrl(void)
 {
 	u32 vmexit_ctrl = vmcs_config.vmexit_ctrl;
-	if (pt_mode == PT_MODE_SYSTEM)
+	if (vmx_pt_mode_is_system())
 		vmexit_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP |
 				 VM_EXIT_CLEAR_IA32_RTIT_CTL);
 	/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 20/61] KVM: x86: Calculate the supported xcr0 mask at load time
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (18 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 19/61] KVM: VMX: Add helpers to query Intel PT mode Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-13 14:21   ` Xiaoyao Li
  2020-02-01 18:51 ` [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support Sean Christopherson
                   ` (41 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Add a new global variable, supported_xcr0, to track which xcr0 bits can
be exposed to the guest instead of calculating the mask on every call.
The supported bits are constant for a given instance of KVM.

This paves the way toward eliminating the ->mpx_supported() call in
kvm_mpx_supported(), e.g. eliminates multiple retpolines in VMX's nested
VM-Enter path, and eventually toward eliminating ->mpx_supported()
altogether.
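
The resulting flow, sketched from the diff below (simplified, not a verbatim
excerpt):

	/* kvm_arch_init(), common x86: start from what the host has enabled. */
	supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;

	/*
	 * Vendor hardware_setup(): strip what the vendor module can't expose,
	 * e.g. VMX clears the MPX bits when !kvm_mpx_supported(), SVM clears
	 * them unconditionally.
	 */
	supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);

	/* Consumers then read the global directly, no kvm_x86_ops hook needed. */
	entry->eax &= supported_xcr0;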

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c   | 32 +++++++++-----------------------
 arch/x86/kvm/svm.c     |  2 ++
 arch/x86/kvm/vmx/vmx.c |  4 ++++
 arch/x86/kvm/x86.c     | 14 +++++++++++---
 arch/x86/kvm/x86.h     |  7 +------
 5 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index bfd8304a8437..b9763eb711cb 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -52,16 +52,6 @@ bool kvm_mpx_supported(void)
 }
 EXPORT_SYMBOL_GPL(kvm_mpx_supported);
 
-u64 kvm_supported_xcr0(void)
-{
-	u64 xcr0 = KVM_SUPPORTED_XCR0 & host_xcr0;
-
-	if (!kvm_mpx_supported())
-		xcr0 &= ~(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
-
-	return xcr0;
-}
-
 #define F feature_bit
 
 int kvm_update_cpuid(struct kvm_vcpu *vcpu)
@@ -107,8 +97,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 		vcpu->arch.guest_xstate_size = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
 	} else {
 		vcpu->arch.guest_supported_xcr0 =
-			(best->eax | ((u64)best->edx << 32)) &
-			kvm_supported_xcr0();
+			(best->eax | ((u64)best->edx << 32)) & supported_xcr0;
 		vcpu->arch.guest_xstate_size = best->ebx =
 			xstate_required_size(vcpu->arch.xcr0, false);
 	}
@@ -633,14 +622,12 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 				goto out;
 		}
 		break;
-	case 0xd: {
-		u64 supported = kvm_supported_xcr0();
-
-		entry->eax &= supported;
-		entry->ebx = xstate_required_size(supported, false);
+	case 0xd:
+		entry->eax &= supported_xcr0;
+		entry->ebx = xstate_required_size(supported_xcr0, false);
 		entry->ecx = entry->ebx;
-		entry->edx &= supported >> 32;
-		if (!supported)
+		entry->edx &= supported_xcr0 >> 32;
+		if (!supported_xcr0)
 			break;
 
 		entry = do_host_cpuid(array, function, 1);
@@ -650,7 +637,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
 		cpuid_mask(&entry->eax, CPUID_D_1_EAX);
 		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
-			entry->ebx = xstate_required_size(supported, true);
+			entry->ebx = xstate_required_size(supported_xcr0, true);
 		else
 			entry->ebx = 0;
 		/* Saving XSS controlled state via XSAVES isn't supported. */
@@ -658,7 +645,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->edx = 0;
 
 		for (i = 2; i < 64; ++i) {
-			if (!(supported & BIT_ULL(i)))
+			if (!(supported_xcr0 & BIT_ULL(i)))
 				continue;
 
 			entry = do_host_cpuid(array, function, i);
@@ -666,7 +653,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 				goto out;
 
 			/*
-			 * The @supported check above should have filtered out
+			 * The supported check above should have filtered out
 			 * invalid sub-leafs as well as sub-leafs managed by
 			 * IA32_XSS MSR.  Only XCR0-managed sub-leafs should
 			 * reach this point, and they should have a non-zero
@@ -681,7 +668,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			entry->edx = 0;
 		}
 		break;
-	}
 	/* Intel PT */
 	case 0x14:
 		if (!f_intel_pt)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index bf0556588ad0..af096c4f9c5f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1368,6 +1368,8 @@ static __init int svm_hardware_setup(void)
 
 	init_msrpm_offsets();
 
+	supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
+
 	if (boot_cpu_has(X86_FEATURE_NX))
 		kvm_enable_efer_bits(EFER_NX);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 588aa5e4164e..32a84ec15064 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7590,6 +7590,10 @@ static __init int hardware_setup(void)
 		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
 	}
 
+	if (!kvm_mpx_supported())
+		supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS |
+				    XFEATURE_MASK_BNDCSR);
+
 	if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
 	    !(cpu_has_vmx_invvpid_single() || cpu_has_vmx_invvpid_global()))
 		enable_vpid = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7e3f1d937224..f90c56c0c64a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -180,6 +180,11 @@ struct kvm_shared_msrs {
 static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
 static struct kvm_shared_msrs __percpu *shared_msrs;
 
+#define KVM_SUPPORTED_XCR0     (XFEATURE_MASK_FP | XFEATURE_MASK_SSE \
+				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
+				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
+				| XFEATURE_MASK_PKRU)
+
 static u64 __read_mostly host_xss;
 
 struct kvm_stats_debugfs_item debugfs_entries[] = {
@@ -226,6 +231,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 };
 
 u64 __read_mostly host_xcr0;
+u64 __read_mostly supported_xcr0;
+EXPORT_SYMBOL_GPL(supported_xcr0);
 
 struct kmem_cache *x86_fpu_cache;
 EXPORT_SYMBOL_GPL(x86_fpu_cache);
@@ -4081,8 +4088,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 		 * CPUID leaf 0xD, index 0, EDX:EAX.  This is for compatibility
 		 * with old userspace.
 		 */
-		if (xstate_bv & ~kvm_supported_xcr0() ||
-			mxcsr & ~mxcsr_feature_mask)
+		if (xstate_bv & ~supported_xcr0 || mxcsr & ~mxcsr_feature_mask)
 			return -EINVAL;
 		load_xsave(vcpu, (u8 *)guest_xsave->region);
 	} else {
@@ -7335,8 +7341,10 @@ int kvm_arch_init(void *opaque)
 
 	perf_register_guest_info_callbacks(&kvm_guest_cbs);
 
-	if (boot_cpu_has(X86_FEATURE_XSAVE))
+	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
 		host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
+		supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;
+	}
 
 	kvm_lapic_init();
 	if (pi_inject_timer == -1)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 3624665acee4..02b49ee49e24 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -280,13 +280,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			    int emulation_type, void *insn, int insn_len);
 enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
 
-#define KVM_SUPPORTED_XCR0     (XFEATURE_MASK_FP | XFEATURE_MASK_SSE \
-				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
-				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
-				| XFEATURE_MASK_PKRU)
 extern u64 host_xcr0;
-
-extern u64 kvm_supported_xcr0(void);
+extern u64 supported_xcr0;
 
 extern unsigned int min_timer_period_us;
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (19 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 20/61] KVM: x86: Calculate the supported xcr0 mask at load time Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-13 14:25   ` Xiaoyao Li
  2020-02-21 15:32   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function Sean Christopherson
                   ` (40 subsequent siblings)
  61 siblings, 2 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Query supported_xcr0 when checking for MPX support instead of invoking
->mpx_supported() and drop ->mpx_supported() as kvm_mpx_supported() was
its last user.  Rename vmx_mpx_supported() to cpu_has_vmx_mpx() to
better align with VMX/VMCS nomenclature.

Modify VMX's adjustment of xcr0 to call cpu_has_vmx_mpx() (renamed from
vmx_mpx_supported()) directly to avoid reading supported_xcr0 before
it's fully configured.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/cpuid.c            | 3 +--
 arch/x86/kvm/svm.c              | 6 ------
 arch/x86/kvm/vmx/capabilities.h | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 3 +--
 5 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 77d206a93658..85f0d96cfeb2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1163,7 +1163,7 @@ struct kvm_x86_ops {
 			       enum x86_intercept_stage stage);
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
 		enum exit_fastpath_completion *exit_fastpath);
-	bool (*mpx_supported)(void);
+
 	bool (*xsaves_supported)(void);
 	bool (*umip_emulated)(void);
 	bool (*pt_supported)(void);
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b9763eb711cb..84006cc4007c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -47,8 +47,7 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 
 bool kvm_mpx_supported(void)
 {
-	return ((host_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR))
-		 && kvm_x86_ops->mpx_supported());
+	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
 }
 EXPORT_SYMBOL_GPL(kvm_mpx_supported);
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index af096c4f9c5f..3c7ddaff405d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6082,11 +6082,6 @@ static bool svm_invpcid_supported(void)
 	return false;
 }
 
-static bool svm_mpx_supported(void)
-{
-	return false;
-}
-
 static bool svm_xsaves_supported(void)
 {
 	return boot_cpu_has(X86_FEATURE_XSAVES);
@@ -7468,7 +7463,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.rdtscp_supported = svm_rdtscp_supported,
 	.invpcid_supported = svm_invpcid_supported,
-	.mpx_supported = svm_mpx_supported,
 	.xsaves_supported = svm_xsaves_supported,
 	.umip_emulated = svm_umip_emulated,
 	.pt_supported = svm_pt_supported,
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 1a6a99382e94..0a0b1494a934 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -100,7 +100,7 @@ static inline bool cpu_has_load_perf_global_ctrl(void)
 	       (vmcs_config.vmexit_ctrl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL);
 }
 
-static inline bool vmx_mpx_supported(void)
+static inline bool cpu_has_vmx_mpx(void)
 {
 	return (vmcs_config.vmexit_ctrl & VM_EXIT_CLEAR_BNDCFGS) &&
 		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 32a84ec15064..98fd651f7f7e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7590,7 +7590,7 @@ static __init int hardware_setup(void)
 		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
 	}
 
-	if (!kvm_mpx_supported())
+	if (!cpu_has_vmx_mpx())
 		supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS |
 				    XFEATURE_MASK_BNDCSR);
 
@@ -7857,7 +7857,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
-	.mpx_supported = vmx_mpx_supported,
 	.xsaves_supported = vmx_xsaves_supported,
 	.umip_emulated = vmx_umip_emulated,
 	.pt_supported = vmx_pt_supported,
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (20 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-13 14:26   ` Xiaoyao Li
  2020-02-21 15:33   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 23/61] KVM: x86: Clear output regs for CPUID 0x14 if PT isn't exposed to guest Sean Christopherson
                   ` (39 subsequent siblings)
  61 siblings, 2 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Expose kvm_mpx_supported() as a static inline so that it can be inlined
in kvm_intel.ko.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 6 ------
 arch/x86/kvm/cpuid.h | 1 -
 arch/x86/kvm/x86.h   | 5 +++++
 3 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 84006cc4007c..d3c93b94abc3 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -45,12 +45,6 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 	return ret;
 }
 
-bool kvm_mpx_supported(void)
-{
-	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
-}
-EXPORT_SYMBOL_GPL(kvm_mpx_supported);
-
 #define F feature_bit
 
 int kvm_update_cpuid(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 7366c618aa04..c1ac0995843d 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -7,7 +7,6 @@
 #include <asm/processor.h>
 
 int kvm_update_cpuid(struct kvm_vcpu *vcpu);
-bool kvm_mpx_supported(void);
 struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
 					      u32 function, u32 index);
 int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 02b49ee49e24..bfac4a80956c 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -283,6 +283,11 @@ enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vc
 extern u64 host_xcr0;
 extern u64 supported_xcr0;
 
+static inline bool kvm_mpx_supported(void)
+{
+	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
+}
+
 extern unsigned int min_timer_period_us;
 
 extern bool enable_vmware_backdoor;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 23/61] KVM: x86: Clear output regs for CPUID 0x14 if PT isn't exposed to guest
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (21 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 15:36   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 24/61] KVM: x86: Drop explicit @func param from ->set_supported_cpuid() Sean Christopherson
                   ` (38 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Clear the output regs for the main CPUID 0x14 leaf (index=0) if Intel PT
isn't exposed to the guest.  Leaf 0x14 enumerates Intel PT capabilities
and should return zeroes if PT is not supported.  Incorrectly reporting
PT capabilities is essentially a cosmetic error, i.e. doesn't negatively
affect any known userspace/kernel, as the existence of PT itself is
correctly enumerated via CPUID 0x7.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index d3c93b94abc3..056faf27b14b 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -663,8 +663,10 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		break;
 	/* Intel PT */
 	case 0x14:
-		if (!f_intel_pt)
+		if (!f_intel_pt) {
+			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
 			break;
+		}
 
 		for (i = 1, max_idx = entry->eax; i <= max_idx; ++i) {
 			if (!do_host_cpuid(array, function, i))
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 24/61] KVM: x86: Drop explicit @func param from ->set_supported_cpuid()
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (22 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 23/61] KVM: x86: Clear output regs for CPUID 0x14 if PT isn't exposed to guest Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 15:39   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 25/61] KVM: x86: Use u32 for holding CPUID register value in helpers Sean Christopherson
                   ` (37 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Drop the explicit @func param from ->set_supported_cpuid() and instead
pull the CPUID function from the relevant entry.  This sets the stage
for hardening guest CPUID updates in future patches, e.g. allows adding
run-time assertions that the CPUID feature being changed is actually
a bit in the referenced CPUID entry.
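
As a rough illustration of the hardening this enables (purely hypothetical,
not part of this patch), a later change could assert that the entry being
modified really is the leaf/index a given feature lives in:

	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);

	WARN_ON_ONCE(entry->function != cpuid.function ||
		     entry->index != cpuid.index);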

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/cpuid.c            | 2 +-
 arch/x86/kvm/svm.c              | 4 ++--
 arch/x86/kvm/vmx/vmx.c          | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 85f0d96cfeb2..a61928d5435b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1148,7 +1148,7 @@ struct kvm_x86_ops {
 
 	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
 
-	void (*set_supported_cpuid)(u32 func, struct kvm_cpuid_entry2 *entry);
+	void (*set_supported_cpuid)(struct kvm_cpuid_entry2 *entry);
 
 	bool (*has_wbinvd_exit)(void);
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 056faf27b14b..e3026fe638aa 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -784,7 +784,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		break;
 	}
 
-	kvm_x86_ops->set_supported_cpuid(function, entry);
+	kvm_x86_ops->set_supported_cpuid(entry);
 
 	r = 0;
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 3c7ddaff405d..535eb746fb0f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6032,9 +6032,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 
 #define F feature_bit
 
-static void svm_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
+static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
-	switch (func) {
+	switch (entry->function) {
 	case 0x1:
 		if (avic)
 			entry->ecx &= ~F(X2APIC);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 98fd651f7f7e..3ff830e2258e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7104,9 +7104,9 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
+static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
-	if (func == 1 && nested)
+	if (entry->function == 1 && nested)
 		entry->ecx |= feature_bit(VMX);
 }
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 25/61] KVM: x86: Use u32 for holding CPUID register value in helpers
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (23 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 24/61] KVM: x86: Drop explicit @func param from ->set_supported_cpuid() Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-21 15:43   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors Sean Christopherson
                   ` (36 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Change the intermediate CPUID output register values from "int" to "u32"
to match both hardware and the storage type in struct cpuid_reg.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index c1ac0995843d..72a79bdfed6b 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -95,7 +95,7 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
 	return reverse_cpuid[x86_leaf];
 }
 
-static __always_inline int *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
+static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
 {
 	struct kvm_cpuid_entry2 *entry;
 	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
@@ -121,7 +121,7 @@ static __always_inline int *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsi
 
 static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_feature)
 {
-	int *reg;
+	u32 *reg;
 
 	reg = guest_cpuid_get_register(vcpu, x86_feature);
 	if (!reg)
@@ -132,7 +132,7 @@ static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_
 
 static __always_inline void guest_cpuid_clear(struct kvm_vcpu *vcpu, unsigned x86_feature)
 {
-	int *reg;
+	u32 *reg;
 
 	reg = guest_cpuid_get_register(vcpu, x86_feature);
 	if (reg)
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (24 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 25/61] KVM: x86: Use u32 for holding CPUID register value in helpers Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-14  9:44   ` Xiaoyao Li
  2020-02-21 15:57   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 27/61] KVM: x86: Introduce cpuid_entry_{change,set,clear}() mutators Sean Christopherson
                   ` (35 subsequent siblings)
  61 siblings, 2 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Introduce accessors to retrieve feature bits from CPUID entries and use
the new accessors where applicable.  Using the accessors eliminates the
need to manually specify the register to be queried at no extra cost
(binary output is identical) and will allow adding runtime consistency
checks on the function and index in a future patch.
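
The shape of the conversion, taken from one of the call sites in the diff
below:

	/* before: register and bit mask are spelled out by hand */
	if (best->ecx & F(TSC_DEADLINE_TIMER))

	/*
	 * after: the register (ECX here) is derived from the feature bit via
	 * the reverse CPUID table
	 */
	if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))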

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c |  9 +++++----
 arch/x86/kvm/cpuid.h | 46 +++++++++++++++++++++++++++++++++++---------
 2 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e3026fe638aa..3316963dad3d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -68,7 +68,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 		best->edx |= F(APIC);
 
 	if (apic) {
-		if (best->ecx & F(TSC_DEADLINE_TIMER))
+		if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))
 			apic->lapic_timer.timer_mode_mask = 3 << 17;
 		else
 			apic->lapic_timer.timer_mode_mask = 1 << 17;
@@ -96,7 +96,8 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 	}
 
 	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
-	if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
+	if (best && (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
+		     cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
 		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
 
 	/*
@@ -155,7 +156,7 @@ static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
 			break;
 		}
 	}
-	if (entry && (entry->edx & F(NX)) && !is_efer_nx()) {
+	if (entry && cpuid_entry_has(entry, X86_FEATURE_NX) && !is_efer_nx()) {
 		entry->edx &= ~F(NX);
 		printk(KERN_INFO "kvm: guest NX capability removed\n");
 	}
@@ -387,7 +388,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 		entry->ebx |= F(TSC_ADJUST);
 
 		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
-		f_la57 = entry->ecx & F(LA57);
+		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
 		cpuid_mask(&entry->ecx, CPUID_7_ECX);
 		/* Set LA57 based on hardware capability. */
 		entry->ecx |= f_la57;
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 72a79bdfed6b..64e96e4086e2 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -95,16 +95,10 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
 	return reverse_cpuid[x86_leaf];
 }
 
-static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
+static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
+						  const struct cpuid_reg *cpuid)
 {
-	struct kvm_cpuid_entry2 *entry;
-	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
-
-	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
-	if (!entry)
-		return NULL;
-
-	switch (cpuid.reg) {
+	switch (cpuid->reg) {
 	case CPUID_EAX:
 		return &entry->eax;
 	case CPUID_EBX:
@@ -119,6 +113,40 @@ static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsi
 	}
 }
 
+static __always_inline u32 *cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
+						unsigned x86_feature)
+{
+	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
+
+	return __cpuid_entry_get_reg(entry, &cpuid);
+}
+
+static __always_inline u32 cpuid_entry_get(struct kvm_cpuid_entry2 *entry,
+					   unsigned x86_feature)
+{
+	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
+
+	return *reg & __feature_bit(x86_feature);
+}
+
+static __always_inline bool cpuid_entry_has(struct kvm_cpuid_entry2 *entry,
+					    unsigned x86_feature)
+{
+	return cpuid_entry_get(entry, x86_feature);
+}
+
+static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
+{
+	struct kvm_cpuid_entry2 *entry;
+	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
+
+	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
+	if (!entry)
+		return NULL;
+
+	return __cpuid_entry_get_reg(entry, &cpuid);
+}
+
 static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_feature)
 {
 	u32 *reg;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 27/61] KVM: x86: Introduce cpuid_entry_{change,set,clear}() mutators
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (25 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
       [not found]   ` <87ftf0p0d0.fsf@vitty.brq.redhat.com>
  2020-02-01 18:51 ` [PATCH 28/61] KVM: x86: Refactor cpuid_mask() to auto-retrieve the register Sean Christopherson
                   ` (34 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Introduce mutators to modify feature bits in CPUID entries and use the
new mutators where applicable.  Using the mutators eliminates the need
to manually specify the register to modify at no extra cost and
will allow adding runtime consistency checks on the function/index in a
future patch.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 62 +++++++++++++++++++-------------------------
 arch/x86/kvm/cpuid.h | 31 ++++++++++++++++++++++
 arch/x86/kvm/svm.c   | 13 ++++------
 3 files changed, 62 insertions(+), 44 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3316963dad3d..195f4dcc8c6a 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -57,15 +57,12 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 		return 0;
 
 	/* Update OSXSAVE bit */
-	if (boot_cpu_has(X86_FEATURE_XSAVE) && best->function == 0x1) {
-		best->ecx &= ~F(OSXSAVE);
-		if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE))
-			best->ecx |= F(OSXSAVE);
-	}
+	if (boot_cpu_has(X86_FEATURE_XSAVE) && best->function == 0x1)
+		cpuid_entry_change(best, X86_FEATURE_OSXSAVE,
+				   kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE));
 
-	best->edx &= ~F(APIC);
-	if (vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE)
-		best->edx |= F(APIC);
+	cpuid_entry_change(best, X86_FEATURE_APIC,
+			   vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE);
 
 	if (apic) {
 		if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))
@@ -75,14 +72,9 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 	}
 
 	best = kvm_find_cpuid_entry(vcpu, 7, 0);
-	if (best) {
-		/* Update OSPKE bit */
-		if (boot_cpu_has(X86_FEATURE_PKU) && best->function == 0x7) {
-			best->ecx &= ~F(OSPKE);
-			if (kvm_read_cr4_bits(vcpu, X86_CR4_PKE))
-				best->ecx |= F(OSPKE);
-		}
-	}
+	if (best && boot_cpu_has(X86_FEATURE_PKU) && best->function == 0x7)
+		cpuid_entry_change(best, X86_FEATURE_OSPKE,
+				   kvm_read_cr4_bits(vcpu, X86_CR4_PKE));
 
 	best = kvm_find_cpuid_entry(vcpu, 0xD, 0);
 	if (!best) {
@@ -119,12 +111,10 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 
 	if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT)) {
 		best = kvm_find_cpuid_entry(vcpu, 0x1, 0);
-		if (best) {
-			if (vcpu->arch.ia32_misc_enable_msr & MSR_IA32_MISC_ENABLE_MWAIT)
-				best->ecx |= F(MWAIT);
-			else
-				best->ecx &= ~F(MWAIT);
-		}
+		if (best)
+			cpuid_entry_change(best, X86_FEATURE_MWAIT,
+					   vcpu->arch.ia32_misc_enable_msr &
+					   MSR_IA32_MISC_ENABLE_MWAIT);
 	}
 
 	/* Update physical-address width */
@@ -157,7 +147,7 @@ static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
 		}
 	}
 	if (entry && cpuid_entry_has(entry, X86_FEATURE_NX) && !is_efer_nx()) {
-		entry->edx &= ~F(NX);
+		cpuid_entry_clear(entry, X86_FEATURE_NX);
 		printk(KERN_INFO "kvm: guest NX capability removed\n");
 	}
 }
@@ -385,7 +375,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 		entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
 		cpuid_mask(&entry->ebx, CPUID_7_0_EBX);
 		/* TSC_ADJUST is emulated */
-		entry->ebx |= F(TSC_ADJUST);
+		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
 
 		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
 		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
@@ -396,21 +386,21 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 		entry->ecx |= f_pku;
 		/* PKU is not yet implemented for shadow paging. */
 		if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
-			entry->ecx &= ~F(PKU);
+			cpuid_entry_clear(entry, X86_FEATURE_PKU);
 
 		entry->edx &= kvm_cpuid_7_0_edx_x86_features;
 		cpuid_mask(&entry->edx, CPUID_7_EDX);
 		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
-			entry->edx |= F(SPEC_CTRL);
+			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
 		if (boot_cpu_has(X86_FEATURE_STIBP))
-			entry->edx |= F(INTEL_STIBP);
+			cpuid_entry_set(entry, X86_FEATURE_INTEL_STIBP);
 		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
-			entry->edx |= F(SPEC_CTRL_SSBD);
+			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
 		/*
 		 * We emulate ARCH_CAPABILITIES in software even
 		 * if the host doesn't support it.
 		 */
-		entry->edx |= F(ARCH_CAPABILITIES);
+		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
 		break;
 	case 1:
 		entry->eax &= kvm_cpuid_7_1_eax_x86_features;
@@ -522,7 +512,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		cpuid_mask(&entry->ecx, CPUID_1_ECX);
 		/* we support x2apic emulation even if host does not support
 		 * it since we emulate x2apic in software */
-		entry->ecx |= F(X2APIC);
+		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
 		break;
 	/* function 2 entries are STATEFUL. That is, repeated cpuid commands
 	 * may return different values. This forces us to get_cpu() before
@@ -737,22 +727,22 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		 * record that in cpufeatures so use them.
 		 */
 		if (boot_cpu_has(X86_FEATURE_IBPB))
-			entry->ebx |= F(AMD_IBPB);
+			cpuid_entry_set(entry, X86_FEATURE_AMD_IBPB);
 		if (boot_cpu_has(X86_FEATURE_IBRS))
-			entry->ebx |= F(AMD_IBRS);
+			cpuid_entry_set(entry, X86_FEATURE_AMD_IBRS);
 		if (boot_cpu_has(X86_FEATURE_STIBP))
-			entry->ebx |= F(AMD_STIBP);
+			cpuid_entry_set(entry, X86_FEATURE_AMD_STIBP);
 		if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
-			entry->ebx |= F(AMD_SSBD);
+			cpuid_entry_set(entry, X86_FEATURE_AMD_SSBD);
 		if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
-			entry->ebx |= F(AMD_SSB_NO);
+			cpuid_entry_set(entry, X86_FEATURE_AMD_SSB_NO);
 		/*
 		 * The preference is to use SPEC CTRL MSR instead of the
 		 * VIRT_SPEC MSR.
 		 */
 		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
 		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
-			entry->ebx |= F(VIRT_SSBD);
+			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
 		break;
 	}
 	case 0x80000019:
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 64e96e4086e2..51f19eade5a0 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -135,6 +135,37 @@ static __always_inline bool cpuid_entry_has(struct kvm_cpuid_entry2 *entry,
 	return cpuid_entry_get(entry, x86_feature);
 }
 
+static __always_inline void cpuid_entry_clear(struct kvm_cpuid_entry2 *entry,
+					      unsigned x86_feature)
+{
+	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
+
+	*reg &= ~__feature_bit(x86_feature);
+}
+
+static __always_inline void cpuid_entry_set(struct kvm_cpuid_entry2 *entry,
+					    unsigned x86_feature)
+{
+	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
+
+	*reg |= __feature_bit(x86_feature);
+}
+
+static __always_inline void cpuid_entry_change(struct kvm_cpuid_entry2 *entry,
+					       unsigned x86_feature, bool set)
+{
+	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
+
+	/*
+	 * Open coded instead of using cpuid_entry_{clear,set}() to coerce the
+	 * compiler into using CMOV instead of Jcc when possible.
+	 */
+	if (set)
+		*reg |= __feature_bit(x86_feature);
+	else
+		*reg &= ~__feature_bit(x86_feature);
+}
+
 static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
 {
 	struct kvm_cpuid_entry2 *entry;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 535eb746fb0f..7bb5d81f0f11 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6030,23 +6030,21 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 					 APICV_INHIBIT_REASON_NESTED);
 }
 
-#define F feature_bit
-
 static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
 	switch (entry->function) {
 	case 0x1:
 		if (avic)
-			entry->ecx &= ~F(X2APIC);
+			cpuid_entry_clear(entry, X86_FEATURE_X2APIC);
 		break;
 	case 0x80000001:
 		if (nested)
-			entry->ecx |= (1 << 2); /* Set SVM bit */
+			cpuid_entry_set(entry, X86_FEATURE_SVM);
 		break;
 	case 0x80000008:
 		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
 		     boot_cpu_has(X86_FEATURE_AMD_SSBD))
-			entry->ebx |= F(VIRT_SSBD);
+			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
 		break;
 	case 0x8000000A:
 		entry->eax = 1; /* SVM revision 1 */
@@ -6058,12 +6056,11 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 
 		/* Support next_rip if host supports it */
 		if (boot_cpu_has(X86_FEATURE_NRIPS))
-			entry->edx |= F(NRIPS);
+			cpuid_entry_set(entry, X86_FEATURE_NRIPS);
 
 		/* Support NPT for the guest if enabled */
 		if (npt_enabled)
-			entry->edx |= F(NPT);
-
+			cpuid_entry_set(entry, X86_FEATURE_NPT);
 	}
 }
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 28/61] KVM: x86: Refactor cpuid_mask() to auto-retrieve the register
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (26 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 27/61] KVM: x86: Introduce cpuid_entry_{change,set,clear}() mutators Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 13:49   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups Sean Christopherson
                   ` (33 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use the recently introduced cpuid_entry_get_reg() to automatically get
the appropriate register when masking a CPUID entry.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 195f4dcc8c6a..cb5870a323cc 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -254,10 +254,12 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static __always_inline void cpuid_mask(u32 *word, int wordnum)
+static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
+					     enum cpuid_leafs leaf)
 {
-	reverse_cpuid_check(wordnum);
-	*word &= boot_cpu_data.x86_capability[wordnum];
+	u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
+
+	*reg &= boot_cpu_data.x86_capability[leaf];
 }
 
 struct kvm_cpuid_array {
@@ -373,13 +375,13 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 	case 0:
 		entry->eax = min(entry->eax, 1u);
 		entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
-		cpuid_mask(&entry->ebx, CPUID_7_0_EBX);
+		cpuid_entry_mask(entry, CPUID_7_0_EBX);
 		/* TSC_ADJUST is emulated */
 		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
 
 		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
 		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
-		cpuid_mask(&entry->ecx, CPUID_7_ECX);
+		cpuid_entry_mask(entry, CPUID_7_ECX);
 		/* Set LA57 based on hardware capability. */
 		entry->ecx |= f_la57;
 		entry->ecx |= f_umip;
@@ -389,7 +391,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 			cpuid_entry_clear(entry, X86_FEATURE_PKU);
 
 		entry->edx &= kvm_cpuid_7_0_edx_x86_features;
-		cpuid_mask(&entry->edx, CPUID_7_EDX);
+		cpuid_entry_mask(entry, CPUID_7_EDX);
 		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
 			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
 		if (boot_cpu_has(X86_FEATURE_STIBP))
@@ -507,9 +509,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		break;
 	case 1:
 		entry->edx &= kvm_cpuid_1_edx_x86_features;
-		cpuid_mask(&entry->edx, CPUID_1_EDX);
+		cpuid_entry_mask(entry, CPUID_1_EDX);
 		entry->ecx &= kvm_cpuid_1_ecx_x86_features;
-		cpuid_mask(&entry->ecx, CPUID_1_ECX);
+		cpuid_entry_mask(entry, CPUID_1_ECX);
 		/* we support x2apic emulation even if host does not support
 		 * it since we emulate x2apic in software */
 		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
@@ -619,7 +621,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			goto out;
 
 		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
-		cpuid_mask(&entry->eax, CPUID_D_1_EAX);
+		cpuid_entry_mask(entry, CPUID_D_1_EAX);
 		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
 			entry->ebx = xstate_required_size(supported_xcr0, true);
 		else
@@ -699,9 +701,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		break;
 	case 0x80000001:
 		entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
-		cpuid_mask(&entry->edx, CPUID_8000_0001_EDX);
+		cpuid_entry_mask(entry, CPUID_8000_0001_EDX);
 		entry->ecx &= kvm_cpuid_8000_0001_ecx_x86_features;
-		cpuid_mask(&entry->ecx, CPUID_8000_0001_ECX);
+		cpuid_entry_mask(entry, CPUID_8000_0001_ECX);
 		break;
 	case 0x80000007: /* Advanced power management */
 		/* invariant TSC is CPUID.80000007H:EDX[8] */
@@ -720,7 +722,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = g_phys_as | (virt_as << 8);
 		entry->edx = 0;
 		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
-		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
+		cpuid_entry_mask(entry, CPUID_8000_0008_EBX);
 		/*
 		 * AMD has separate bits for each SPEC_CTRL bit.
 		 * arch/x86/kernel/cpu/bugs.c is kind enough to
@@ -763,7 +765,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		break;
 	case 0xC0000001:
 		entry->edx &= kvm_cpuid_C000_0001_edx_x86_features;
-		cpuid_mask(&entry->edx, CPUID_C000_0001_EDX);
+		cpuid_entry_mask(entry, CPUID_C000_0001_EDX);
 		break;
 	case 3: /* Processor serial number */
 	case 5: /* MONITOR/MWAIT */
-- 
2.24.1
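
[Editor's note] For readers new to the reverse CPUID machinery that
cpuid_entry_get_reg() builds on: every feature word (CPUID_7_0_EBX,
CPUID_7_ECX, ...) is statically mapped back to a (function, index,
register) tuple, so the leaf alone is enough to pick the correct output
register of an entry.  A rough userspace illustration of that idea, not
an excerpt from KVM (the names below are made up for the example):

  #include <stdio.h>

  /* Simplified stand-in for KVM's struct kvm_cpuid_entry2. */
  struct cpuid_entry {
          unsigned int function, index;
          unsigned int eax, ebx, ecx, edx;
  };

  enum reg { REG_EAX, REG_EBX, REG_ECX, REG_EDX };

  /* One row per feature word; KVM's real table is reverse_cpuid[]. */
  struct leaf_desc {
          unsigned int function, index;
          enum reg reg;
  };

  static const struct leaf_desc demo_leaf = { 0x7, 0, REG_EBX };

  static unsigned int *entry_get_reg(struct cpuid_entry *e,
                                     const struct leaf_desc *d)
  {
          switch (d->reg) {
          case REG_EAX: return &e->eax;
          case REG_EBX: return &e->ebx;
          case REG_ECX: return &e->ecx;
          default:      return &e->edx;
          }
  }

  int main(void)
  {
          struct cpuid_entry e = { .function = 0x7, .ebx = 0xffffffffu };
          unsigned int host_word = 0x00c0ffee;  /* pretend host capability word */

          /* Mask the register selected by the leaf descriptor. */
          *entry_get_reg(&e, &demo_leaf) &= host_word;
          printf("masked ebx = %#x\n", e.ebx);  /* prints 0xc0ffee */
          return 0;
  }

In KVM proper the same lookup is what lets cpuid_entry_mask() index
boot_cpu_data.x86_capability[leaf] without the caller naming the
register by hand.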


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (27 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 28/61] KVM: x86: Refactor cpuid_mask() to auto-retrieve the register Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 13:54   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code Sean Christopherson
                   ` (32 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Add WARNs in the low level __cpuid_entry_get_reg() to assert that the
function and index of the CPUID entry and reverse CPUID entry match.
Wrap the WARNs in a new Kconfig, KVM_CPUID_AUDIT, as the checks add
almost no value in a production environment, i.e. will only detect
blatant KVM bugs and fatal hardware errors.  Add a Kconfig instead of
simply wrapping the WARNs with an off-by-default #ifdef so that syzbot
and other automated testing can enable the auditing.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/Kconfig | 10 ++++++++++
 arch/x86/kvm/cpuid.h |  5 +++++
 2 files changed, 15 insertions(+)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 840e12583b85..bbbc3258358e 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -96,6 +96,16 @@ config KVM_MMU_AUDIT
 	 This option adds a R/W kVM module parameter 'mmu_audit', which allows
 	 auditing of KVM MMU events at runtime.
 
+config KVM_CPUID_AUDIT
+	bool "Audit KVM reverse CPUID lookups"
+	depends on KVM
+	help
+	 This option enables runtime checking of reverse CPUID lookups in KVM
+	 to verify the function and index of the referenced X86_FEATURE_* match
+	 the function and index of the CPUID entry being accessed.
+
+	 If unsure, say N.
+
 # OK, it's a little counter-intuitive to do this, but it puts it neatly under
 # the virtualization menu.
 source "drivers/vhost/Kconfig"
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 51f19eade5a0..41ff94a7d3e0 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -98,6 +98,11 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
 static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
 						  const struct cpuid_reg *cpuid)
 {
+#ifdef CONFIG_KVM_CPUID_AUDIT
+	WARN_ON_ONCE(entry->function != cpuid->function);
+	WARN_ON_ONCE(entry->index != cpuid->index);
+#endif
+
 	switch (cpuid->reg) {
 	case CPUID_EAX:
 		return &entry->eax;
-- 
2.24.1
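
[Editor's note] For anyone who wants the checks in a test build, the new
option is an ordinary boolean Kconfig; an audited test kernel's .config
would contain something like (illustrative):

  CONFIG_KVM=m
  # KVM_INTEL or KVM_AMD as appropriate:
  CONFIG_KVM_INTEL=m
  CONFIG_KVM_CPUID_AUDIT=y

With the option set, a mismatched function/index in a reverse CPUID
lookup trips WARN_ON_ONCE() on the first offending access; without it
the checks compile away entirely.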


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (28 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-13 13:51   ` Xiaoyao Li
  2020-02-24 15:14   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 31/61] KVM: x86: Handle INVPCID " Sean Christopherson
                   ` (31 subsequent siblings)
  61 siblings, 2 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the MPX CPUID adjustments into VMX to eliminate an instance of the
undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
common CPUID handling code.

Note, VMX must manually check for kernel support via
boot_cpu_has(X86_FEATURE_MPX).

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c   |  3 +--
 arch/x86/kvm/vmx/vmx.c | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index cb5870a323cc..09e24d1d731c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -340,7 +340,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 {
 	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
-	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
 	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 	unsigned f_la57;
@@ -349,7 +348,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 	/* cpuid 7.0.ebx */
 	const u32 kvm_cpuid_7_0_ebx_x86_features =
 		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
-		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | f_mpx | F(RDSEED) |
+		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
 		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
 		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
 		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3ff830e2258e..143193fc178e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7106,8 +7106,18 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 
 static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
-	if (entry->function == 1 && nested)
-		entry->ecx |= feature_bit(VMX);
+	switch (entry->function) {
+	case 0x1:
+		if (nested)
+			cpuid_entry_set(entry, X86_FEATURE_VMX);
+		break;
+	case 0x7:
+		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
+			cpuid_entry_set(entry, X86_FEATURE_MPX);
+		break;
+	default:
+		break;
+	}
 }
 
 static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
-- 
2.24.1
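
[Editor's note] To make the pattern being removed concrete, the net
effect of this and the next few patches has roughly the following shape
(condensed from the hunks above, not a literal excerpt):

  /* Before: common code asks the vendor module through a hook and
   * bakes the answer into the leaf 0x7 mask. */
  unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
  const u32 kvm_cpuid_7_0_ebx_x86_features = ... | f_mpx | ...;

  /* After: the common mask never advertises the bit, and VMX opts in
   * from ->set_supported_cpuid() only when it can actually expose MPX. */
  case 0x7:
          if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
                  cpuid_entry_set(entry, X86_FEATURE_MPX);
          break;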


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 31/61] KVM: x86: Handle INVPCID CPUID adjustment in VMX code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (29 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 15:19   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 32/61] KVM: x86: Handle UMIP emulation " Sean Christopherson
                   ` (30 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the INVPCID CPUID adjustments into VMX to eliminate an instance of
the undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
common CPUID handling code.  Drop ->invpcid_supported(); the CPUID
adjustment was its only user.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/cpuid.c            |  3 +--
 arch/x86/kvm/svm.c              |  6 ------
 arch/x86/kvm/vmx/vmx.c          | 10 +++-------
 4 files changed, 4 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a61928d5435b..9baff70ad419 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1144,7 +1144,6 @@ struct kvm_x86_ops {
 	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 	int (*get_lpage_level)(void);
 	bool (*rdtscp_supported)(void);
-	bool (*invpcid_supported)(void);
 
 	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 09e24d1d731c..a5f150204d73 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -339,7 +339,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 
 static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 {
-	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
 	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 	unsigned f_la57;
@@ -348,7 +347,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 	/* cpuid 7.0.ebx */
 	const u32 kvm_cpuid_7_0_ebx_x86_features =
 		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
-		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
+		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
 		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
 		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
 		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7bb5d81f0f11..c0f8c09f3b04 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6074,11 +6074,6 @@ static bool svm_rdtscp_supported(void)
 	return boot_cpu_has(X86_FEATURE_RDTSCP);
 }
 
-static bool svm_invpcid_supported(void)
-{
-	return false;
-}
-
 static bool svm_xsaves_supported(void)
 {
 	return boot_cpu_has(X86_FEATURE_XSAVES);
@@ -7459,7 +7454,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpuid_update = svm_cpuid_update,
 
 	.rdtscp_supported = svm_rdtscp_supported,
-	.invpcid_supported = svm_invpcid_supported,
 	.xsaves_supported = svm_xsaves_supported,
 	.umip_emulated = svm_umip_emulated,
 	.pt_supported = svm_pt_supported,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 143193fc178e..49ee4c600934 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1656,11 +1656,6 @@ static bool vmx_rdtscp_supported(void)
 	return cpu_has_vmx_rdtscp();
 }
 
-static bool vmx_invpcid_supported(void)
-{
-	return cpu_has_vmx_invpcid();
-}
-
 /*
  * Swap MSR entry in host/guest MSR entry array.
  */
@@ -4071,7 +4066,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 		}
 	}
 
-	if (vmx_invpcid_supported()) {
+	if (cpu_has_vmx_invpcid()) {
 		/* Exposing INVPCID only when PCID is exposed */
 		bool invpcid_enabled =
 			guest_cpuid_has(vcpu, X86_FEATURE_INVPCID) &&
@@ -7114,6 +7109,8 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 	case 0x7:
 		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
 			cpuid_entry_set(entry, X86_FEATURE_MPX);
+		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
+			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
 		break;
 	default:
 		break;
@@ -7854,7 +7851,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.cpuid_update = vmx_cpuid_update,
 
 	.rdtscp_supported = vmx_rdtscp_supported,
-	.invpcid_supported = vmx_invpcid_supported,
 
 	.set_supported_cpuid = vmx_set_supported_cpuid,
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 32/61] KVM: x86: Handle UMIP emulation CPUID adjustment in VMX code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (30 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 31/61] KVM: x86: Handle INVPCID " Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 15:21   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 33/61] KVM: x86: Handle PKU " Sean Christopherson
                   ` (29 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the CPUID adjustment for UMIP emulation into VMX code to eliminate
an instance of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
pattern in the common CPUID handling code.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c   | 2 --
 arch/x86/kvm/vmx/vmx.c | 2 ++
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a5f150204d73..202a6c0f1db8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -339,7 +339,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 
 static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 {
-	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 	unsigned f_la57;
 	unsigned f_pku = kvm_x86_ops->pku_supported() ? F(PKU) : 0;
@@ -382,7 +381,6 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 		cpuid_entry_mask(entry, CPUID_7_ECX);
 		/* Set LA57 based on hardware capability. */
 		entry->ecx |= f_la57;
-		entry->ecx |= f_umip;
 		entry->ecx |= f_pku;
 		/* PKU is not yet implemented for shadow paging. */
 		if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 49ee4c600934..9d2e36a5ecb9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7111,6 +7111,8 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 			cpuid_entry_set(entry, X86_FEATURE_MPX);
 		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
 			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
+		if (vmx_umip_emulated())
+			cpuid_entry_set(entry, X86_FEATURE_UMIP);
 		break;
 	default:
 		break;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 33/61] KVM: x86: Handle PKU CPUID adjustment in VMX code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (31 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 32/61] KVM: x86: Handle UMIP emulation " Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 15:24   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 34/61] KVM: x86: Handle RDTSCP " Sean Christopherson
                   ` (28 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the setting of the PKU CPUID bit into VMX to eliminate an instance
of the undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in
the common CPUID handling code.  Drop ->pku_supported(); the CPUID
adjustment was its only user.

Note, some AMD CPUs now support PKU, but SVM doesn't yet support
exposing it to a guest.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 -
 arch/x86/kvm/cpuid.c            | 5 -----
 arch/x86/kvm/svm.c              | 6 ------
 arch/x86/kvm/vmx/capabilities.h | 5 -----
 arch/x86/kvm/vmx/vmx.c          | 6 +++++-
 5 files changed, 5 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9baff70ad419..ba828569cda5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1166,7 +1166,6 @@ struct kvm_x86_ops {
 	bool (*xsaves_supported)(void);
 	bool (*umip_emulated)(void);
 	bool (*pt_supported)(void);
-	bool (*pku_supported)(void);
 
 	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
 	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 202a6c0f1db8..a1f46b3ca16e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -341,7 +341,6 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 {
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 	unsigned f_la57;
-	unsigned f_pku = kvm_x86_ops->pku_supported() ? F(PKU) : 0;
 
 	/* cpuid 7.0.ebx */
 	const u32 kvm_cpuid_7_0_ebx_x86_features =
@@ -381,10 +380,6 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 		cpuid_entry_mask(entry, CPUID_7_ECX);
 		/* Set LA57 based on hardware capability. */
 		entry->ecx |= f_la57;
-		entry->ecx |= f_pku;
-		/* PKU is not yet implemented for shadow paging. */
-		if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
-			cpuid_entry_clear(entry, X86_FEATURE_PKU);
 
 		entry->edx &= kvm_cpuid_7_0_edx_x86_features;
 		cpuid_entry_mask(entry, CPUID_7_EDX);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c0f8c09f3b04..630520f8adfa 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6094,11 +6094,6 @@ static bool svm_has_wbinvd_exit(void)
 	return true;
 }
 
-static bool svm_pku_supported(void)
-{
-	return false;
-}
-
 #define PRE_EX(exit)  { .exit_code = (exit), \
 			.stage = X86_ICPT_PRE_EXCEPT, }
 #define POST_EX(exit) { .exit_code = (exit), \
@@ -7457,7 +7452,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.xsaves_supported = svm_xsaves_supported,
 	.umip_emulated = svm_umip_emulated,
 	.pt_supported = svm_pt_supported,
-	.pku_supported = svm_pku_supported,
 
 	.set_supported_cpuid = svm_set_supported_cpuid,
 
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 0a0b1494a934..7cae355e3490 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -145,11 +145,6 @@ static inline bool vmx_umip_emulated(void)
 		SECONDARY_EXEC_DESC;
 }
 
-static inline bool vmx_pku_supported(void)
-{
-	return boot_cpu_has(X86_FEATURE_PKU);
-}
-
 static inline bool cpu_has_vmx_rdtscp(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9d2e36a5ecb9..a9728cc0c343 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7113,6 +7113,11 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
 		if (vmx_umip_emulated())
 			cpuid_entry_set(entry, X86_FEATURE_UMIP);
+
+		/* PKU is not yet implemented for shadow paging. */
+		if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
+		    boot_cpu_has(X86_FEATURE_OSPKE))
+			cpuid_entry_set(entry, X86_FEATURE_PKU);
 		break;
 	default:
 		break;
@@ -7868,7 +7873,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.xsaves_supported = vmx_xsaves_supported,
 	.umip_emulated = vmx_umip_emulated,
 	.pt_supported = vmx_pt_supported,
-	.pku_supported = vmx_pku_supported,
 
 	.request_immediate_exit = vmx_request_immediate_exit,
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 34/61] KVM: x86: Handle RDTSCP CPUID adjustment in VMX code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (32 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 33/61] KVM: x86: Handle PKU " Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 15:28   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 35/61] KVM: x86: Handle Intel PT " Sean Christopherson
                   ` (27 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the clearing of the RDTSCP CPUID bit into VMX, which has a separate
VMCS control to enable RDTSCP in non-root, to eliminate an instance of
the undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
common CPUID handling code.  Drop ->rdtscp_supported() since CPUID
adjustment was the last remaining user.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c   | 3 +--
 arch/x86/kvm/vmx/vmx.c | 4 ++++
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a1f46b3ca16e..fc507270f3f3 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -424,7 +424,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 	unsigned f_gbpages = 0;
 	unsigned f_lm = 0;
 #endif
-	unsigned f_rdtscp = kvm_x86_ops->rdtscp_supported() ? F(RDTSCP) : 0;
 	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 
@@ -446,7 +445,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
 		F(PAT) | F(PSE36) | 0 /* Reserved */ |
 		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
-		F(FXSR) | F(FXSR_OPT) | f_gbpages | f_rdtscp |
+		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
 		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW);
 	/* cpuid 1.ecx */
 	const u32 kvm_cpuid_1_ecx_x86_features =
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a9728cc0c343..3990ba691d07 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7119,6 +7119,10 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 		    boot_cpu_has(X86_FEATURE_OSPKE))
 			cpuid_entry_set(entry, X86_FEATURE_PKU);
 		break;
+	case 0x80000001:
+		if (!cpu_has_vmx_rdtscp())
+			cpuid_entry_clear(entry, X86_FEATURE_RDTSCP);
+		break;
 	default:
 		break;
 	}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 35/61] KVM: x86: Handle Intel PT CPUID adjustment in VMX code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (33 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 34/61] KVM: x86: Handle RDTSCP " Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 15:30   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 36/61] KVM: x86: Handle GBPAGE CPUID adjustment for EPT " Sean Christopherson
                   ` (26 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the Processor Trace CPUID adjustment into VMX code to eliminate
an instance of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
pattern in the common CPUID handling code, and to pave the way toward
eventually removing ->pt_supported().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c   | 3 +--
 arch/x86/kvm/vmx/vmx.c | 3 +++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index fc507270f3f3..f4a3655451dd 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -339,7 +339,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 
 static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 {
-	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 	unsigned f_la57;
 
 	/* cpuid 7.0.ebx */
@@ -348,7 +347,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
 		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
 		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
-		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
+		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/;
 
 	/* cpuid 7.0.ecx*/
 	const u32 kvm_cpuid_7_0_ecx_x86_features =
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3990ba691d07..fcec3d8a0176 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7111,6 +7111,9 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 			cpuid_entry_set(entry, X86_FEATURE_MPX);
 		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
 			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
+		if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
+		    vmx_pt_mode_is_host_guest())
+			cpuid_entry_set(entry, X86_FEATURE_INTEL_PT);
 		if (vmx_umip_emulated())
 			cpuid_entry_set(entry, X86_FEATURE_UMIP);
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 36/61] KVM: x86: Handle GBPAGE CPUID adjustment for EPT in VMX code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (34 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 35/61] KVM: x86: Handle Intel PT " Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 15:34   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 37/61] KVM: x86: Refactor handling of XSAVES CPUID adjustment Sean Christopherson
                   ` (25 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the clearing of the GBPAGE CPUID bit into VMX to eliminate an
instance of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
pattern in the common CPUID handling code, and to pave the way toward
eliminating ->get_lpage_level().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c   | 3 +--
 arch/x86/kvm/vmx/vmx.c | 2 ++
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f4a3655451dd..c74253202af8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -416,8 +416,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 	int r, i, max_idx;
 	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
 #ifdef CONFIG_X86_64
-	unsigned f_gbpages = (kvm_x86_ops->get_lpage_level() == PT_PDPE_LEVEL)
-				? F(GBPAGES) : 0;
+	unsigned f_gbpages = F(GBPAGES);
 	unsigned f_lm = F(LM);
 #else
 	unsigned f_gbpages = 0;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fcec3d8a0176..11b9c1e7e520 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7125,6 +7125,8 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 	case 0x80000001:
 		if (!cpu_has_vmx_rdtscp())
 			cpuid_entry_clear(entry, X86_FEATURE_RDTSCP);
+		if (enable_ept && !cpu_has_vmx_ept_1g_page())
+			cpuid_entry_clear(entry, X86_FEATURE_GBPAGES);
 		break;
 	default:
 		break;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 37/61] KVM: x86: Refactor handling of XSAVES CPUID adjustment
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (35 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 36/61] KVM: x86: Handle GBPAGE CPUID adjustment for EPT " Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 15:39   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking Sean Christopherson
                   ` (24 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Invert the handling of XSAVES, i.e. set it based on boot_cpu_has() by
default, in preparation for adding KVM cpu caps, which will generate the
mask at load time before ->xsaves_supported() is ready.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c74253202af8..20a7af320291 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -422,7 +422,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 	unsigned f_gbpages = 0;
 	unsigned f_lm = 0;
 #endif
-	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 
 	/* cpuid 1.edx */
@@ -479,7 +478,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 
 	/* cpuid 0xD.1.eax */
 	const u32 kvm_cpuid_D_1_eax_x86_features =
-		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | f_xsaves;
+		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES);
 
 	/* all calls to cpuid_count() should be made on the same cpu */
 	get_cpu();
@@ -610,6 +609,10 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 
 		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
 		cpuid_entry_mask(entry, CPUID_D_1_EAX);
+
+		if (!kvm_x86_ops->xsaves_supported())
+			cpuid_entry_clear(entry, X86_FEATURE_XSAVES);
+
 		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
 			entry->ebx = xstate_required_size(supported_xcr0, true);
 		else
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (36 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 37/61] KVM: x86: Refactor handling of XSAVES CPUID adjustment Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 16:32   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps Sean Christopherson
                   ` (23 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Calculate the CPUID masks for KVM_GET_SUPPORTED_CPUID at load time using
what is effectively a KVM-adjusted copy of boot_cpu_data, or more
precisely, the x86_capability array in boot_cpu_data.

In terms of KVM support, the vast majority of CPUID feature bits are
constant, and *all* feature support is known at KVM load time.  Rather
than apply boot_cpu_data, which is effectively read-only after init,
at runtime, copy it into a KVM-specific array and use *that* to mask
CPUID registers.

In addition to consolidating the masking, kvm_cpu_caps can be adjusted
by SVM/VMX at load time, thus eliminating all feature bit manipulation
in ->set_supported_cpuid().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 229 +++++++++++++++++++++++--------------------
 arch/x86/kvm/cpuid.h |  19 ++++
 arch/x86/kvm/x86.c   |   2 +
 3 files changed, 142 insertions(+), 108 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 20a7af320291..c2a4c9df49a9 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -24,6 +24,13 @@
 #include "trace.h"
 #include "pmu.h"
 
+/*
+ * Unlike "struct cpuinfo_x86.x86_capability", kvm_cpu_caps doesn't need to be
+ * aligned to sizeof(unsigned long) because it's not accessed via bitops.
+ */
+u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
+EXPORT_SYMBOL_GPL(kvm_cpu_caps);
+
 static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 {
 	int feature_bit = 0;
@@ -259,7 +266,119 @@ static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
 {
 	u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
 
-	*reg &= boot_cpu_data.x86_capability[leaf];
+	BUILD_BUG_ON(leaf > ARRAY_SIZE(kvm_cpu_caps));
+	*reg &= kvm_cpu_caps[leaf];
+}
+
+static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
+{
+	reverse_cpuid_check(leaf);
+	kvm_cpu_caps[leaf] &= mask;
+}
+
+void kvm_set_cpu_caps(void)
+{
+	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
+#ifdef CONFIG_X86_64
+	unsigned f_gbpages = F(GBPAGES);
+	unsigned f_lm = F(LM);
+#else
+	unsigned f_gbpages = 0;
+	unsigned f_lm = 0;
+#endif
+
+	BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
+		     sizeof(boot_cpu_data.x86_capability));
+
+	memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
+	       sizeof(kvm_cpu_caps));
+
+	kvm_cpu_cap_mask(CPUID_1_EDX,
+		F(FPU) | F(VME) | F(DE) | F(PSE) |
+		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
+		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
+		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
+		F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
+		0 /* Reserved, DS, ACPI */ | F(MMX) |
+		F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
+		0 /* HTT, TM, Reserved, PBE */
+	);
+
+	kvm_cpu_cap_mask(CPUID_8000_0001_EDX,
+		F(FPU) | F(VME) | F(DE) | F(PSE) |
+		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
+		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
+		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
+		F(PAT) | F(PSE36) | 0 /* Reserved */ |
+		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
+		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
+		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW)
+	);
+
+	kvm_cpu_cap_mask(CPUID_1_ECX,
+		/* NOTE: MONITOR (and MWAIT) are emulated as NOP,
+		 * but *not* advertised to guests via CPUID ! */
+		F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
+		0 /* DS-CPL, VMX, SMX, EST */ |
+		0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
+		F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
+		F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
+		F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
+		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
+		F(F16C) | F(RDRAND)
+	);
+
+	kvm_cpu_cap_mask(CPUID_7_0_EBX,
+		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
+		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
+		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
+		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
+		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/
+	);
+
+	kvm_cpu_cap_mask(CPUID_7_ECX,
+		F(AVX512VBMI) | F(LA57) | 0 /*PKU*/ | 0 /*OSPKE*/ | F(RDPID) |
+		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
+		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
+		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/
+	);
+	/* Set LA57 based on hardware capability. */
+	if (cpuid_ecx(7) & F(LA57))
+		kvm_cpu_cap_set(X86_FEATURE_LA57);
+
+	kvm_cpu_cap_mask(CPUID_7_EDX,
+		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
+		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
+		F(MD_CLEAR)
+	);
+
+	kvm_cpu_cap_mask(CPUID_7_1_EAX,
+		F(AVX512_BF16)
+	);
+
+	kvm_cpu_cap_mask(CPUID_D_1_EAX,
+		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES)
+	);
+
+	kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
+		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
+		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
+		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
+		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) |
+		F(TOPOEXT) | F(PERFCTR_CORE)
+	);
+
+	kvm_cpu_cap_mask(CPUID_8000_0008_EBX,
+		F(CLZERO) | F(XSAVEERPTR) |
+		F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
+		F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON)
+	);
+
+	kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
+		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
+		F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
+		F(PMM) | F(PMM_EN)
+	);
 }
 
 struct kvm_cpuid_array {
@@ -339,48 +458,13 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 
 static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 {
-	unsigned f_la57;
-
-	/* cpuid 7.0.ebx */
-	const u32 kvm_cpuid_7_0_ebx_x86_features =
-		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
-		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
-		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
-		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
-		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/;
-
-	/* cpuid 7.0.ecx*/
-	const u32 kvm_cpuid_7_0_ecx_x86_features =
-		F(AVX512VBMI) | F(LA57) | 0 /*PKU*/ | 0 /*OSPKE*/ | F(RDPID) |
-		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
-		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
-		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/;
-
-	/* cpuid 7.0.edx*/
-	const u32 kvm_cpuid_7_0_edx_x86_features =
-		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
-		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
-		F(MD_CLEAR);
-
-	/* cpuid 7.1.eax */
-	const u32 kvm_cpuid_7_1_eax_x86_features =
-		F(AVX512_BF16);
-
 	switch (entry->index) {
 	case 0:
 		entry->eax = min(entry->eax, 1u);
-		entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
 		cpuid_entry_mask(entry, CPUID_7_0_EBX);
 		/* TSC_ADJUST is emulated */
 		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
-
-		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
-		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
 		cpuid_entry_mask(entry, CPUID_7_ECX);
-		/* Set LA57 based on hardware capability. */
-		entry->ecx |= f_la57;
-
-		entry->edx &= kvm_cpuid_7_0_edx_x86_features;
 		cpuid_entry_mask(entry, CPUID_7_EDX);
 		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
 			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
@@ -395,7 +479,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
 		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
 		break;
 	case 1:
-		entry->eax &= kvm_cpuid_7_1_eax_x86_features;
+		cpuid_entry_mask(entry, CPUID_7_1_EAX);
 		entry->ebx = 0;
 		entry->ecx = 0;
 		entry->edx = 0;
@@ -414,72 +498,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 {
 	struct kvm_cpuid_entry2 *entry;
 	int r, i, max_idx;
-	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
-#ifdef CONFIG_X86_64
-	unsigned f_gbpages = F(GBPAGES);
-	unsigned f_lm = F(LM);
-#else
-	unsigned f_gbpages = 0;
-	unsigned f_lm = 0;
-#endif
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 
-	/* cpuid 1.edx */
-	const u32 kvm_cpuid_1_edx_x86_features =
-		F(FPU) | F(VME) | F(DE) | F(PSE) |
-		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
-		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
-		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
-		F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
-		0 /* Reserved, DS, ACPI */ | F(MMX) |
-		F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
-		0 /* HTT, TM, Reserved, PBE */;
-	/* cpuid 0x80000001.edx */
-	const u32 kvm_cpuid_8000_0001_edx_x86_features =
-		F(FPU) | F(VME) | F(DE) | F(PSE) |
-		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
-		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
-		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
-		F(PAT) | F(PSE36) | 0 /* Reserved */ |
-		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
-		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
-		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW);
-	/* cpuid 1.ecx */
-	const u32 kvm_cpuid_1_ecx_x86_features =
-		/* NOTE: MONITOR (and MWAIT) are emulated as NOP,
-		 * but *not* advertised to guests via CPUID ! */
-		F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
-		0 /* DS-CPL, VMX, SMX, EST */ |
-		0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
-		F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
-		F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
-		F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
-		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
-		F(F16C) | F(RDRAND);
-	/* cpuid 0x80000001.ecx */
-	const u32 kvm_cpuid_8000_0001_ecx_x86_features =
-		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
-		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
-		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
-		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) |
-		F(TOPOEXT) | F(PERFCTR_CORE);
-
-	/* cpuid 0x80000008.ebx */
-	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(CLZERO) | F(XSAVEERPTR) |
-		F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
-		F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON);
-
-	/* cpuid 0xC0000001.edx */
-	const u32 kvm_cpuid_C000_0001_edx_x86_features =
-		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
-		F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
-		F(PMM) | F(PMM_EN);
-
-	/* cpuid 0xD.1.eax */
-	const u32 kvm_cpuid_D_1_eax_x86_features =
-		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES);
-
 	/* all calls to cpuid_count() should be made on the same cpu */
 	get_cpu();
 
@@ -495,9 +515,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = min(entry->eax, 0x1fU);
 		break;
 	case 1:
-		entry->edx &= kvm_cpuid_1_edx_x86_features;
 		cpuid_entry_mask(entry, CPUID_1_EDX);
-		entry->ecx &= kvm_cpuid_1_ecx_x86_features;
 		cpuid_entry_mask(entry, CPUID_1_ECX);
 		/* we support x2apic emulation even if host does not support
 		 * it since we emulate x2apic in software */
@@ -607,7 +625,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		if (!entry)
 			goto out;
 
-		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
 		cpuid_entry_mask(entry, CPUID_D_1_EAX);
 
 		if (!kvm_x86_ops->xsaves_supported())
@@ -691,9 +708,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = min(entry->eax, 0x8000001f);
 		break;
 	case 0x80000001:
-		entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
 		cpuid_entry_mask(entry, CPUID_8000_0001_EDX);
-		entry->ecx &= kvm_cpuid_8000_0001_ecx_x86_features;
 		cpuid_entry_mask(entry, CPUID_8000_0001_ECX);
 		break;
 	case 0x80000007: /* Advanced power management */
@@ -712,7 +727,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			g_phys_as = phys_as;
 		entry->eax = g_phys_as | (virt_as << 8);
 		entry->edx = 0;
-		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
 		cpuid_entry_mask(entry, CPUID_8000_0008_EBX);
 		/*
 		 * AMD has separate bits for each SPEC_CTRL bit.
@@ -755,7 +769,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = min(entry->eax, 0xC0000004);
 		break;
 	case 0xC0000001:
-		entry->edx &= kvm_cpuid_C000_0001_edx_x86_features;
 		cpuid_entry_mask(entry, CPUID_C000_0001_EDX);
 		break;
 	case 3: /* Processor serial number */
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 41ff94a7d3e0..c64283582d96 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -6,6 +6,9 @@
 #include <asm/cpu.h>
 #include <asm/processor.h>
 
+extern u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
+void kvm_set_cpu_caps(void);
+
 int kvm_update_cpuid(struct kvm_vcpu *vcpu);
 struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
 					      u32 function, u32 index);
@@ -255,4 +258,20 @@ static inline bool cpuid_fault_enabled(struct kvm_vcpu *vcpu)
 		  MSR_MISC_FEATURES_ENABLES_CPUID_FAULT;
 }
 
+static __always_inline void kvm_cpu_cap_clear(unsigned x86_feature)
+{
+	unsigned x86_leaf = x86_feature / 32;
+
+	reverse_cpuid_check(x86_leaf);
+	kvm_cpu_caps[x86_leaf] &= ~__feature_bit(x86_feature);
+}
+
+static __always_inline void kvm_cpu_cap_set(unsigned x86_feature)
+{
+	unsigned x86_leaf = x86_feature / 32;
+
+	reverse_cpuid_check(x86_leaf);
+	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f90c56c0c64a..c5ed199d6cd9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9591,6 +9591,8 @@ int kvm_arch_hardware_setup(void)
 {
 	int r;
 
+	kvm_set_cpu_caps();
+
 	r = kvm_x86_ops->hardware_setup();
 	if (r != 0)
 		return r;
-- 
2.24.1
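
[Editor's note] Stepping back, the intended lifecycle of kvm_cpu_caps
introduced here is roughly the following (condensed, not a literal
excerpt; the vendor-side calls land in later patches of the series):

  /* 1. kvm_arch_hardware_setup(), before ->hardware_setup(): seed the
   *    array from the host and apply the static KVM masks. */
  kvm_set_cpu_caps();	/* memcpy of boot_cpu_data + kvm_cpu_cap_mask() per leaf */

  /* 2. Vendor hardware setup then flips individual bits, e.g. */
  kvm_cpu_cap_set(X86_FEATURE_VMX);	/* if nested is enabled */
  kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);	/* if the VMCS control is missing */

  /* 3. KVM_GET_SUPPORTED_CPUID consumes the result per register. */
  cpuid_entry_mask(entry, CPUID_7_0_EBX);	/* entry->ebx &= kvm_cpu_caps[CPUID_7_0_EBX] */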


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (37 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 21:33   ` Vitaly Kuznetsov
  2020-02-25 15:10   ` Paolo Bonzini
  2020-02-01 18:51 ` [PATCH 40/61] KVM: VMX: " Sean Christopherson
                   ` (22 subsequent siblings)
  61 siblings, 2 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use the recently introduced KVM CPU caps to propagate SVM-only (kernel)
settings to supported CPUID flags.

Note, setting a flag based on a *different* feature is effectively
emulation, and so must be done at runtime via ->set_supported_cpuid().

Opportunistically add a technically unnecessary break and fix an
indentation issue in svm_set_supported_cpuid().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/svm.c | 40 +++++++++++++++++++++++-----------------
 1 file changed, 23 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 630520f8adfa..f98a192459f7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1350,6 +1350,25 @@ static __init void svm_adjust_mmio_mask(void)
 	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
 }
 
+static __init void svm_set_cpu_caps(void)
+{
+	/* CPUID 0x1 */
+	if (avic)
+		kvm_cpu_cap_clear(X86_FEATURE_X2APIC);
+
+	/* CPUID 0x80000001 */
+	if (nested)
+		kvm_cpu_cap_set(X86_FEATURE_SVM);
+
+	/* CPUID 0x8000000A */
+	/* Support next_rip if host supports it */
+	if (boot_cpu_has(X86_FEATURE_NRIPS))
+		kvm_cpu_cap_set(X86_FEATURE_NRIPS);
+
+	if (npt_enabled)
+		kvm_cpu_cap_set(X86_FEATURE_NPT);
+}
+
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
@@ -1462,6 +1481,8 @@ static __init int svm_hardware_setup(void)
 			pr_info("Virtual GIF supported\n");
 	}
 
+	svm_set_cpu_caps();
+
 	return 0;
 
 err:
@@ -6033,17 +6054,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
 	switch (entry->function) {
-	case 0x1:
-		if (avic)
-			cpuid_entry_clear(entry, X86_FEATURE_X2APIC);
-		break;
-	case 0x80000001:
-		if (nested)
-			cpuid_entry_set(entry, X86_FEATURE_SVM);
-		break;
 	case 0x80000008:
 		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
-		     boot_cpu_has(X86_FEATURE_AMD_SSBD))
+		    boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
 		break;
 	case 0x8000000A:
@@ -6053,14 +6066,7 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 		entry->ecx = 0; /* Reserved */
 		entry->edx = 0; /* Per default do not support any
 				   additional features */
-
-		/* Support next_rip if host supports it */
-		if (boot_cpu_has(X86_FEATURE_NRIPS))
-			cpuid_entry_set(entry, X86_FEATURE_NRIPS);
-
-		/* Support NPT for the guest if enabled */
-		if (npt_enabled)
-			cpuid_entry_set(entry, X86_FEATURE_NPT);
+		break;
 	}
 }
 
-- 
2.24.1
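
[Editor's note] The "different feature" case the changelog alludes to is
the VIRT_SSBD hunk that stays behind in svm_set_supported_cpuid(): the
advertised bit is derived from other host bits rather than copied from
kvm_cpu_caps.  In reduced form (taken from the hunk above):

  case 0x80000008:
          /* Advertise the virtualized SSBD interface when the host has
           * either SSBD flavor that KVM can apply on the guest's behalf. */
          if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
              boot_cpu_has(X86_FEATURE_AMD_SSBD))
                  cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
          break;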


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 40/61] KVM: VMX: Convert feature updates from CPUID to KVM cpu caps
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (38 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 21:40   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 41/61] KVM: x86: Move XSAVES CPUID adjust to VMX's KVM cpu cap update Sean Christopherson
                   ` (21 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use the recently introduced KVM CPU caps to propagate VMX-only (kernel)
settings to supported CPUID flags.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 51 ++++++++++++++++++++++++------------------
 1 file changed, 29 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 11b9c1e7e520..bae915431c72 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7102,37 +7102,42 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
 	switch (entry->function) {
-	case 0x1:
-		if (nested)
-			cpuid_entry_set(entry, X86_FEATURE_VMX);
-		break;
 	case 0x7:
-		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
-			cpuid_entry_set(entry, X86_FEATURE_MPX);
-		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
-			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
-		if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
-		    vmx_pt_mode_is_host_guest())
-			cpuid_entry_set(entry, X86_FEATURE_INTEL_PT);
 		if (vmx_umip_emulated())
 			cpuid_entry_set(entry, X86_FEATURE_UMIP);
-
-		/* PKU is not yet implemented for shadow paging. */
-		if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
-		    boot_cpu_has(X86_FEATURE_OSPKE))
-			cpuid_entry_set(entry, X86_FEATURE_PKU);
-		break;
-	case 0x80000001:
-		if (!cpu_has_vmx_rdtscp())
-			cpuid_entry_clear(entry, X86_FEATURE_RDTSCP);
-		if (enable_ept && !cpu_has_vmx_ept_1g_page())
-			cpuid_entry_clear(entry, X86_FEATURE_GBPAGES);
 		break;
 	default:
 		break;
 	}
 }
 
+static __init void vmx_set_cpu_caps(void)
+{
+	/* CPUID 0x1 */
+	if (nested)
+		kvm_cpu_cap_set(X86_FEATURE_VMX);
+
+	/* CPUID 0x7 */
+	if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
+		kvm_cpu_cap_set(X86_FEATURE_MPX);
+	if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
+		kvm_cpu_cap_set(X86_FEATURE_INVPCID);
+	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
+	    vmx_pt_mode_is_host_guest())
+		kvm_cpu_cap_set(X86_FEATURE_INTEL_PT);
+
+	/* PKU is not yet implemented for shadow paging. */
+	if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
+	    boot_cpu_has(X86_FEATURE_OSPKE))
+		kvm_cpu_cap_set(X86_FEATURE_PKU);
+
+	/* CPUID 0x80000001 */
+	if (!cpu_has_vmx_rdtscp())
+		kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
+	if (enable_ept && !cpu_has_vmx_ept_1g_page())
+		kvm_cpu_cap_clear(X86_FEATURE_GBPAGES);
+}
+
 static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
 {
 	to_vmx(vcpu)->req_immediate_exit = true;
@@ -7750,6 +7755,8 @@ static __init int hardware_setup(void)
 			return r;
 	}
 
+	vmx_set_cpu_caps();
+
 	r = alloc_kvm_area();
 	if (r)
 		nested_vmx_hardware_unsetup();
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 41/61] KVM: x86: Move XSAVES CPUID adjust to VMX's KVM cpu cap update
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (39 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 40/61] KVM: VMX: " Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 21:43   ` Vitaly Kuznetsov
  2020-02-01 18:51 ` [PATCH 42/61] KVM: x86: Add a helper to check kernel support when setting cpu cap Sean Christopherson
                   ` (20 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the clearing of the XSAVES CPUID bit into VMX, which has a separate
VMCS control to enable XSAVES in non-root, to eliminate the last ugly
remnant of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
pattern in the common CPUID handling code.

Drop ->xsaves_supported(); the CPUID adjustment was its only user.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 -
 arch/x86/kvm/cpuid.c            | 4 ----
 arch/x86/kvm/svm.c              | 6 ------
 arch/x86/kvm/vmx/vmx.c          | 5 ++++-
 4 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ba828569cda5..dd690fb5ceca 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1163,7 +1163,6 @@ struct kvm_x86_ops {
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
 		enum exit_fastpath_completion *exit_fastpath);
 
-	bool (*xsaves_supported)(void);
 	bool (*umip_emulated)(void);
 	bool (*pt_supported)(void);
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c2a4c9df49a9..77a6c1db138d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -626,10 +626,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			goto out;
 
 		cpuid_entry_mask(entry, CPUID_D_1_EAX);
-
-		if (!kvm_x86_ops->xsaves_supported())
-			cpuid_entry_clear(entry, X86_FEATURE_XSAVES);
-
 		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
 			entry->ebx = xstate_required_size(supported_xcr0, true);
 		else
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f98a192459f7..7cb05945162e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6080,11 +6080,6 @@ static bool svm_rdtscp_supported(void)
 	return boot_cpu_has(X86_FEATURE_RDTSCP);
 }
 
-static bool svm_xsaves_supported(void)
-{
-	return boot_cpu_has(X86_FEATURE_XSAVES);
-}
-
 static bool svm_umip_emulated(void)
 {
 	return false;
@@ -7455,7 +7450,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpuid_update = svm_cpuid_update,
 
 	.rdtscp_supported = svm_rdtscp_supported,
-	.xsaves_supported = svm_xsaves_supported,
 	.umip_emulated = svm_umip_emulated,
 	.pt_supported = svm_pt_supported,
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bae915431c72..cfd0ef314176 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7131,6 +7131,10 @@ static __init void vmx_set_cpu_caps(void)
 	    boot_cpu_has(X86_FEATURE_OSPKE))
 		kvm_cpu_cap_set(X86_FEATURE_PKU);
 
+	/* CPUID 0xD.1 */
+	if (!vmx_xsaves_supported())
+		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
+
 	/* CPUID 0x80000001 */
 	if (!cpu_has_vmx_rdtscp())
 		kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
@@ -7886,7 +7890,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
-	.xsaves_supported = vmx_xsaves_supported,
 	.umip_emulated = vmx_umip_emulated,
 	.pt_supported = vmx_pt_supported,
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 42/61] KVM: x86: Add a helper to check kernel support when setting cpu cap
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (40 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 41/61] KVM: x86: Move XSAVES CPUID adjust to VMX's KVM cpu cap update Sean Christopherson
@ 2020-02-01 18:51 ` Sean Christopherson
  2020-02-24 21:47   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved Sean Christopherson
                   ` (19 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Add a helper, kvm_cpu_cap_check_and_set(), to query boot_cpu_has() as
part of setting a KVM cpu capability.  VMX in particular has a number of
features that are dependent on both a VMCS capability and kernel
support.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.h   |  6 ++++++
 arch/x86/kvm/svm.c     |  3 +--
 arch/x86/kvm/vmx/vmx.c | 18 ++++++++----------
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index c64283582d96..7b71ae0ca05e 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -274,4 +274,10 @@ static __always_inline void kvm_cpu_cap_set(unsigned x86_feature)
 	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
 }
 
+static __always_inline void kvm_cpu_cap_check_and_set(unsigned x86_feature)
+{
+	if (boot_cpu_has(x86_feature))
+		kvm_cpu_cap_set(x86_feature);
+}
+
 #endif
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7cb05945162e..defb2c0dbf8a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1362,8 +1362,7 @@ static __init void svm_set_cpu_caps(void)
 
 	/* CPUID 0x8000000A */
 	/* Support next_rip if host supports it */
-	if (boot_cpu_has(X86_FEATURE_NRIPS))
-		kvm_cpu_cap_set(X86_FEATURE_NRIPS);
+	kvm_cpu_cap_check_and_set(X86_FEATURE_NRIPS);
 
 	if (npt_enabled)
 		kvm_cpu_cap_set(X86_FEATURE_NPT);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cfd0ef314176..cecf59225136 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7118,18 +7118,16 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_set(X86_FEATURE_VMX);
 
 	/* CPUID 0x7 */
-	if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
-		kvm_cpu_cap_set(X86_FEATURE_MPX);
-	if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
-		kvm_cpu_cap_set(X86_FEATURE_INVPCID);
-	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
-	    vmx_pt_mode_is_host_guest())
-		kvm_cpu_cap_set(X86_FEATURE_INTEL_PT);
+	if (kvm_mpx_supported())
+		kvm_cpu_cap_check_and_set(X86_FEATURE_MPX);
+	if (cpu_has_vmx_invpcid())
+		kvm_cpu_cap_check_and_set(X86_FEATURE_INVPCID);
+	if (vmx_pt_mode_is_host_guest())
+		kvm_cpu_cap_check_and_set(X86_FEATURE_INTEL_PT);
 
 	/* PKU is not yet implemented for shadow paging. */
-	if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
-	    boot_cpu_has(X86_FEATURE_OSPKE))
-		kvm_cpu_cap_set(X86_FEATURE_PKU);
+	if (enable_ept && boot_cpu_has(X86_FEATURE_OSPKE))
+		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
 
 	/* CPUID 0xD.1 */
 	if (!vmx_xsaves_supported())
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (41 preceding siblings ...)
  2020-02-01 18:51 ` [PATCH 42/61] KVM: x86: Add a helper to check kernel support when setting cpu cap Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-24 22:08   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 44/61] KVM: x86: Use KVM cpu caps to track UMIP emulation Sean Christopherson
                   ` (18 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Add accessor(s) for KVM cpu caps and use said accessor to detect
hardware support for LA57 instead of manually querying CPUID.
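
As a purely illustrative aside (standalone sketch, not kernel code; all
names are hypothetical), the accessors boil down to the usual word/bit
arithmetic over a flat array of 32-bit capability words: word index is
x86_feature / 32, bit within the word is x86_feature % 32.

  #include <stdint.h>
  #include <stdbool.h>
  #include <stdio.h>

  static uint32_t cpu_caps[32];                /* stand-in for kvm_cpu_caps */

  static uint32_t cap_get(unsigned feature)    /* cf. kvm_cpu_cap_get() */
  {
          return cpu_caps[feature / 32] & (1u << (feature % 32));
  }

  static bool cap_has(unsigned feature)        /* cf. kvm_cpu_cap_has() */
  {
          return cap_get(feature) != 0;
  }

  int main(void)
  {
          unsigned feature = 16 * 32 + 16;     /* hypothetical: word 16, bit 16 */

          cpu_caps[feature / 32] |= 1u << (feature % 32);
          printf("feature present: %d\n", cap_has(feature));
          return 0;
  }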

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.h | 13 +++++++++++++
 arch/x86/kvm/x86.c   |  2 +-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 7b71ae0ca05e..5ce4219d465f 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -274,6 +274,19 @@ static __always_inline void kvm_cpu_cap_set(unsigned x86_feature)
 	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
 }
 
+static __always_inline u32 kvm_cpu_cap_get(unsigned x86_feature)
+{
+	unsigned x86_leaf = x86_feature / 32;
+
+	reverse_cpuid_check(x86_leaf);
+	return kvm_cpu_caps[x86_leaf] & __feature_bit(x86_feature);
+}
+
+static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
+{
+	return kvm_cpu_cap_get(x86_feature);
+}
+
 static __always_inline void kvm_cpu_cap_check_and_set(unsigned x86_feature)
 {
 	if (boot_cpu_has(x86_feature))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c5ed199d6cd9..cb40737187a1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -912,7 +912,7 @@ static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
 {
 	u64 reserved_bits = __cr4_reserved_bits(cpu_has, c);
 
-	if (cpuid_ecx(0x7) & feature_bit(LA57))
+	if (kvm_cpu_cap_has(X86_FEATURE_LA57))
 		reserved_bits &= ~X86_CR4_LA57;
 
 	if (kvm_x86_ops->umip_emulated())
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 44/61] KVM: x86: Use KVM cpu caps to track UMIP emulation
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (42 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-24 22:13   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 45/61] KVM: x86: Fold CPUID 0x7 masking back into __do_cpuid_func() Sean Christopherson
                   ` (17 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Set UMIP in kvm_cpu_caps when it is emulated by VMX, even though the
bit will effectively be dropped by do_host_cpuid().  This allows
checking for UMIP emulation via kvm_cpu_caps instead of a dedicated
kvm_x86_ops callback.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 -
 arch/x86/kvm/svm.c              | 6 ------
 arch/x86/kvm/vmx/vmx.c          | 8 +++++++-
 arch/x86/kvm/x86.c              | 2 +-
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dd690fb5ceca..113b138a0347 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1163,7 +1163,6 @@ struct kvm_x86_ops {
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
 		enum exit_fastpath_completion *exit_fastpath);
 
-	bool (*umip_emulated)(void);
 	bool (*pt_supported)(void);
 
 	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index defb2c0dbf8a..e1ed5726964c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6079,11 +6079,6 @@ static bool svm_rdtscp_supported(void)
 	return boot_cpu_has(X86_FEATURE_RDTSCP);
 }
 
-static bool svm_umip_emulated(void)
-{
-	return false;
-}
-
 static bool svm_pt_supported(void)
 {
 	return false;
@@ -7449,7 +7444,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpuid_update = svm_cpuid_update,
 
 	.rdtscp_supported = svm_rdtscp_supported,
-	.umip_emulated = svm_umip_emulated,
 	.pt_supported = svm_pt_supported,
 
 	.set_supported_cpuid = svm_set_supported_cpuid,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cecf59225136..cd5a624610c9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7103,6 +7103,10 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
 	switch (entry->function) {
 	case 0x7:
+		/*
+		 * UMIP needs to be manually set even though vmx_set_cpu_caps()
+		 * also sets UMIP since do_host_cpuid() will drop it.
+		 */
 		if (vmx_umip_emulated())
 			cpuid_entry_set(entry, X86_FEATURE_UMIP);
 		break;
@@ -7129,6 +7133,9 @@ static __init void vmx_set_cpu_caps(void)
 	if (enable_ept && boot_cpu_has(X86_FEATURE_OSPKE))
 		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
 
+	if (vmx_umip_emulated())
+		kvm_cpu_cap_set(X86_FEATURE_UMIP);
+
 	/* CPUID 0xD.1 */
 	if (!vmx_xsaves_supported())
 		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
@@ -7888,7 +7895,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
-	.umip_emulated = vmx_umip_emulated,
 	.pt_supported = vmx_pt_supported,
 
 	.request_immediate_exit = vmx_request_immediate_exit,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cb40737187a1..a6d5f22c7ef6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -915,7 +915,7 @@ static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
 	if (kvm_cpu_cap_has(X86_FEATURE_LA57))
 		reserved_bits &= ~X86_CR4_LA57;
 
-	if (kvm_x86_ops->umip_emulated())
+	if (kvm_cpu_cap_has(X86_FEATURE_UMIP))
 		reserved_bits &= ~X86_CR4_UMIP;
 
 	return reserved_bits;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 45/61] KVM: x86: Fold CPUID 0x7 masking back into __do_cpuid_func()
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (43 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 44/61] KVM: x86: Use KVM cpu caps to track UMIP emulation Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-24 22:21   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 46/61] KVM: x86: Remove the unnecessary loop on CPUID 0x7 sub-leafs Sean Christopherson
                   ` (16 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move the CPUID 0x7 masking back into __do_cpuid_func() now that the
size of the code has been trimmed down significantly.

Tweak the WARN case, which is impossible to hit unless the CPU is
completely broken, to break the loop before creating the bogus entry.

Opportunistically reorder the cpuid_entry_set() calls and shorten the
comment about emulation to further reduce the footprint of CPUID 0x7.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 62 ++++++++++++++++----------------------------
 1 file changed, 22 insertions(+), 40 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 77a6c1db138d..7362e5238799 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -456,44 +456,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 	return 0;
 }
 
-static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
-{
-	switch (entry->index) {
-	case 0:
-		entry->eax = min(entry->eax, 1u);
-		cpuid_entry_mask(entry, CPUID_7_0_EBX);
-		/* TSC_ADJUST is emulated */
-		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
-		cpuid_entry_mask(entry, CPUID_7_ECX);
-		cpuid_entry_mask(entry, CPUID_7_EDX);
-		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
-			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
-		if (boot_cpu_has(X86_FEATURE_STIBP))
-			cpuid_entry_set(entry, X86_FEATURE_INTEL_STIBP);
-		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
-			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
-		/*
-		 * We emulate ARCH_CAPABILITIES in software even
-		 * if the host doesn't support it.
-		 */
-		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
-		break;
-	case 1:
-		cpuid_entry_mask(entry, CPUID_7_1_EAX);
-		entry->ebx = 0;
-		entry->ecx = 0;
-		entry->edx = 0;
-		break;
-	default:
-		WARN_ON_ONCE(1);
-		entry->eax = 0;
-		entry->ebx = 0;
-		entry->ecx = 0;
-		entry->edx = 0;
-		break;
-	}
-}
-
 static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 {
 	struct kvm_cpuid_entry2 *entry;
@@ -555,14 +517,34 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		break;
 	/* function 7 has additional index. */
 	case 7:
-		do_cpuid_7_mask(entry);
+		entry->eax = min(entry->eax, 1u);
+		cpuid_entry_mask(entry, CPUID_7_0_EBX);
+		cpuid_entry_mask(entry, CPUID_7_ECX);
+		cpuid_entry_mask(entry, CPUID_7_EDX);
+
+		/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
+		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
+		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
+
+		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
+			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
+		if (boot_cpu_has(X86_FEATURE_STIBP))
+			cpuid_entry_set(entry, X86_FEATURE_INTEL_STIBP);
+		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
+			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
 
 		for (i = 1, max_idx = entry->eax; i <= max_idx; i++) {
+			if (WARN_ON_ONCE(i > 1))
+				break;
+
 			entry = do_host_cpuid(array, function, i);
 			if (!entry)
 				goto out;
 
-			do_cpuid_7_mask(entry);
+			cpuid_entry_mask(entry, CPUID_7_1_EAX);
+			entry->ebx = 0;
+			entry->ecx = 0;
+			entry->edx = 0;
 		}
 		break;
 	case 9:
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 46/61] KVM: x86: Remove the unnecessary loop on CPUID 0x7 sub-leafs
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (44 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 45/61] KVM: x86: Fold CPUID 0x7 masking back into __do_cpuid_func() Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-24 22:25   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs Sean Christopherson
                   ` (15 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Explicitly handle CPUID 0x7 sub-leaf 1.  The kernel is currently aware
of exactly one feature in CPUID 0x7.1, which means there is room for
another 127 features before CPUID 0x7.2 will see the light of day, i.e.
the looping is likely to be dead code for years to come.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 7362e5238799..47f61f4497fb 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -533,11 +533,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
 
-		for (i = 1, max_idx = entry->eax; i <= max_idx; i++) {
-			if (WARN_ON_ONCE(i > 1))
-				break;
-
-			entry = do_host_cpuid(array, function, i);
+		/* KVM only supports 0x7.0 and 0x7.1, capped above via min(). */
+		if (entry->eax == 1) {
+			entry = do_host_cpuid(array, function, 1);
 			if (!entry)
 				goto out;
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (45 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 46/61] KVM: x86: Remove the unnecessary loop on CPUID 0x7 sub-leafs Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-24 22:35   ` Vitaly Kuznetsov
  2020-02-25 15:17   ` Paolo Bonzini
  2020-02-01 18:52 ` [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps Sean Christopherson
                   ` (14 subsequent siblings)
  61 siblings, 2 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Rework CPUID 0x2.0 to be a normal CPUID leaf if it returns "01" in AL,
i.e. EAX & 0xff.

Long ago, Intel documented CPUID 0x2.0 as being a stateful leaf, e.g. a
version of the SDM circa 1995 states:

  The least-significant byte in register EAX (register AL) indicates the
  number of times the CPUID instruction must be executed with an input
  value of 2 to get a complete description of the processor's caches
  and TLBs.  The Pentium Pro family of processors will return a 1.

A 2000 version of the SDM only updated the paragraph to reference
Intel's new processor family:

  The first member of the family of Pentium 4 processors will return a 1.

Fast forward to the present, and Intel's SDM now states:

  The least-significant byte in register EAX (register AL) will always
  return 01H.  Software should ignore this value and not interpret it as
  an information descriptor.

AMD's APM simply states that CPUID 0x2 is reserved.

Given that CPUID itself was introduced in the Pentium, odds are good
that the only Intel CPU family that *maybe* implemented a stateful CPUID
was the P5.  Which obviously did not support VMX, or KVM.

In other words, KVM's emulation of a stateful CPUID 0x2.0 has likely
been dead code from the day it was introduced.  This is backed up by
commit 0fdf8e59faa5c ("KVM: Fix cpuid iteration on multiple leaves per
eac"), whichs show that the stateful iteration code was completely
broken when it was introduced by commit 0771671749b59 ("KVM: Enhance
guest cpuid management"), i.e. not actually tested.

Although it's _extremely_ tempting to yank KVM's stateful code, leave it
in for now but annotate all its code paths as "unlikely".  The code is
relatively contained, and if by some miracle there is someone running KVM
on a CPU with a stateful CPUID 0x2, more power to 'em.
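
For what it's worth, a tiny standalone sketch (illustration only, not KVM
code) of what the two behaviors mean for a caller: under the old stateful
rule, AL dictates how many times CPUID(2, 0) must be executed, while on
modern CPUs AL is always 01H and the repeat loop never runs.

  #include <stdint.h>
  #include <stdio.h>

  /* Stub standing in for executing CPUID with EAX=2 on real hardware. */
  static void cpuid_2(uint32_t *eax, uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
  {
          *eax = 0x01;                 /* placeholder: AL == 01H, modern behavior */
          *ebx = *ecx = *edx = 0;
  }

  int main(void)
  {
          uint32_t eax, ebx, ecx, edx;
          unsigned i, iterations;

          cpuid_2(&eax, &ebx, &ecx, &edx);
          iterations = eax & 0xff;     /* old stateful rule: repeat this many times */

          for (i = 1; i < iterations; i++)
                  cpuid_2(&eax, &ebx, &ecx, &edx);   /* dead code when AL == 1 */

          printf("CPUID(2, 0) iterations: %u\n", iterations);
          return 0;
  }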

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 47f61f4497fb..ab2a34337588 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -405,9 +405,6 @@ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
 		    &entry->eax, &entry->ebx, &entry->ecx, &entry->edx);
 
 	switch (function) {
-	case 2:
-		entry->flags |= KVM_CPUID_FLAG_STATEFUL_FUNC;
-		break;
 	case 4:
 	case 7:
 	case 0xb:
@@ -483,17 +480,31 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		 * it since we emulate x2apic in software */
 		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
 		break;
-	/* function 2 entries are STATEFUL. That is, repeated cpuid commands
-	 * may return different values. This forces us to get_cpu() before
-	 * issuing the first command, and also to emulate this annoying behavior
-	 * in kvm_emulate_cpuid() using KVM_CPUID_FLAG_STATE_READ_NEXT */
 	case 2:
+		/*
+		 * On ancient CPUs, function 2 entries are STATEFUL.  That is,
+		 * CPUID(function=2, index=0) may return different results each
+		 * time, with the least-significant byte in EAX enumerating the
+		 * number of times software should do CPUID(2, 0).
+		 *
+		 * Modern CPUs (quite likely every CPU KVM has *ever* run on)
+		 * are less idiotic.  Intel's SDM states that EAX & 0xff "will
+		 * always return 01H. Software should ignore this value and not
+		 * interpret it as an informational descriptor", while AMD's
+		 * APM states that CPUID(2) is reserved.
+		 */
+		max_idx = entry->eax & 0xff;
+		if (likely(max_idx <= 1))
+			break;
+
+		entry->flags |= KVM_CPUID_FLAG_STATEFUL_FUNC;
 		entry->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
 
-		for (i = 1, max_idx = entry->eax & 0xff; i < max_idx; ++i) {
+		for (i = 1; i < max_idx; ++i) {
 			entry = do_host_cpuid(array, 2, 0);
 			if (!entry)
 				goto out;
+			entry->flags |= KVM_CPUID_FLAG_STATEFUL_FUNC;
 		}
 		break;
 	/* functions 4 and 0x8000001d have additional index. */
@@ -903,7 +914,7 @@ static int is_matching_cpuid_entry(struct kvm_cpuid_entry2 *e,
 		return 0;
 	if ((e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX) && e->index != index)
 		return 0;
-	if ((e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC) &&
+	if (unlikely(e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC) &&
 	    !(e->flags & KVM_CPUID_FLAG_STATE_READ_NEXT))
 		return 0;
 	return 1;
@@ -920,7 +931,7 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
 
 		e = &vcpu->arch.cpuid_entries[i];
 		if (is_matching_cpuid_entry(e, function, index)) {
-			if (e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC)
+			if (unlikely(e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC))
 				move_to_next_stateful_cpuid_entry(vcpu, i);
 			best = e;
 			break;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (46 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
       [not found]   ` <87o8tnmwni.fsf@vitty.brq.redhat.com>
  2020-02-25 15:18   ` Paolo Bonzini
  2020-02-01 18:52 ` [PATCH 49/61] KVM: x86: Override host CPUID results with kvm_cpu_caps Sean Christopherson
                   ` (13 subsequent siblings)
  61 siblings, 2 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Mask kvm_cpu_caps based on host CPUID in preparation for overriding the
CPUID results during KVM_GET_SUPPORTED_CPUID instead of doing the
masking at runtime.

Note, masking may or may not be necessary, e.g. the kernel rarely, if
ever, sets real CPUID bits that are not supported by hardware.  But, the
code is cheap and only runs once at load, so an abundance of caution is
warranted.
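
For illustration, a minimal standalone sketch (hypothetical names, not the
kernel implementation) of the two masking steps this combines: the
developer-supplied mask of features KVM can virtualize, then the raw host
CPUID output for the same leaf.

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t cpu_caps[32];        /* stand-in for kvm_cpu_caps */

  /* Stub standing in for cpuid_count(); real code reads the host register. */
  static uint32_t host_cpuid_reg(unsigned leaf)
  {
          (void)leaf;
          return 0x000000f0;           /* placeholder host feature bits */
  }

  static void cap_mask(unsigned leaf, uint32_t mask)
  {
          cpu_caps[leaf] &= mask;                   /* what KVM knows how to expose */
          cpu_caps[leaf] &= host_cpuid_reg(leaf);   /* ... and what the host has */
  }

  int main(void)
  {
          cpu_caps[5] = ~0u;           /* pretend boot_cpu_data seeded every bit */
          cap_mask(5, 0x000000ff);
          printf("caps[5] = 0x%08x\n", cpu_caps[5]);   /* 0x000000f0 */
          return 0;
  }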

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index ab2a34337588..4416f2422321 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -272,8 +272,22 @@ static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
 
 static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
 {
+	const struct cpuid_reg cpuid = x86_feature_cpuid(leaf * 32);
+	struct kvm_cpuid_entry2 entry;
+
 	reverse_cpuid_check(leaf);
 	kvm_cpu_caps[leaf] &= mask;
+
+#ifdef CONFIG_KVM_CPUID_AUDIT
+	/* Entry needs to be fully populated when auditing is enabled. */
+	entry.function = cpuid.function;
+	entry.index = cpuid.index;
+#endif
+
+	cpuid_count(cpuid.function, cpuid.index,
+		    &entry.eax, &entry.ebx, &entry.ecx, &entry.edx);
+
+	kvm_cpu_caps[leaf] &= *__cpuid_entry_get_reg(&entry, &cpuid);
 }
 
 void kvm_set_cpu_caps(void)
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 49/61] KVM: x86: Override host CPUID results with kvm_cpu_caps
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (47 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-24 22:57   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 50/61] KVM: x86: Set emulated/transmuted feature bits via kvm_cpu_caps Sean Christopherson
                   ` (12 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Override CPUID entries with kvm_cpu_caps during KVM_GET_SUPPORTED_CPUID
instead of masking the host CPUID result, which is redundant now that
the host CPUID is incorporated into kvm_cpu_caps at load time.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 4416f2422321..871c0bd04e19 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -261,13 +261,13 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
-					     enum cpuid_leafs leaf)
+static __always_inline void cpuid_entry_override(struct kvm_cpuid_entry2 *entry,
+						 enum cpuid_leafs leaf)
 {
 	u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
 
 	BUILD_BUG_ON(leaf > ARRAY_SIZE(kvm_cpu_caps));
-	*reg &= kvm_cpu_caps[leaf];
+	*reg = kvm_cpu_caps[leaf];
 }
 
 static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
@@ -488,8 +488,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = min(entry->eax, 0x1fU);
 		break;
 	case 1:
-		cpuid_entry_mask(entry, CPUID_1_EDX);
-		cpuid_entry_mask(entry, CPUID_1_ECX);
+		cpuid_entry_override(entry, CPUID_1_EDX);
+		cpuid_entry_override(entry, CPUID_1_ECX);
 		/* we support x2apic emulation even if host does not support
 		 * it since we emulate x2apic in software */
 		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
@@ -543,9 +543,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 	/* function 7 has additional index. */
 	case 7:
 		entry->eax = min(entry->eax, 1u);
-		cpuid_entry_mask(entry, CPUID_7_0_EBX);
-		cpuid_entry_mask(entry, CPUID_7_ECX);
-		cpuid_entry_mask(entry, CPUID_7_EDX);
+		cpuid_entry_override(entry, CPUID_7_0_EBX);
+		cpuid_entry_override(entry, CPUID_7_ECX);
+		cpuid_entry_override(entry, CPUID_7_EDX);
 
 		/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
 		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
@@ -564,7 +564,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			if (!entry)
 				goto out;
 
-			cpuid_entry_mask(entry, CPUID_7_1_EAX);
+			cpuid_entry_override(entry, CPUID_7_1_EAX);
 			entry->ebx = 0;
 			entry->ecx = 0;
 			entry->edx = 0;
@@ -630,7 +630,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		if (!entry)
 			goto out;
 
-		cpuid_entry_mask(entry, CPUID_D_1_EAX);
+		cpuid_entry_override(entry, CPUID_D_1_EAX);
 		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
 			entry->ebx = xstate_required_size(supported_xcr0, true);
 		else
@@ -709,8 +709,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = min(entry->eax, 0x8000001f);
 		break;
 	case 0x80000001:
-		cpuid_entry_mask(entry, CPUID_8000_0001_EDX);
-		cpuid_entry_mask(entry, CPUID_8000_0001_ECX);
+		cpuid_entry_override(entry, CPUID_8000_0001_EDX);
+		cpuid_entry_override(entry, CPUID_8000_0001_ECX);
 		break;
 	case 0x80000007: /* Advanced power management */
 		/* invariant TSC is CPUID.80000007H:EDX[8] */
@@ -728,7 +728,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			g_phys_as = phys_as;
 		entry->eax = g_phys_as | (virt_as << 8);
 		entry->edx = 0;
-		cpuid_entry_mask(entry, CPUID_8000_0008_EBX);
+		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
 		/*
 		 * AMD has separate bits for each SPEC_CTRL bit.
 		 * arch/x86/kernel/cpu/bugs.c is kind enough to
@@ -770,7 +770,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = min(entry->eax, 0xC0000004);
 		break;
 	case 0xC0000001:
-		cpuid_entry_mask(entry, CPUID_C000_0001_EDX);
+		cpuid_entry_override(entry, CPUID_C000_0001_EDX);
 		break;
 	case 3: /* Processor serial number */
 	case 5: /* MONITOR/MWAIT */
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 50/61] KVM: x86: Set emulated/transmuted feature bits via kvm_cpu_caps
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (48 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 49/61] KVM: x86: Override host CPUID results with kvm_cpu_caps Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 13:59   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 51/61] KVM: x86: Use kvm_cpu_caps to detect Intel PT support Sean Christopherson
                   ` (11 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Set emulated and transmuted (set based on other features) feature bits
via kvm_cpu_caps now that the CPUID output for KVM_GET_SUPPORTED_CPUID
is directly overridden with kvm_cpu_caps.

Note, VMX emulation of UMIP already sets kvm_cpu_caps.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c   | 72 +++++++++++++++++++++---------------------
 arch/x86/kvm/svm.c     | 10 +++---
 arch/x86/kvm/vmx/vmx.c | 13 +-------
 3 files changed, 42 insertions(+), 53 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 871c0bd04e19..a37cb6fda979 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -341,6 +341,8 @@ void kvm_set_cpu_caps(void)
 		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
 		F(F16C) | F(RDRAND)
 	);
+	/* KVM emulates x2apic in software irrespective of host support. */
+	kvm_cpu_cap_set(X86_FEATURE_X2APIC);
 
 	kvm_cpu_cap_mask(CPUID_7_0_EBX,
 		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
@@ -366,6 +368,17 @@ void kvm_set_cpu_caps(void)
 		F(MD_CLEAR)
 	);
 
+	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
+	kvm_cpu_cap_set(X86_FEATURE_TSC_ADJUST);
+	kvm_cpu_cap_set(X86_FEATURE_ARCH_CAPABILITIES);
+
+	if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
+		kvm_cpu_cap_set(X86_FEATURE_SPEC_CTRL);
+	if (boot_cpu_has(X86_FEATURE_STIBP))
+		kvm_cpu_cap_set(X86_FEATURE_INTEL_STIBP);
+	if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
+		kvm_cpu_cap_set(X86_FEATURE_SPEC_CTRL_SSBD);
+
 	kvm_cpu_cap_mask(CPUID_7_1_EAX,
 		F(AVX512_BF16)
 	);
@@ -388,6 +401,29 @@ void kvm_set_cpu_caps(void)
 		F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON)
 	);
 
+	/*
+	 * AMD has separate bits for each SPEC_CTRL bit.
+	 * arch/x86/kernel/cpu/bugs.c is kind enough to
+	 * record that in cpufeatures so use them.
+	 */
+	if (boot_cpu_has(X86_FEATURE_IBPB))
+		kvm_cpu_cap_set(X86_FEATURE_AMD_IBPB);
+	if (boot_cpu_has(X86_FEATURE_IBRS))
+		kvm_cpu_cap_set(X86_FEATURE_AMD_IBRS);
+	if (boot_cpu_has(X86_FEATURE_STIBP))
+		kvm_cpu_cap_set(X86_FEATURE_AMD_STIBP);
+	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
+		kvm_cpu_cap_set(X86_FEATURE_AMD_SSBD);
+	if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+		kvm_cpu_cap_set(X86_FEATURE_AMD_SSB_NO);
+	/*
+	 * The preference is to use SPEC CTRL MSR instead of the
+	 * VIRT_SPEC MSR.
+	 */
+	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
+	    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
+		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
+
 	kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
 		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
 		F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
@@ -490,9 +526,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 	case 1:
 		cpuid_entry_override(entry, CPUID_1_EDX);
 		cpuid_entry_override(entry, CPUID_1_ECX);
-		/* we support x2apic emulation even if host does not support
-		 * it since we emulate x2apic in software */
-		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
 		break;
 	case 2:
 		/*
@@ -547,17 +580,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		cpuid_entry_override(entry, CPUID_7_ECX);
 		cpuid_entry_override(entry, CPUID_7_EDX);
 
-		/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
-		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
-		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
-
-		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
-			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
-		if (boot_cpu_has(X86_FEATURE_STIBP))
-			cpuid_entry_set(entry, X86_FEATURE_INTEL_STIBP);
-		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
-			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
-
 		/* KVM only supports 0x7.0 and 0x7.1, capped above via min(). */
 		if (entry->eax == 1) {
 			entry = do_host_cpuid(array, function, 1);
@@ -729,28 +751,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = g_phys_as | (virt_as << 8);
 		entry->edx = 0;
 		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
-		/*
-		 * AMD has separate bits for each SPEC_CTRL bit.
-		 * arch/x86/kernel/cpu/bugs.c is kind enough to
-		 * record that in cpufeatures so use them.
-		 */
-		if (boot_cpu_has(X86_FEATURE_IBPB))
-			cpuid_entry_set(entry, X86_FEATURE_AMD_IBPB);
-		if (boot_cpu_has(X86_FEATURE_IBRS))
-			cpuid_entry_set(entry, X86_FEATURE_AMD_IBRS);
-		if (boot_cpu_has(X86_FEATURE_STIBP))
-			cpuid_entry_set(entry, X86_FEATURE_AMD_STIBP);
-		if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
-			cpuid_entry_set(entry, X86_FEATURE_AMD_SSBD);
-		if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
-			cpuid_entry_set(entry, X86_FEATURE_AMD_SSB_NO);
-		/*
-		 * The preference is to use SPEC CTRL MSR instead of the
-		 * VIRT_SPEC MSR.
-		 */
-		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
-		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
-			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
 		break;
 	}
 	case 0x80000019:
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index e1ed5726964c..f4434816dcdf 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1360,6 +1360,11 @@ static __init void svm_set_cpu_caps(void)
 	if (nested)
 		kvm_cpu_cap_set(X86_FEATURE_SVM);
 
+	/* CPUID 0x80000008 */
+	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
+	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
+		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
+
 	/* CPUID 0x8000000A */
 	/* Support next_rip if host supports it */
 	kvm_cpu_cap_check_and_set(X86_FEATURE_NRIPS);
@@ -6053,11 +6058,6 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
 	switch (entry->function) {
-	case 0x80000008:
-		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
-		    boot_cpu_has(X86_FEATURE_AMD_SSBD))
-			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
-		break;
 	case 0x8000000A:
 		entry->eax = 1; /* SVM revision 1 */
 		entry->ebx = 8; /* Lets support 8 ASIDs in case we add proper
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cd5a624610c9..2a1df1b714db 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7101,18 +7101,7 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 
 static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 {
-	switch (entry->function) {
-	case 0x7:
-		/*
-		 * UMIP needs to be manually set even though vmx_set_cpu_caps()
-		 * also sets UMIP since do_host_cpuid() will drop it.
-		 */
-		if (vmx_umip_emulated())
-			cpuid_entry_set(entry, X86_FEATURE_UMIP);
-		break;
-	default:
-		break;
-	}
+
 }
 
 static __init void vmx_set_cpu_caps(void)
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 51/61] KVM: x86: Use kvm_cpu_caps to detect Intel PT support
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (49 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 50/61] KVM: x86: Set emulated/transmuted feature bits via kvm_cpu_caps Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:06   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 52/61] KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support Sean Christopherson
                   ` (10 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Check for Intel PT using kvm_cpu_cap_has() to pave the way toward
eliminating ->pt_supported().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/cpuid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a37cb6fda979..3d287fc6eb6e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -507,7 +507,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 {
 	struct kvm_cpuid_entry2 *entry;
 	int r, i, max_idx;
-	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 
 	/* all calls to cpuid_count() should be made on the same cpu */
 	get_cpu();
@@ -687,7 +686,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		break;
 	/* Intel PT */
 	case 0x14:
-		if (!f_intel_pt) {
+		if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT)) {
 			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
 			break;
 		}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 52/61] KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (50 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 51/61] KVM: x86: Use kvm_cpu_caps to detect Intel PT support Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:08   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 53/61] KVM: VMX: Directly use VMX capabilities helper to detect RDTSCP support Sean Christopherson
                   ` (9 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Check for MSR_TSC_AUX virtualization via kvm_cpu_cap_has() and drop
->rdtscp_supported().

Note, vmx_rdtscp_supported() needs to hang around a tiny bit longer due to
other usage in VMX code.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 -
 arch/x86/kvm/svm.c              | 6 ------
 arch/x86/kvm/vmx/vmx.c          | 3 ---
 arch/x86/kvm/x86.c              | 2 +-
 4 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 113b138a0347..1dd5ac8a2136 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1143,7 +1143,6 @@ struct kvm_x86_ops {
 	int (*get_tdp_level)(struct kvm_vcpu *vcpu);
 	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 	int (*get_lpage_level)(void);
-	bool (*rdtscp_supported)(void);
 
 	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f4434816dcdf..6dd9c810c0dc 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6074,11 +6074,6 @@ static int svm_get_lpage_level(void)
 	return PT_PDPE_LEVEL;
 }
 
-static bool svm_rdtscp_supported(void)
-{
-	return boot_cpu_has(X86_FEATURE_RDTSCP);
-}
-
 static bool svm_pt_supported(void)
 {
 	return false;
@@ -7443,7 +7438,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.cpuid_update = svm_cpuid_update,
 
-	.rdtscp_supported = svm_rdtscp_supported,
 	.pt_supported = svm_pt_supported,
 
 	.set_supported_cpuid = svm_set_supported_cpuid,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2a1df1b714db..c3577f11f538 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7870,9 +7870,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.get_lpage_level = vmx_get_lpage_level,
 
 	.cpuid_update = vmx_cpuid_update,
-
-	.rdtscp_supported = vmx_rdtscp_supported,
-
 	.set_supported_cpuid = vmx_set_supported_cpuid,
 
 	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6d5f22c7ef6..e4353c03269c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5246,7 +5246,7 @@ static void kvm_init_msr_list(void)
 				continue;
 			break;
 		case MSR_TSC_AUX:
-			if (!kvm_x86_ops->rdtscp_supported())
+			if (!kvm_cpu_cap_has(X86_FEATURE_RDTSCP))
 				continue;
 			break;
 		case MSR_IA32_RTIT_CTL:
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 53/61] KVM: VMX: Directly use VMX capabilities helper to detect RDTSCP support
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (51 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 52/61] KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:10   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 54/61] KVM: x86: Check for Intel PT MSR virtualization using KVM cpu caps Sean Christopherson
                   ` (8 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use cpu_has_vmx_rdtscp() directly when computing secondary exec controls
and drop the now defunct vmx_rdtscp_supported().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c3577f11f538..98d54cfa0cbe 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1651,11 +1651,6 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 	vmx_clear_hlt(vcpu);
 }
 
-static bool vmx_rdtscp_supported(void)
-{
-	return cpu_has_vmx_rdtscp();
-}
-
 /*
  * Swap MSR entry in host/guest MSR entry array.
  */
@@ -4051,7 +4046,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 		}
 	}
 
-	if (vmx_rdtscp_supported()) {
+	if (cpu_has_vmx_rdtscp()) {
 		bool rdtscp_enabled = guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP);
 		if (!rdtscp_enabled)
 			exec_control &= ~SECONDARY_EXEC_RDTSCP;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 54/61] KVM: x86: Check for Intel PT MSR virtualization using KVM cpu caps
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (52 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 53/61] KVM: VMX: Directly use VMX capabilities helper to detect RDTSCP support Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:11   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 55/61] KVM: VMX: Directly query Intel PT mode when refreshing PMUs Sean Christopherson
                   ` (7 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use kvm_cpu_cap_has() to check for Intel PT when processing the list of
virtualized MSRs to pave the way toward removing ->pt_supported().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/x86.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e4353c03269c..9d38dcdbb613 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5251,23 +5251,23 @@ static void kvm_init_msr_list(void)
 			break;
 		case MSR_IA32_RTIT_CTL:
 		case MSR_IA32_RTIT_STATUS:
-			if (!kvm_x86_ops->pt_supported())
+			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT))
 				continue;
 			break;
 		case MSR_IA32_RTIT_CR3_MATCH:
-			if (!kvm_x86_ops->pt_supported() ||
+			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT) ||
 			    !intel_pt_validate_hw_cap(PT_CAP_cr3_filtering))
 				continue;
 			break;
 		case MSR_IA32_RTIT_OUTPUT_BASE:
 		case MSR_IA32_RTIT_OUTPUT_MASK:
-			if (!kvm_x86_ops->pt_supported() ||
+			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT) ||
 				(!intel_pt_validate_hw_cap(PT_CAP_topa_output) &&
 				 !intel_pt_validate_hw_cap(PT_CAP_single_range_output)))
 				continue;
 			break;
 		case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B: {
-			if (!kvm_x86_ops->pt_supported() ||
+			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT) ||
 				msrs_to_save_all[i] - MSR_IA32_RTIT_ADDR0_A >=
 				intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2)
 				continue;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 55/61] KVM: VMX: Directly query Intel PT mode when refreshing PMUs
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (53 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 54/61] KVM: x86: Check for Intel PT MSR virtualization using KVM cpu caps Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:16   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 56/61] KVM: SVM: Refactor logging of NPT enabled/disabled Sean Christopherson
                   ` (6 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use vmx_pt_mode_is_host_guest() in intel_pmu_refresh() instead of
bouncing through kvm_x86_ops->pt_supported, and remove ->pt_supported()
as the PMU code was the last remaining user.

Opportunistically clean up the wording of a comment that referenced
kvm_x86_ops->pt_supported().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 2 --
 arch/x86/kvm/svm.c              | 7 -------
 arch/x86/kvm/vmx/pmu_intel.c    | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 6 ------
 arch/x86/kvm/x86.c              | 7 +++----
 5 files changed, 4 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1dd5ac8a2136..a8bae9d88bce 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1162,8 +1162,6 @@ struct kvm_x86_ops {
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
 		enum exit_fastpath_completion *exit_fastpath);
 
-	bool (*pt_supported)(void);
-
 	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
 	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 6dd9c810c0dc..a27f83f7521c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6074,11 +6074,6 @@ static int svm_get_lpage_level(void)
 	return PT_PDPE_LEVEL;
 }
 
-static bool svm_pt_supported(void)
-{
-	return false;
-}
-
 static bool svm_has_wbinvd_exit(void)
 {
 	return true;
@@ -7438,8 +7433,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.cpuid_update = svm_cpuid_update,
 
-	.pt_supported = svm_pt_supported,
-
 	.set_supported_cpuid = svm_set_supported_cpuid,
 
 	.has_wbinvd_exit = svm_has_wbinvd_exit,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 34a3a17bb6d7..d8f5cb312b9d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -330,7 +330,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->global_ovf_ctrl_mask = pmu->global_ctrl_mask
 			& ~(MSR_CORE_PERF_GLOBAL_OVF_CTRL_OVF_BUF |
 			    MSR_CORE_PERF_GLOBAL_OVF_CTRL_COND_CHGD);
-	if (kvm_x86_ops->pt_supported())
+	if (vmx_pt_mode_is_host_guest())
 		pmu->global_ovf_ctrl_mask &=
 				~MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 98d54cfa0cbe..e6284b6aac56 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6283,11 +6283,6 @@ static bool vmx_has_emulated_msr(int index)
 	}
 }
 
-static bool vmx_pt_supported(void)
-{
-	return vmx_pt_mode_is_host_guest();
-}
-
 static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 {
 	u32 exit_intr_info;
@@ -7876,7 +7871,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
-	.pt_supported = vmx_pt_supported,
 
 	.request_immediate_exit = vmx_request_immediate_exit,
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9d38dcdbb613..144143a57d0b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2805,10 +2805,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
 			return 1;
 		/*
-		 * We do support PT if kvm_x86_ops->pt_supported(), but we do
-		 * not support IA32_XSS[bit 8]. Guests will have to use
-		 * RDMSR/WRMSR rather than XSAVES/XRSTORS to save/restore PT
-		 * MSRs.
+		 * KVM supports exposing PT to the guest, but does not support
+		 * IA32_XSS[bit 8]. Guests have to use RDMSR/WRMSR rather than
+		 * XSAVES/XRSTORS to save/restore PT MSRs.
 		 */
 		if (data != 0)
 			return 1;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 56/61] KVM: SVM: Refactor logging of NPT enabled/disabled
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (54 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 55/61] KVM: VMX: Directly query Intel PT mode when refreshing PMUs Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:21   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 57/61] KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function Sean Christopherson
                   ` (5 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Tweak SVM's logging of NPT enabled/disabled to handle the logging in a
single pr_info() in preparation for merging kvm_enable_tdp() and
kvm_disable_tdp() into a single function.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/svm.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a27f83f7521c..80962c1eea8f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1440,16 +1440,14 @@ static __init int svm_hardware_setup(void)
 	if (!boot_cpu_has(X86_FEATURE_NPT))
 		npt_enabled = false;
 
-	if (npt_enabled && !npt) {
-		printk(KERN_INFO "kvm: Nested Paging disabled\n");
+	if (npt_enabled && !npt)
 		npt_enabled = false;
-	}
 
-	if (npt_enabled) {
-		printk(KERN_INFO "kvm: Nested Paging enabled\n");
+	if (npt_enabled)
 		kvm_enable_tdp();
-	} else
+	else
 		kvm_disable_tdp();
+	pr_info("kvm: Nested Paging %sabled\n", npt_enabled ? "en" : "dis");
 
 	if (nrips) {
 		if (!boot_cpu_has(X86_FEATURE_NRIPS))
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 57/61] KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (55 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 56/61] KVM: SVM: Refactor logging of NPT enabled/disabled Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:27   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup Sean Christopherson
                   ` (4 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Combine kvm_enable_tdp() and kvm_disable_tdp() into a single function,
kvm_configure_mmu(), in preparation for doing additional configuration
during hardware setup.  And because having separate helpers is silly.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +--
 arch/x86/kvm/mmu/mmu.c          | 13 +++----------
 arch/x86/kvm/svm.c              |  5 +----
 arch/x86/kvm/vmx/vmx.c          |  4 +---
 4 files changed, 6 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a8bae9d88bce..1a13a53bbaeb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1494,8 +1494,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush);
 
-void kvm_enable_tdp(void);
-void kvm_disable_tdp(void);
+void kvm_configure_mmu(bool enable_tdp);
 
 static inline gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 				  struct x86_exception *exception)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 84eeb61d06aa..08c80c7c88d4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5541,18 +5541,11 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invpcid_gva);
 
-void kvm_enable_tdp(void)
+void kvm_configure_mmu(bool enable_tdp)
 {
-	tdp_enabled = true;
+	tdp_enabled = enable_tdp;
 }
-EXPORT_SYMBOL_GPL(kvm_enable_tdp);
-
-void kvm_disable_tdp(void)
-{
-	tdp_enabled = false;
-}
-EXPORT_SYMBOL_GPL(kvm_disable_tdp);
-
+EXPORT_SYMBOL_GPL(kvm_configure_mmu);
 
 /* The return value indicates if tlb flush on all vcpus is needed. */
 typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 80962c1eea8f..19dc74ae1efb 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1443,10 +1443,7 @@ static __init int svm_hardware_setup(void)
 	if (npt_enabled && !npt)
 		npt_enabled = false;
 
-	if (npt_enabled)
-		kvm_enable_tdp();
-	else
-		kvm_disable_tdp();
+	kvm_configure_mmu(npt_enabled);
 	pr_info("kvm: Nested Paging %sabled\n", npt_enabled ? "en" : "dis");
 
 	if (nrips) {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e6284b6aac56..59206c22b5e1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5295,7 +5295,6 @@ static void vmx_enable_tdp(void)
 		VMX_EPT_RWX_MASK, 0ull);
 
 	ept_set_mmio_spte_mask();
-	kvm_enable_tdp();
 }
 
 /*
@@ -7678,8 +7677,7 @@ static __init int hardware_setup(void)
 
 	if (enable_ept)
 		vmx_enable_tdp();
-	else
-		kvm_disable_tdp();
+	kvm_configure_mmu(enable_ept);
 
 	/*
 	 * Only enable PML when hardware supports PML feature, and both EPT
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (56 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 57/61] KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:43   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 59/61] KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage Sean Christopherson
                   ` (3 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Configure the max page level during hardware setup to avoid a retpoline
in the page fault handler.  Drop ->get_lpage_level() as the page fault
handler was the last user.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +--
 arch/x86/kvm/mmu/mmu.c          | 13 +++++++++++--
 arch/x86/kvm/svm.c              |  9 +--------
 arch/x86/kvm/vmx/vmx.c          | 24 +++++++++++-------------
 4 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1a13a53bbaeb..4165d3ef11e4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1142,7 +1142,6 @@ struct kvm_x86_ops {
 	int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
 	int (*get_tdp_level)(struct kvm_vcpu *vcpu);
 	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
-	int (*get_lpage_level)(void);
 
 	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
 
@@ -1494,7 +1493,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush);
 
-void kvm_configure_mmu(bool enable_tdp);
+void kvm_configure_mmu(bool enable_tdp, int tdp_page_level);
 
 static inline gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 				  struct x86_exception *exception)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 08c80c7c88d4..1aedb71e7a20 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -86,6 +86,8 @@ __MODULE_PARM_TYPE(nx_huge_pages_recovery_ratio, "uint");
  */
 bool tdp_enabled = false;
 
+static int max_page_level __read_mostly;
+
 enum {
 	AUDIT_PRE_PAGE_FAULT,
 	AUDIT_POST_PAGE_FAULT,
@@ -3280,7 +3282,7 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	if (!slot)
 		return PT_PAGE_TABLE_LEVEL;
 
-	max_level = min(max_level, kvm_x86_ops->get_lpage_level());
+	max_level = min(max_level, max_page_level);
 	for ( ; max_level > PT_PAGE_TABLE_LEVEL; max_level--) {
 		linfo = lpage_info_slot(gfn, slot, max_level);
 		if (!linfo->disallow_lpage)
@@ -5541,9 +5543,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invpcid_gva);
 
-void kvm_configure_mmu(bool enable_tdp)
+void kvm_configure_mmu(bool enable_tdp, int tdp_page_level)
 {
 	tdp_enabled = enable_tdp;
+
+	if (tdp_enabled)
+		max_page_level = tdp_page_level;
+	else if (boot_cpu_has(X86_FEATURE_GBPAGES))
+		max_page_level = PT_PDPE_LEVEL;
+	else
+		max_page_level = PT_DIRECTORY_LEVEL;
 }
 EXPORT_SYMBOL_GPL(kvm_configure_mmu);
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 19dc74ae1efb..76c24b3491f6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1443,7 +1443,7 @@ static __init int svm_hardware_setup(void)
 	if (npt_enabled && !npt)
 		npt_enabled = false;
 
-	kvm_configure_mmu(npt_enabled);
+	kvm_configure_mmu(npt_enabled, PT_PDPE_LEVEL);
 	pr_info("kvm: Nested Paging %sabled\n", npt_enabled ? "en" : "dis");
 
 	if (nrips) {
@@ -6064,11 +6064,6 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
 	}
 }
 
-static int svm_get_lpage_level(void)
-{
-	return PT_PDPE_LEVEL;
-}
-
 static bool svm_has_wbinvd_exit(void)
 {
 	return true;
@@ -7424,8 +7419,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.get_exit_info = svm_get_exit_info,
 
-	.get_lpage_level = svm_get_lpage_level,
-
 	.cpuid_update = svm_cpuid_update,
 
 	.set_supported_cpuid = svm_set_supported_cpuid,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 59206c22b5e1..3ad24ca692a6 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6889,15 +6889,6 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat;
 }
 
-static int vmx_get_lpage_level(void)
-{
-	if (enable_ept && !cpu_has_vmx_ept_1g_page())
-		return PT_DIRECTORY_LEVEL;
-	else
-		/* For shadow and EPT supported 1GB page */
-		return PT_PDPE_LEVEL;
-}
-
 static void vmcs_set_secondary_exec_control(struct vcpu_vmx *vmx)
 {
 	/*
@@ -7584,7 +7575,7 @@ static __init int hardware_setup(void)
 {
 	unsigned long host_bndcfgs;
 	struct desc_ptr dt;
-	int r, i;
+	int r, i, ept_lpage_level;
 
 	rdmsrl_safe(MSR_EFER, &host_efer);
 
@@ -7677,7 +7668,16 @@ static __init int hardware_setup(void)
 
 	if (enable_ept)
 		vmx_enable_tdp();
-	kvm_configure_mmu(enable_ept);
+
+	if (!enable_ept)
+		ept_lpage_level = 0;
+	else if (cpu_has_vmx_ept_1g_page())
+		ept_lpage_level = PT_PDPE_LEVEL;
+	else if (cpu_has_vmx_ept_2m_page())
+		ept_lpage_level = PT_DIRECTORY_LEVEL;
+	else
+		ept_lpage_level = PT_PAGE_TABLE_LEVEL;
+	kvm_configure_mmu(enable_ept, ept_lpage_level);
 
 	/*
 	 * Only enable PML when hardware supports PML feature, and both EPT
@@ -7855,8 +7855,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.get_exit_info = vmx_get_exit_info,
 
-	.get_lpage_level = vmx_get_lpage_level,
-
 	.cpuid_update = vmx_cpuid_update,
 	.set_supported_cpuid = vmx_set_supported_cpuid,
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 59/61] KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (57 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:55   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 60/61] KVM: Drop largepages_enabled and its accessor/mutator Sean Christopherson
                   ` (2 subsequent siblings)
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Stop propagating MMU large page support into a memslot's disallow_lpage
now that the MMU's max_page_level handles the scenario where VMX's EPT is
enabled and EPT doesn't support 2M pages.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 3 ---
 arch/x86/kvm/x86.c     | 6 ++----
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3ad24ca692a6..e349689ac0cf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7633,9 +7633,6 @@ static __init int hardware_setup(void)
 	if (!cpu_has_vmx_tpr_shadow())
 		kvm_x86_ops->update_cr8_intercept = NULL;
 
-	if (enable_ept && !cpu_has_vmx_ept_2m_page())
-		kvm_disable_largepages();
-
 #if IS_ENABLED(CONFIG_HYPERV)
 	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
 	    && enable_ept) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 144143a57d0b..b40488fd2969 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9884,11 +9884,9 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 		ugfn = slot->userspace_addr >> PAGE_SHIFT;
 		/*
 		 * If the gfn and userspace address are not aligned wrt each
-		 * other, or if explicitly asked to, disable large page
-		 * support for this slot
+		 * other, disable large page support for this slot.
 		 */
-		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
-		    !kvm_largepages_enabled()) {
+		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
 			unsigned long j;
 
 			for (j = 0; j < lpages; ++j)
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 60/61] KVM: Drop largepages_enabled and its accessor/mutator
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (58 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 59/61] KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 14:56   ` Vitaly Kuznetsov
  2020-02-01 18:52 ` [PATCH 61/61] KVM: x86: Move VMX's host_efer to common x86 code Sean Christopherson
       [not found] ` <87wo8ak84x.fsf@vitty.brq.redhat.com>
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Drop largepages_enabled, kvm_largepages_enabled() and
kvm_disable_largepages() now that all users are gone.

Note, largepages_enabled was an x86-only flag that got left in common
KVM code when KVM gained support for multiple architectures.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 include/linux/kvm_host.h |  2 --
 virt/kvm/kvm_main.c      | 13 -------------
 2 files changed, 15 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6d5331b0d937..50105b5c6370 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -683,8 +683,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change);
-bool kvm_largepages_enabled(void);
-void kvm_disable_largepages(void);
 /* flush all memory translations */
 void kvm_arch_flush_shadow_all(struct kvm *kvm);
 /* flush memory translations pointing to 'slot' */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index eb3709d55139..5851a8c27a28 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -149,8 +149,6 @@ static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
 __visible bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
 
-static bool largepages_enabled = true;
-
 #define KVM_EVENT_CREATE_VM 0
 #define KVM_EVENT_DESTROY_VM 1
 static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
@@ -1368,17 +1366,6 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(kvm_clear_dirty_log_protect);
 #endif
 
-bool kvm_largepages_enabled(void)
-{
-	return largepages_enabled;
-}
-
-void kvm_disable_largepages(void)
-{
-	largepages_enabled = false;
-}
-EXPORT_SYMBOL_GPL(kvm_disable_largepages);
-
 struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 {
 	return __gfn_to_memslot(kvm_memslots(kvm), gfn);
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* [PATCH 61/61] KVM: x86: Move VMX's host_efer to common x86 code
  2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
                   ` (59 preceding siblings ...)
  2020-02-01 18:52 ` [PATCH 60/61] KVM: Drop largepages_enabled and its accessor/mutator Sean Christopherson
@ 2020-02-01 18:52 ` Sean Christopherson
  2020-02-25 15:02   ` Vitaly Kuznetsov
       [not found] ` <87wo8ak84x.fsf@vitty.brq.redhat.com>
  61 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-01 18:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move host_efer to common x86 code and use it for CPUID's is_efer_nx() to
avoid constantly re-reading the MSR.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/cpuid.c            | 5 +----
 arch/x86/kvm/vmx/vmx.c          | 3 ---
 arch/x86/kvm/vmx/vmx.h          | 1 -
 arch/x86/kvm/x86.c              | 5 +++++
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4165d3ef11e4..a2a091d328c6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1257,6 +1257,8 @@ struct kvm_arch_async_pf {
 	bool direct_map;
 };
 
+extern u64 __read_mostly host_efer;
+
 extern struct kvm_x86_ops *kvm_x86_ops;
 extern struct kmem_cache *x86_fpu_cache;
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3d287fc6eb6e..e8beb1e542a8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -134,10 +134,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 
 static int is_efer_nx(void)
 {
-	unsigned long long efer = 0;
-
-	rdmsrl_safe(MSR_EFER, &efer);
-	return efer & EFER_NX;
+	return host_efer & EFER_NX;
 }
 
 static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e349689ac0cf..0009066e2009 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -433,7 +433,6 @@ static const struct kvm_vmx_segment_field {
 	VMX_SEGMENT_FIELD(LDTR),
 };
 
-u64 host_efer;
 static unsigned long host_idt_base;
 
 /*
@@ -7577,8 +7576,6 @@ static __init int hardware_setup(void)
 	struct desc_ptr dt;
 	int r, i, ept_lpage_level;
 
-	rdmsrl_safe(MSR_EFER, &host_efer);
-
 	store_idt(&dt);
 	host_idt_base = dt.address;
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 70eafa88876a..0e50fbcb8413 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -12,7 +12,6 @@
 #include "vmcs.h"
 
 extern const u32 vmx_msr_index[];
-extern u64 host_efer;
 
 extern u32 get_umwait_control_msr(void);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b40488fd2969..2103101eca78 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -185,6 +185,9 @@ static struct kvm_shared_msrs __percpu *shared_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU)
 
+u64 __read_mostly host_efer;
+EXPORT_SYMBOL_GPL(host_efer);
+
 static u64 __read_mostly host_xss;
 
 struct kvm_stats_debugfs_item debugfs_entries[] = {
@@ -9590,6 +9593,8 @@ int kvm_arch_hardware_setup(void)
 {
 	int r;
 
+	rdmsrl_safe(MSR_EFER, &host_efer);
+
 	kvm_set_cpu_caps();
 
 	r = kvm_x86_ops->hardware_setup();
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 168+ messages in thread

* Re: [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries
  2020-02-01 18:51 ` [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries Sean Christopherson
@ 2020-02-03 12:55   ` Vitaly Kuznetsov
  2020-02-03 15:59     ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-03 12:55 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Fix a long-standing bug that causes KVM to return 0 instead of -E2BIG
> when userspace's array is insufficiently sized.
>
> Note, while the Fixes: tag is accurate with respect to the immediate
> bug, it's likely that similar bugs in KVM_GET_SUPPORTED_CPUID existed
> prior to the refactoring, e.g. Qemu contains a workaround for the broken
> KVM_GET_SUPPORTED_CPUID behavior that predates the buggy commit by over
> two years.  The Qemu workaround is also likely the main reason the bug
> has gone unreported for so long.
>
> Qemu hack:
>   commit 76ae317f7c16aec6b469604b1764094870a75470
>   Author: Mark McLoughlin <markmc@redhat.com>
>   Date:   Tue May 19 18:55:21 2009 +0100
>
>     kvm: work around supported cpuid ioctl() brokenness
>
>     KVM_GET_SUPPORTED_CPUID has been known to fail to return -E2BIG
>     when it runs out of entries. Detect this by always trying again
>     with a bigger table if the ioctl() fills the table.
>
> Fixes: 831bf664e9c1f ("KVM: Refactor and simplify kvm_dev_ioctl_get_supported_cpuid")
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index b1c469446b07..47ce04762c20 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -908,9 +908,14 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  			goto out_free;
>  
>  		limit = cpuid_entries[nent - 1].eax;
> -		for (func = ent->func + 1; func <= limit && nent < cpuid->nent && r == 0; ++func)
> +		for (func = ent->func + 1; func <= limit && r == 0; ++func) {
> +			if (nent >= cpuid->nent) {
> +				r = -E2BIG;
> +				goto out_free;
> +			}
>  			r = do_cpuid_func(&cpuid_entries[nent], func,
>  				          &nent, cpuid->nent, type);
> +		}
>  
>  		if (r)
>  			goto out_free;

Is fixing a bug a valid reason for breaking buggy userspace? :-)
Personally, I think so. In particular, here the change is both the
return value and the fact that we don't do copy_to_user() anymore so I
think it's possible to meet a userspace which is going to get broken by
the change.
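
Something along these lines (purely illustrative; kvm_fd is the /dev/kvm
fd and consume_entries() is a made-up helper) used to "work" and would
now see -1/E2BIG with nothing copied out:

	struct kvm_cpuid2 *cpuid;

	cpuid = calloc(1, sizeof(*cpuid) + 64 * sizeof(cpuid->entries[0]));
	cpuid->nent = 64;

	/*
	 * Old behavior: if the array ran out at the wrong spot, KVM could
	 * return 0 with a silently truncated array.  With the fix it fails
	 * with E2BIG and nothing is copied out.
	 */
	if (!ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid))
		consume_entries(cpuid->entries, cpuid->nent);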

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries
  2020-02-03 12:55   ` Vitaly Kuznetsov
@ 2020-02-03 15:59     ` Sean Christopherson
  2020-02-25 14:36       ` Paolo Bonzini
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-03 15:59 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

On Mon, Feb 03, 2020 at 01:55:40PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index b1c469446b07..47ce04762c20 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -908,9 +908,14 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
> >  			goto out_free;
> >  
> >  		limit = cpuid_entries[nent - 1].eax;
> > -		for (func = ent->func + 1; func <= limit && nent < cpuid->nent && r == 0; ++func)
> > +		for (func = ent->func + 1; func <= limit && r == 0; ++func) {
> > +			if (nent >= cpuid->nent) {
> > +				r = -E2BIG;
> > +				goto out_free;
> > +			}
> >  			r = do_cpuid_func(&cpuid_entries[nent], func,
> >  				          &nent, cpuid->nent, type);
> > +		}
> >  
> >  		if (r)
> >  			goto out_free;
> 
> Is fixing a bug a valid reason for breaking buggy userspace? :-)
> Personally, I think so.

Linus usually disagrees :-)

> In particular, here the change is both the
> return value and the fact that we don't do copy_to_user() anymore so I
> think it's possible to meet a userspace which is going to get broken by
> the change.

Ugh, yeah, it would be possible.  Qemu (retries), CrosVM (hardcoded to
256 entries) and Firecracker (doesn't use the ioctl()) are all ok,
hopefully all other VMMs used in production environments follow suit.
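
For reference, the retry pattern copes with both the old and the fixed
behavior.  A minimal sketch, not Qemu's actual code, with kvm_fd being
the /dev/kvm fd:

	#include <errno.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static struct kvm_cpuid2 *get_supported_cpuid(int kvm_fd)
	{
		struct kvm_cpuid2 *cpuid;
		int nent = 64;

		for (;;) {
			cpuid = calloc(1, sizeof(*cpuid) +
					  nent * sizeof(cpuid->entries[0]));
			if (!cpuid)
				return NULL;
			cpuid->nent = nent;

			if (!ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid)) {
				/*
				 * Old kernels could return 0 with a completely
				 * full, silently truncated array; treat "full"
				 * as "retry with a bigger table", same idea as
				 * the Qemu workaround.
				 */
				if (cpuid->nent < nent)
					return cpuid;
			} else if (errno != E2BIG) {
				free(cpuid);
				return NULL;
			}

			free(cpuid);
			nent *= 2;
		}
	}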

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper
  2020-02-01 18:51 ` [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper Sean Christopherson
@ 2020-02-06 14:59   ` Vitaly Kuznetsov
  2020-02-07 19:53     ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-06 14:59 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the guts of kvm_dev_ioctl_get_cpuid()'s CPUID func loop to a
> separate helper to improve code readability and pave the way for future
> cleanup.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 45 ++++++++++++++++++++++++++------------------
>  1 file changed, 27 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 47ce04762c20..f49fdd06f511 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -839,6 +839,29 @@ static bool is_centaur_cpu(const struct kvm_cpuid_param *param)
>  	return boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
>  }
>  
> +static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
> +			  int *nent, int maxnent, unsigned int type)
> +{
> +	u32 limit;
> +	int r;
> +
> +	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
> +	if (r)
> +		return r;
> +
> +	limit = entries[*nent - 1].eax;
> +	for (func = func + 1; func <= limit; ++func) {
> +		if (*nent >= maxnent)
> +			return -E2BIG;
> +
> +		r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
> +		if (r)
> +			break;
> +	}
> +
> +	return r;
> +}
> +
>  static bool sanity_check_entries(struct kvm_cpuid_entry2 __user *entries,
>  				 __u32 num_entries, unsigned int ioctl_type)
>  {
> @@ -871,8 +894,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  			    unsigned int type)
>  {
>  	struct kvm_cpuid_entry2 *cpuid_entries;
> -	int limit, nent = 0, r = -E2BIG, i;
> -	u32 func;
> +	int nent = 0, r = -E2BIG, i;

Not this patches fault, but I just noticed that '-E2BIG' initializer
here is only being used for 

 'if (cpuid->nent < 1)'

case so I have two suggestion:
1) Return directly without the 'goto' , drop the initializer.
2) Return -EINVAL instead.

> +
>  	static const struct kvm_cpuid_param param[] = {
>  		{ .func = 0 },
>  		{ .func = 0x80000000 },
> @@ -901,22 +924,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  		if (ent->qualifier && !ent->qualifier(ent))
>  			continue;
>  
> -		r = do_cpuid_func(&cpuid_entries[nent], ent->func,
> -				  &nent, cpuid->nent, type);
> -
> -		if (r)
> -			goto out_free;
> -
> -		limit = cpuid_entries[nent - 1].eax;
> -		for (func = ent->func + 1; func <= limit && r == 0; ++func) {
> -			if (nent >= cpuid->nent) {
> -				r = -E2BIG;
> -				goto out_free;
> -			}
> -			r = do_cpuid_func(&cpuid_entries[nent], func,
> -				          &nent, cpuid->nent, type);
> -		}
> -
> +		r = get_cpuid_func(cpuid_entries, ent->func, &nent,
> +				   cpuid->nent, type);
>  		if (r)
>  			goto out_free;
>  	}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 03/61] KVM: x86: Simplify handling of Centaur CPUID leafs
  2020-02-01 18:51 ` [PATCH 03/61] KVM: x86: Simplify handling of Centaur CPUID leafs Sean Christopherson
@ 2020-02-06 15:05   ` Vitaly Kuznetsov
  2020-02-07 19:47     ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-06 15:05 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Refactor the handling of the Centaur-only CPUID leaf to detect the leaf
> via a runtime query instead of adding a one-off callback in the static
> array.  When the callback was introduced, there were additional fields
> in the array's structs, and more importantly, retpoline wasn't a thing.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 32 ++++++++++----------------------
>  1 file changed, 10 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index f49fdd06f511..de52cbb46171 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -829,15 +829,7 @@ static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
>  	return __do_cpuid_func(entry, func, nent, maxnent);
>  }
>  
> -struct kvm_cpuid_param {
> -	u32 func;
> -	bool (*qualifier)(const struct kvm_cpuid_param *param);
> -};
> -
> -static bool is_centaur_cpu(const struct kvm_cpuid_param *param)
> -{
> -	return boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
> -}
> +#define CENTAUR_CPUID_SIGNATURE 0xC0000000

arch/x86/kernel/cpu/centaur.c also hardcodes the value, would make sense
to put it to some x86 header instead.

>  
>  static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
>  			  int *nent, int maxnent, unsigned int type)
> @@ -845,6 +837,10 @@ static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
>  	u32 limit;
>  	int r;
>  
> +	if (func == CENTAUR_CPUID_SIGNATURE &&
> +	    boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
> +		return 0;
> +
>  	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
>  	if (r)
>  		return r;
> @@ -896,11 +892,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  	struct kvm_cpuid_entry2 *cpuid_entries;
>  	int nent = 0, r = -E2BIG, i;
>  
> -	static const struct kvm_cpuid_param param[] = {
> -		{ .func = 0 },
> -		{ .func = 0x80000000 },
> -		{ .func = 0xC0000000, .qualifier = is_centaur_cpu },
> -		{ .func = KVM_CPUID_SIGNATURE },
> +	static const u32 funcs[] = {
> +		0, 0x80000000, CENTAUR_CPUID_SIGNATURE, KVM_CPUID_SIGNATURE,
>  	};
>  
>  	if (cpuid->nent < 1)
> @@ -918,14 +911,9 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  		goto out;
>  
>  	r = 0;
> -	for (i = 0; i < ARRAY_SIZE(param); i++) {
> -		const struct kvm_cpuid_param *ent = &param[i];
> -
> -		if (ent->qualifier && !ent->qualifier(ent))
> -			continue;
> -
> -		r = get_cpuid_func(cpuid_entries, ent->func, &nent,
> -				   cpuid->nent, type);
> +	for (i = 0; i < ARRAY_SIZE(funcs); i++) {
> +		r = get_cpuid_func(cpuid_entries, funcs[i], &nent, cpuid->nent,
> +				   type);
>  		if (r)
>  			goto out_free;
>  	}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 04/61] KVM: x86: Clean up error handling in kvm_dev_ioctl_get_cpuid()
  2020-02-01 18:51 ` [PATCH 04/61] KVM: x86: Clean up error handling in kvm_dev_ioctl_get_cpuid() Sean Christopherson
@ 2020-02-06 15:09   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-06 15:09 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Clean up the error handling in kvm_dev_ioctl_get_cpuid(), which has
> gotten a bit crusty as the function has evolved over the years.
>
> Opportunistically hoist the static @funcs declaration to the top of the
> function to make it more obvious that it's a "static const".
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 19 +++++++------------
>  1 file changed, 7 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index de52cbb46171..11d5f311ef10 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -889,45 +889,40 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  			    struct kvm_cpuid_entry2 __user *entries,
>  			    unsigned int type)
>  {
> -	struct kvm_cpuid_entry2 *cpuid_entries;
> -	int nent = 0, r = -E2BIG, i;
> -
>  	static const u32 funcs[] = {
>  		0, 0x80000000, CENTAUR_CPUID_SIGNATURE, KVM_CPUID_SIGNATURE,
>  	};
>  
> +	struct kvm_cpuid_entry2 *cpuid_entries;
> +	int nent = 0, r, i;
> +
>  	if (cpuid->nent < 1)
> -		goto out;
> +		return -E2BIG;
>  	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
>  		cpuid->nent = KVM_MAX_CPUID_ENTRIES;
>  
>  	if (sanity_check_entries(entries, cpuid->nent, type))
>  		return -EINVAL;
>  
> -	r = -ENOMEM;
>  	cpuid_entries = vzalloc(array_size(sizeof(struct kvm_cpuid_entry2),
>  					   cpuid->nent));
>  	if (!cpuid_entries)
> -		goto out;
> +		return -ENOMEM;
>  
> -	r = 0;
>  	for (i = 0; i < ARRAY_SIZE(funcs); i++) {
>  		r = get_cpuid_func(cpuid_entries, funcs[i], &nent, cpuid->nent,
>  				   type);
>  		if (r)
>  			goto out_free;
>  	}
> +	cpuid->nent = nent;
>  
> -	r = -EFAULT;
>  	if (copy_to_user(entries, cpuid_entries,
>  			 nent * sizeof(struct kvm_cpuid_entry2)))
> -		goto out_free;
> -	cpuid->nent = nent;
> -	r = 0;
> +		r = -EFAULT;
>  
>  out_free:
>  	vfree(cpuid_entries);
> -out:
>  	return r;
>  }

Please [partially] disregard my comment on PATCH 02

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 05/61] KVM: x86: Check userspace CPUID array size after validating sub-leaf
  2020-02-01 18:51 ` [PATCH 05/61] KVM: x86: Check userspace CPUID array size after validating sub-leaf Sean Christopherson
@ 2020-02-06 15:24   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-06 15:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Verify that the next sub-leaf of CPUID 0x4 (or 0x8000001d) is valid
> before rejecting the entire KVM_GET_SUPPORTED_CPUID due to insufficient
> space in the userspace array.
>
> Note, although this is technically a bug, it's not visible to userspace
> as KVM_GET_SUPPORTED_CPUID is guaranteed to fail on KVM_CPUID_SIGNATURE,
> which is hardcoded to be added after the affected leafs.  The real
> motivation for the change is to tightly couple the nent/maxnent and
> do_host_cpuid() sequences in preparation for future cleanup.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 11d5f311ef10..e5cf1e0cf84a 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -552,12 +552,12 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  
>  		/* read more entries until cache_type is zero */
>  		for (i = 1; ; ++i) {
> -			if (*nent >= maxnent)
> -				goto out;
> -
>  			cache_type = entry[i - 1].eax & 0x1f;
>  			if (!cache_type)
>  				break;
> +
> +			if (*nent >= maxnent)
> +				goto out;
>  			do_host_cpuid(&entry[i], function, i);
>  			++*nent;
>  		}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 06/61] KVM: x86: Move CPUID 0xD.1 handling out of the index>0 loop
  2020-02-01 18:51 ` [PATCH 06/61] KVM: x86: Move CPUID 0xD.1 handling out of the index>0 loop Sean Christopherson
@ 2020-02-07 15:38   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-07 15:38 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the sub-leaf 1 handling for CPUID 0xD out of the index>0 loop so
> that the loop only handles index>1.  Sub-leafs 2+ have identical
> semantics, whereas sub-leaf 1 is effectively a feature sub-leaf.
>
> Moving sub-leaf 1 out of the loop does duplicate a bit of code, but
> the nent/maxnent code will be consolidated in a future patch, and
> duplicating the clear of ECX/EDX is arguably a good thing as the reasons
> for clearing said registers are completely different.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 37 ++++++++++++++++++++++---------------
>  1 file changed, 22 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index e5cf1e0cf84a..fc8540596386 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -653,26 +653,33 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  		if (!supported)
>  			break;
>  
> -		for (idx = 1, i = 1; idx < 64; ++idx) {
> +		if (*nent >= maxnent)
> +			goto out;
> +
> +		do_host_cpuid(&entry[1], function, 1);
> +		++*nent;
> +
> +		entry[1].eax &= kvm_cpuid_D_1_eax_x86_features;
> +		cpuid_mask(&entry[1].eax, CPUID_D_1_EAX);
> +		if (entry[1].eax & (F(XSAVES)|F(XSAVEC)))
> +			entry[1].ebx = xstate_required_size(supported, true);
> +		else
> +			entry[1].ebx = 0;
> +		/* Saving XSS controlled state via XSAVES isn't supported. */
> +		entry[1].ecx = 0;
> +		entry[1].edx = 0;
> +
> +		for (idx = 2, i = 2; idx < 64; ++idx) {
>  			u64 mask = ((u64)1 << idx);
> +
>  			if (*nent >= maxnent)
>  				goto out;
>  
>  			do_host_cpuid(&entry[i], function, idx);
> -			if (idx == 1) {
> -				entry[i].eax &= kvm_cpuid_D_1_eax_x86_features;
> -				cpuid_mask(&entry[i].eax, CPUID_D_1_EAX);
> -				entry[i].ebx = 0;
> -				if (entry[i].eax & (F(XSAVES)|F(XSAVEC)))
> -					entry[i].ebx =
> -						xstate_required_size(supported,
> -								     true);
> -			} else {
> -				if (entry[i].eax == 0 || !(supported & mask))
> -					continue;
> -				if (WARN_ON_ONCE(entry[i].ecx & 1))
> -					continue;
> -			}
> +			if (entry[i].eax == 0 || !(supported & mask))
> +				continue;
> +			if (WARN_ON_ONCE(entry[i].ecx & 1))
> +				continue;
>  			entry[i].ecx = 0;
>  			entry[i].edx = 0;
>  			++*nent;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 07/61] KVM: x86: Check for CPUID 0xD.N support before validating array size
  2020-02-01 18:51 ` [PATCH 07/61] KVM: x86: Check for CPUID 0xD.N support before validating array size Sean Christopherson
@ 2020-02-07 15:48   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-07 15:48 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Now that sub-leaf 1 is handled separately, verify the next sub-leaf is
> needed before rejecting KVM_GET_SUPPORTED_CPUID due to an insufficiently
> sized userspace array.
>
> Note, although this is technically a bug, it's not visible to userspace
> as KVM_GET_SUPPORTED_CPUID is guaranteed to fail on KVM_CPUID_SIGNATURE,
> which is hardcoded to be added after leaf 0xD.  The real motivation for
> the change is to tightly couple the nent/maxnent and do_host_cpuid()
> sequences in preparation for future cleanup.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index fc8540596386..fd9b29aa7abc 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -670,13 +670,14 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  		entry[1].edx = 0;
>  
>  		for (idx = 2, i = 2; idx < 64; ++idx) {
> -			u64 mask = ((u64)1 << idx);
> +			if (!(supported & BIT_ULL(idx)))
> +				continue;
>  
>  			if (*nent >= maxnent)
>  				goto out;
>  
>  			do_host_cpuid(&entry[i], function, idx);
> -			if (entry[i].eax == 0 || !(supported & mask))
> +			if (entry[i].eax == 0)
>  				continue;
>  			if (WARN_ON_ONCE(entry[i].ecx & 1))
>  				continue;

The remaining WARN_ON_ONCE() is technically the same 'bug not visible to
userspace' :-)

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 08/61] KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf
  2020-02-01 18:51 ` [PATCH 08/61] KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf Sean Christopherson
@ 2020-02-07 15:54   ` Vitaly Kuznetsov
  2020-02-07 15:56     ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-07 15:54 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> WARN if the save state size for a valid XCR0-managed sub-leaf is zero,
> which would indicate a KVM or CPU bug.  Add a comment to explain why KVM
> WARNs so the reader doesn't have to tease out the relevant bits from
> Intel's SDM and KVM's XCR0/XSS code.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index fd9b29aa7abc..424dde41cb5d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -677,10 +677,17 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  				goto out;
>  
>  			do_host_cpuid(&entry[i], function, idx);
> -			if (entry[i].eax == 0)
> -				continue;
> -			if (WARN_ON_ONCE(entry[i].ecx & 1))
> +
> +			/*
> +			 * The @supported check above should have filtered out
> +			 * invalid sub-leafs as well as sub-leafs managed by

Is it 'sub-leafs' or 'sub-leaves' actually? :-)

> +			 * IA32_XSS MSR.  Only XCR0-managed sub-leafs should
> +			 * reach this point, and they should have a non-zero
> +			 * save state size.
> +			 */
> +			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1)))
>  				continue;
> +
>  			entry[i].ecx = 0;
>  			entry[i].edx = 0;
>  			++*nent;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 09/61] KVM: x86: Refactor CPUID 0xD.N sub-leaf entry creation
  2020-02-01 18:51 ` [PATCH 09/61] KVM: x86: Refactor CPUID 0xD.N sub-leaf entry creation Sean Christopherson
@ 2020-02-07 15:56   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-07 15:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Increment the number of CPUID entries immediately after do_host_cpuid()
> in preparation for moving the logic into do_host_cpuid().  Handle the
> rare/impossible case of encountering a bogus sub-leaf by decrementing
> the number of entries on failure.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 424dde41cb5d..6e1685a16cca 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -677,6 +677,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  				goto out;
>  
>  			do_host_cpuid(&entry[i], function, idx);
> +			++*nent;
>  
>  			/*
>  			 * The @supported check above should have filtered out
> @@ -685,12 +686,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  			 * reach this point, and they should have a non-zero
>  			 * save state size.
>  			 */
> -			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1)))
> +			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1))) {
> +				--*nent;
>  				continue;
> +			}
>  
>  			entry[i].ecx = 0;
>  			entry[i].edx = 0;
> -			++*nent;
>  			++i;
>  		}
>  		break;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 08/61] KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf
  2020-02-07 15:54   ` Vitaly Kuznetsov
@ 2020-02-07 15:56     ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-07 15:56 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

On Fri, Feb 07, 2020 at 04:54:59PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > WARN if the save state size for a valid XCR0-managed sub-leaf is zero,
> > which would indicate a KVM or CPU bug.  Add a comment to explain why KVM
> > WARNs so the reader doesn't have to tease out the relevant bits from
> > Intel's SDM and KVM's XCR0/XSS code.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.c | 13 ++++++++++---
> >  1 file changed, 10 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index fd9b29aa7abc..424dde41cb5d 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -677,10 +677,17 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
> >  				goto out;
> >  
> >  			do_host_cpuid(&entry[i], function, idx);
> > -			if (entry[i].eax == 0)
> > -				continue;
> > -			if (WARN_ON_ONCE(entry[i].ecx & 1))
> > +
> > +			/*
> > +			 * The @supported check above should have filtered out
> > +			 * invalid sub-leafs as well as sub-leafs managed by
> 
> Is it 'sub-leafs' or 'sub-leaves' actually? :-)

Yes.  :-D

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 03/61] KVM: x86: Simplify handling of Centaur CPUID leafs
  2020-02-06 15:05   ` Vitaly Kuznetsov
@ 2020-02-07 19:47     ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-07 19:47 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

On Thu, Feb 06, 2020 at 04:05:54PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Refactor the handling of the Centaur-only CPUID leaf to detect the leaf
> > via a runtime query instead of adding a one-off callback in the static
> > array.  When the callback was introduced, there were additional fields
> > in the array's structs, and more importantly, retpoline wasn't a thing.
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.c | 32 ++++++++++----------------------
> >  1 file changed, 10 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index f49fdd06f511..de52cbb46171 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -829,15 +829,7 @@ static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
> >  	return __do_cpuid_func(entry, func, nent, maxnent);
> >  }
> >  
> > -struct kvm_cpuid_param {
> > -	u32 func;
> > -	bool (*qualifier)(const struct kvm_cpuid_param *param);
> > -};
> > -
> > -static bool is_centaur_cpu(const struct kvm_cpuid_param *param)
> > -{
> > -	return boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
> > -}
> > +#define CENTAUR_CPUID_SIGNATURE 0xC0000000
> 
> arch/x86/kernel/cpu/centaur.c also hardcodes the value, would make sense
> to put it to some x86 header instead.

Ya, I just didn't want to touch non-KVM code in a 60+ patch series.

> >  static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
> >  			  int *nent, int maxnent, unsigned int type)
> > @@ -845,6 +837,10 @@ static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
> >  	u32 limit;
> >  	int r;
> >  
> > +	if (func == CENTAUR_CPUID_SIGNATURE &&
> > +	    boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
> > +		return 0;
> > +
> >  	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
> >  	if (r)
> >  		return r;
> > @@ -896,11 +892,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
> >  	struct kvm_cpuid_entry2 *cpuid_entries;
> >  	int nent = 0, r = -E2BIG, i;
> >  
> > -	static const struct kvm_cpuid_param param[] = {
> > -		{ .func = 0 },
> > -		{ .func = 0x80000000 },
> > -		{ .func = 0xC0000000, .qualifier = is_centaur_cpu },
> > -		{ .func = KVM_CPUID_SIGNATURE },
> > +	static const u32 funcs[] = {
> > +		0, 0x80000000, CENTAUR_CPUID_SIGNATURE, KVM_CPUID_SIGNATURE,
> >  	};
> >  
> >  	if (cpuid->nent < 1)
> > @@ -918,14 +911,9 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
> >  		goto out;
> >  
> >  	r = 0;
> > -	for (i = 0; i < ARRAY_SIZE(param); i++) {
> > -		const struct kvm_cpuid_param *ent = &param[i];
> > -
> > -		if (ent->qualifier && !ent->qualifier(ent))
> > -			continue;
> > -
> > -		r = get_cpuid_func(cpuid_entries, ent->func, &nent,
> > -				   cpuid->nent, type);
> > +	for (i = 0; i < ARRAY_SIZE(funcs); i++) {
> > +		r = get_cpuid_func(cpuid_entries, funcs[i], &nent, cpuid->nent,
> > +				   type);
> >  		if (r)
> >  			goto out_free;
> >  	}
> 
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> 
> -- 
> Vitaly
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper
  2020-02-06 14:59   ` Vitaly Kuznetsov
@ 2020-02-07 19:53     ` Sean Christopherson
  2020-02-25 14:37       ` Paolo Bonzini
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-07 19:53 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel, Paolo Bonzini

On Thu, Feb 06, 2020 at 03:59:49PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Move the guts of kvm_dev_ioctl_get_cpuid()'s CPUID func loop to a
> > separate helper to improve code readability and pave the way for future
> > cleanup.
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.c | 45 ++++++++++++++++++++++++++------------------
> >  1 file changed, 27 insertions(+), 18 deletions(-)
> >
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index 47ce04762c20..f49fdd06f511 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -839,6 +839,29 @@ static bool is_centaur_cpu(const struct kvm_cpuid_param *param)
> >  	return boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
> >  }
> >  
> > +static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
> > +			  int *nent, int maxnent, unsigned int type)
> > +{
> > +	u32 limit;
> > +	int r;
> > +
> > +	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
> > +	if (r)
> > +		return r;
> > +
> > +	limit = entries[*nent - 1].eax;
> > +	for (func = func + 1; func <= limit; ++func) {
> > +		if (*nent >= maxnent)
> > +			return -E2BIG;
> > +
> > +		r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
> > +		if (r)
> > +			break;
> > +	}
> > +
> > +	return r;
> > +}
> > +
> >  static bool sanity_check_entries(struct kvm_cpuid_entry2 __user *entries,
> >  				 __u32 num_entries, unsigned int ioctl_type)
> >  {
> > @@ -871,8 +894,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
> >  			    unsigned int type)
> >  {
> >  	struct kvm_cpuid_entry2 *cpuid_entries;
> > -	int limit, nent = 0, r = -E2BIG, i;
> > -	u32 func;
> > +	int nent = 0, r = -E2BIG, i;
> 
> Not this patch's fault, but I just noticed that the '-E2BIG' initializer
> here is only used for the
> 
>  'if (cpuid->nent < 1)'
> 
> case, so I have two suggestions:
> 1) Return directly without the 'goto', drop the initializer.

Great minds think alike ;-)

> 2) Return -EINVAL instead.

I agree that it _should_ be -EINVAL, but I just don't think it's worth
the possibility of breaking (stupid) userspace that was doing something
like:

	for (i = 0; i < max_cpuid_size; i++) {
		cpuid.nent = i;

		r = ioctl(fd, KVM_GET_SUPPORTED_CPUID, &cpuid);
		if (!r || r != -E2BIG)
			break;
	}

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code
  2020-02-01 18:51 ` [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code Sean Christopherson
@ 2020-02-13 13:51   ` Xiaoyao Li
  2020-02-13 17:37     ` Sean Christopherson
  2020-02-24 15:14   ` Vitaly Kuznetsov
  1 sibling, 1 reply; 168+ messages in thread
From: Xiaoyao Li @ 2020-02-13 13:51 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 2/2/2020 2:51 AM, Sean Christopherson wrote:
> Move the MPX CPUID adjustments into VMX to eliminate an instance of the
> undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
> common CPUID handling code.
> 
> Note, VMX must manually check for kernel support via
> boot_cpu_has(X86_FEATURE_MPX).

Why must?

> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   arch/x86/kvm/cpuid.c   |  3 +--
>   arch/x86/kvm/vmx/vmx.c | 14 ++++++++++++--
>   2 files changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index cb5870a323cc..09e24d1d731c 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -340,7 +340,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>   static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>   {
>   	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
> -	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
>   	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
>   	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>   	unsigned f_la57;
> @@ -349,7 +348,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>   	/* cpuid 7.0.ebx */
>   	const u32 kvm_cpuid_7_0_ebx_x86_features =
>   		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> -		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | f_mpx | F(RDSEED) |
> +		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
>   		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
>   		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
>   		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3ff830e2258e..143193fc178e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7106,8 +7106,18 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>   
>   static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>   {
> -	if (entry->function == 1 && nested)
> -		entry->ecx |= feature_bit(VMX);
> +	switch (entry->function) {
> +	case 0x1:
> +		if (nested)
> +			cpuid_entry_set(entry, X86_FEATURE_VMX);
> +		break;
> +	case 0x7:
> +		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
> +			cpuid_entry_set(entry, X86_FEATURE_MPX);
> +		break;
> +	default:
> +		break;
> +	}
>   }
>   
>   static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
> 


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 20/61] KVM: x86: Calculate the supported xcr0 mask at load time
  2020-02-01 18:51 ` [PATCH 20/61] KVM: x86: Calculate the supported xcr0 mask at load time Sean Christopherson
@ 2020-02-13 14:21   ` Xiaoyao Li
  0 siblings, 0 replies; 168+ messages in thread
From: Xiaoyao Li @ 2020-02-13 14:21 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 2/2/2020 2:51 AM, Sean Christopherson wrote:
> Add a new global variable, supported_xcr0, to track which xcr0 bits can
> be exposed to the guest instead of calculating the mask on every call.
> The supported bits are constant for a given instance of KVM.
> 
> This paves the way toward eliminating the ->mpx_supported() call in
> kvm_mpx_supported(), e.g. eliminates multiple retpolines in VMX's nested
> VM-Enter path, and eventually toward eliminating ->mpx_supported()
> altogether.
> 
> No functional change intended.
> 

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   arch/x86/kvm/cpuid.c   | 32 +++++++++-----------------------
>   arch/x86/kvm/svm.c     |  2 ++
>   arch/x86/kvm/vmx/vmx.c |  4 ++++
>   arch/x86/kvm/x86.c     | 14 +++++++++++---
>   arch/x86/kvm/x86.h     |  7 +------
>   5 files changed, 27 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index bfd8304a8437..b9763eb711cb 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -52,16 +52,6 @@ bool kvm_mpx_supported(void)
>   }
>   EXPORT_SYMBOL_GPL(kvm_mpx_supported);
>   
> -u64 kvm_supported_xcr0(void)
> -{
> -	u64 xcr0 = KVM_SUPPORTED_XCR0 & host_xcr0;
> -
> -	if (!kvm_mpx_supported())
> -		xcr0 &= ~(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
> -
> -	return xcr0;
> -}
> -
>   #define F feature_bit
>   
>   int kvm_update_cpuid(struct kvm_vcpu *vcpu)
> @@ -107,8 +97,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
>   		vcpu->arch.guest_xstate_size = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
>   	} else {
>   		vcpu->arch.guest_supported_xcr0 =
> -			(best->eax | ((u64)best->edx << 32)) &
> -			kvm_supported_xcr0();
> +			(best->eax | ((u64)best->edx << 32)) & supported_xcr0;
>   		vcpu->arch.guest_xstate_size = best->ebx =
>   			xstate_required_size(vcpu->arch.xcr0, false);
>   	}
> @@ -633,14 +622,12 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>   				goto out;
>   		}
>   		break;
> -	case 0xd: {
> -		u64 supported = kvm_supported_xcr0();
> -
> -		entry->eax &= supported;
> -		entry->ebx = xstate_required_size(supported, false);
> +	case 0xd:
> +		entry->eax &= supported_xcr0;
> +		entry->ebx = xstate_required_size(supported_xcr0, false);
>   		entry->ecx = entry->ebx;
> -		entry->edx &= supported >> 32;
> -		if (!supported)
> +		entry->edx &= supported_xcr0 >> 32;
> +		if (!supported_xcr0)
>   			break;
>   
>   		entry = do_host_cpuid(array, function, 1);
> @@ -650,7 +637,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>   		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
>   		cpuid_mask(&entry->eax, CPUID_D_1_EAX);
>   		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
> -			entry->ebx = xstate_required_size(supported, true);
> +			entry->ebx = xstate_required_size(supported_xcr0, true);
>   		else
>   			entry->ebx = 0;
>   		/* Saving XSS controlled state via XSAVES isn't supported. */
> @@ -658,7 +645,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>   		entry->edx = 0;
>   
>   		for (i = 2; i < 64; ++i) {
> -			if (!(supported & BIT_ULL(i)))
> +			if (!(supported_xcr0 & BIT_ULL(i)))
>   				continue;
>   
>   			entry = do_host_cpuid(array, function, i);
> @@ -666,7 +653,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>   				goto out;
>   
>   			/*
> -			 * The @supported check above should have filtered out
> +			 * The supported check above should have filtered out
>   			 * invalid sub-leafs as well as sub-leafs managed by
>   			 * IA32_XSS MSR.  Only XCR0-managed sub-leafs should
>   			 * reach this point, and they should have a non-zero
> @@ -681,7 +668,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>   			entry->edx = 0;
>   		}
>   		break;
> -	}
>   	/* Intel PT */
>   	case 0x14:
>   		if (!f_intel_pt)
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index bf0556588ad0..af096c4f9c5f 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1368,6 +1368,8 @@ static __init int svm_hardware_setup(void)
>   
>   	init_msrpm_offsets();
>   
> +	supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
> +
>   	if (boot_cpu_has(X86_FEATURE_NX))
>   		kvm_enable_efer_bits(EFER_NX);
>   
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 588aa5e4164e..32a84ec15064 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7590,6 +7590,10 @@ static __init int hardware_setup(void)
>   		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
>   	}
>   
> +	if (!kvm_mpx_supported())
> +		supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS |
> +				    XFEATURE_MASK_BNDCSR);
> +
>   	if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
>   	    !(cpu_has_vmx_invvpid_single() || cpu_has_vmx_invvpid_global()))
>   		enable_vpid = 0;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 7e3f1d937224..f90c56c0c64a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -180,6 +180,11 @@ struct kvm_shared_msrs {
>   static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
>   static struct kvm_shared_msrs __percpu *shared_msrs;
>   
> +#define KVM_SUPPORTED_XCR0     (XFEATURE_MASK_FP | XFEATURE_MASK_SSE \
> +				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
> +				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
> +				| XFEATURE_MASK_PKRU)
> +
>   static u64 __read_mostly host_xss;
>   
>   struct kvm_stats_debugfs_item debugfs_entries[] = {
> @@ -226,6 +231,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
>   };
>   
>   u64 __read_mostly host_xcr0;
> +u64 __read_mostly supported_xcr0;
> +EXPORT_SYMBOL_GPL(supported_xcr0);
>   
>   struct kmem_cache *x86_fpu_cache;
>   EXPORT_SYMBOL_GPL(x86_fpu_cache);
> @@ -4081,8 +4088,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
>   		 * CPUID leaf 0xD, index 0, EDX:EAX.  This is for compatibility
>   		 * with old userspace.
>   		 */
> -		if (xstate_bv & ~kvm_supported_xcr0() ||
> -			mxcsr & ~mxcsr_feature_mask)
> +		if (xstate_bv & ~supported_xcr0 || mxcsr & ~mxcsr_feature_mask)
>   			return -EINVAL;
>   		load_xsave(vcpu, (u8 *)guest_xsave->region);
>   	} else {
> @@ -7335,8 +7341,10 @@ int kvm_arch_init(void *opaque)
>   
>   	perf_register_guest_info_callbacks(&kvm_guest_cbs);
>   
> -	if (boot_cpu_has(X86_FEATURE_XSAVE))
> +	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
>   		host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
> +		supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;
> +	}
>   
>   	kvm_lapic_init();
>   	if (pi_inject_timer == -1)
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 3624665acee4..02b49ee49e24 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -280,13 +280,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>   			    int emulation_type, void *insn, int insn_len);
>   enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
>   
> -#define KVM_SUPPORTED_XCR0     (XFEATURE_MASK_FP | XFEATURE_MASK_SSE \
> -				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
> -				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
> -				| XFEATURE_MASK_PKRU)
>   extern u64 host_xcr0;
> -
> -extern u64 kvm_supported_xcr0(void);
> +extern u64 supported_xcr0;
>   
>   extern unsigned int min_timer_period_us;
>   
> 


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support
  2020-02-01 18:51 ` [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support Sean Christopherson
@ 2020-02-13 14:25   ` Xiaoyao Li
  2020-02-21 15:32   ` Vitaly Kuznetsov
  1 sibling, 0 replies; 168+ messages in thread
From: Xiaoyao Li @ 2020-02-13 14:25 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 2/2/2020 2:51 AM, Sean Christopherson wrote:
> Query supported_xcr0 when checking for MPX support instead of invoking
> ->mpx_supported() and drop ->mpx_supported() as kvm_mpx_supported() was
> its last user.  Rename vmx_mpx_supported() to cpu_has_vmx_mpx() to
> better align with VMX/VMCS nomenclature.
> 
> Modify VMX's adjustment of xcr0 to call cpu_has_vmx_mpx() (renamed from
> vmx_mpx_supported()) directly to avoid reading supported_xcr0 before
> it's fully configured.
> 
> No functional change intended.
> 

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   arch/x86/include/asm/kvm_host.h | 2 +-
>   arch/x86/kvm/cpuid.c            | 3 +--
>   arch/x86/kvm/svm.c              | 6 ------
>   arch/x86/kvm/vmx/capabilities.h | 2 +-
>   arch/x86/kvm/vmx/vmx.c          | 3 +--
>   5 files changed, 4 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 77d206a93658..85f0d96cfeb2 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1163,7 +1163,7 @@ struct kvm_x86_ops {
>   			       enum x86_intercept_stage stage);
>   	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
>   		enum exit_fastpath_completion *exit_fastpath);
> -	bool (*mpx_supported)(void);
> +
>   	bool (*xsaves_supported)(void);
>   	bool (*umip_emulated)(void);
>   	bool (*pt_supported)(void);
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index b9763eb711cb..84006cc4007c 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -47,8 +47,7 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>   
>   bool kvm_mpx_supported(void)
>   {
> -	return ((host_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR))
> -		 && kvm_x86_ops->mpx_supported());
> +	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
>   }
>   EXPORT_SYMBOL_GPL(kvm_mpx_supported);
>   
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index af096c4f9c5f..3c7ddaff405d 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6082,11 +6082,6 @@ static bool svm_invpcid_supported(void)
>   	return false;
>   }
>   
> -static bool svm_mpx_supported(void)
> -{
> -	return false;
> -}
> -
>   static bool svm_xsaves_supported(void)
>   {
>   	return boot_cpu_has(X86_FEATURE_XSAVES);
> @@ -7468,7 +7463,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>   
>   	.rdtscp_supported = svm_rdtscp_supported,
>   	.invpcid_supported = svm_invpcid_supported,
> -	.mpx_supported = svm_mpx_supported,
>   	.xsaves_supported = svm_xsaves_supported,
>   	.umip_emulated = svm_umip_emulated,
>   	.pt_supported = svm_pt_supported,
> diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
> index 1a6a99382e94..0a0b1494a934 100644
> --- a/arch/x86/kvm/vmx/capabilities.h
> +++ b/arch/x86/kvm/vmx/capabilities.h
> @@ -100,7 +100,7 @@ static inline bool cpu_has_load_perf_global_ctrl(void)
>   	       (vmcs_config.vmexit_ctrl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL);
>   }
>   
> -static inline bool vmx_mpx_supported(void)
> +static inline bool cpu_has_vmx_mpx(void)
>   {
>   	return (vmcs_config.vmexit_ctrl & VM_EXIT_CLEAR_BNDCFGS) &&
>   		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 32a84ec15064..98fd651f7f7e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7590,7 +7590,7 @@ static __init int hardware_setup(void)
>   		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
>   	}
>   
> -	if (!kvm_mpx_supported())
> +	if (!cpu_has_vmx_mpx())
>   		supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS |
>   				    XFEATURE_MASK_BNDCSR);
>   
> @@ -7857,7 +7857,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>   
>   	.check_intercept = vmx_check_intercept,
>   	.handle_exit_irqoff = vmx_handle_exit_irqoff,
> -	.mpx_supported = vmx_mpx_supported,
>   	.xsaves_supported = vmx_xsaves_supported,
>   	.umip_emulated = vmx_umip_emulated,
>   	.pt_supported = vmx_pt_supported,
> 


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function
  2020-02-01 18:51 ` [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function Sean Christopherson
@ 2020-02-13 14:26   ` Xiaoyao Li
  2020-02-21 15:33   ` Vitaly Kuznetsov
  1 sibling, 0 replies; 168+ messages in thread
From: Xiaoyao Li @ 2020-02-13 14:26 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 2/2/2020 2:51 AM, Sean Christopherson wrote:
> Expose kvm_mpx_supported() as a static inline so that it can be inlined
> in kvm_intel.ko.
> 
> No functional change intended.
> 

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   arch/x86/kvm/cpuid.c | 6 ------
>   arch/x86/kvm/cpuid.h | 1 -
>   arch/x86/kvm/x86.h   | 5 +++++
>   3 files changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 84006cc4007c..d3c93b94abc3 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -45,12 +45,6 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>   	return ret;
>   }
>   
> -bool kvm_mpx_supported(void)
> -{
> -	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
> -}
> -EXPORT_SYMBOL_GPL(kvm_mpx_supported);
> -
>   #define F feature_bit
>   
>   int kvm_update_cpuid(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 7366c618aa04..c1ac0995843d 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -7,7 +7,6 @@
>   #include <asm/processor.h>
>   
>   int kvm_update_cpuid(struct kvm_vcpu *vcpu);
> -bool kvm_mpx_supported(void);
>   struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
>   					      u32 function, u32 index);
>   int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 02b49ee49e24..bfac4a80956c 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -283,6 +283,11 @@ enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vc
>   extern u64 host_xcr0;
>   extern u64 supported_xcr0;
>   
> +static inline bool kvm_mpx_supported(void)
> +{
> +	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
> +}
> +
>   extern unsigned int min_timer_period_us;
>   
>   extern bool enable_vmware_backdoor;
> 


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code
  2020-02-13 13:51   ` Xiaoyao Li
@ 2020-02-13 17:37     ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-13 17:37 UTC (permalink / raw)
  To: Xiaoyao Li
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Thu, Feb 13, 2020 at 09:51:08PM +0800, Xiaoyao Li wrote:
> On 2/2/2020 2:51 AM, Sean Christopherson wrote:
> >Move the MPX CPUID adjustments into VMX to eliminate an instance of the
> >undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
> >common CPUID handling code.
> >
> >Note, VMX must manually check for kernel support via
> >boot_cpu_has(X86_FEATURE_MPX).
> 
> Why must?

do_cpuid_7_mask() runs the CPUID result through cpuid_mask(), which masks
features based on boot_cpu_data, i.e. clears bits for features that are
supported by hardware but unsupported/disabled by the kernel.

vmx_set_supported_cpuid() needs to query boot_cpu_has() to preserve the
"supported by kernel" check provided by cpuid_mask().

> 
> >No functional change intended.
> >
> >Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> >---
> >  arch/x86/kvm/cpuid.c   |  3 +--
> >  arch/x86/kvm/vmx/vmx.c | 14 ++++++++++++--
> >  2 files changed, 13 insertions(+), 4 deletions(-)
> >
> >diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> >index cb5870a323cc..09e24d1d731c 100644
> >--- a/arch/x86/kvm/cpuid.c
> >+++ b/arch/x86/kvm/cpuid.c
> >@@ -340,7 +340,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
> >  static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> >  {
> >  	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
> >-	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
> >  	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
> >  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
> >  	unsigned f_la57;
> >@@ -349,7 +348,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> >  	/* cpuid 7.0.ebx */
> >  	const u32 kvm_cpuid_7_0_ebx_x86_features =
> >  		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> >-		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | f_mpx | F(RDSEED) |
> >+		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
> >  		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
> >  		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
> >  		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
> >diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> >index 3ff830e2258e..143193fc178e 100644
> >--- a/arch/x86/kvm/vmx/vmx.c
> >+++ b/arch/x86/kvm/vmx/vmx.c
> >@@ -7106,8 +7106,18 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
> >  static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
> >  {
> >-	if (entry->function == 1 && nested)
> >-		entry->ecx |= feature_bit(VMX);
> >+	switch (entry->function) {
> >+	case 0x1:
> >+		if (nested)
> >+			cpuid_entry_set(entry, X86_FEATURE_VMX);
> >+		break;
> >+	case 0x7:
> >+		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
> >+			cpuid_entry_set(entry, X86_FEATURE_MPX);
> >+		break;
> >+	default:
> >+		break;
> >+	}
> >  }
> >  static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
> >
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors
  2020-02-01 18:51 ` [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors Sean Christopherson
@ 2020-02-14  9:44   ` Xiaoyao Li
  2020-02-14 17:09     ` Sean Christopherson
  2020-02-21 15:57   ` Vitaly Kuznetsov
  1 sibling, 1 reply; 168+ messages in thread
From: Xiaoyao Li @ 2020-02-14  9:44 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 2/2/2020 2:51 AM, Sean Christopherson wrote:
> Introduce accessors to retrieve feature bits from CPUID entries and use
> the new accessors where applicable.  Using the accessors eliminates the
> need to manually specify the register to be queried at no extra cost
> (binary output is identical) and will allow adding runtime consistency
> checks on the function and index in a future patch.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   arch/x86/kvm/cpuid.c |  9 +++++----
>   arch/x86/kvm/cpuid.h | 46 +++++++++++++++++++++++++++++++++++---------
>   2 files changed, 42 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index e3026fe638aa..3316963dad3d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -68,7 +68,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
>   		best->edx |= F(APIC);
>   
>   	if (apic) {
> -		if (best->ecx & F(TSC_DEADLINE_TIMER))
> +		if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))
>   			apic->lapic_timer.timer_mode_mask = 3 << 17;
>   		else
>   			apic->lapic_timer.timer_mode_mask = 1 << 17;
> @@ -96,7 +96,8 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
>   	}
>   
>   	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
> -	if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
> +	if (best && (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
> +		     cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
>   		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
>   
>   	/*
> @@ -155,7 +156,7 @@ static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
>   			break;
>   		}
>   	}
> -	if (entry && (entry->edx & F(NX)) && !is_efer_nx()) {
> +	if (entry && cpuid_entry_has(entry, X86_FEATURE_NX) && !is_efer_nx()) {
>   		entry->edx &= ~F(NX);
>   		printk(KERN_INFO "kvm: guest NX capability removed\n");
>   	}
> @@ -387,7 +388,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>   		entry->ebx |= F(TSC_ADJUST);
>   
>   		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
> -		f_la57 = entry->ecx & F(LA57);
> +		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
>   		cpuid_mask(&entry->ecx, CPUID_7_ECX);
>   		/* Set LA57 based on hardware capability. */
>   		entry->ecx |= f_la57;
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 72a79bdfed6b..64e96e4086e2 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -95,16 +95,10 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
>   	return reverse_cpuid[x86_leaf];
>   }
>   
> -static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
> +static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> +						  const struct cpuid_reg *cpuid)
>   {
> -	struct kvm_cpuid_entry2 *entry;
> -	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> -
> -	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
> -	if (!entry)
> -		return NULL;
> -
> -	switch (cpuid.reg) {
> +	switch (cpuid->reg) {
>   	case CPUID_EAX:
>   		return &entry->eax;
>   	case CPUID_EBX:
> @@ -119,6 +113,40 @@ static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsi
>   	}
>   }
>   
> +static __always_inline u32 *cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> +						unsigned x86_feature)
> +{
> +	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> +
> +	return __cpuid_entry_get_reg(entry, &cpuid);
> +}
> +
> +static __always_inline u32 cpuid_entry_get(struct kvm_cpuid_entry2 *entry,
> +					   unsigned x86_feature)
> +{
> +	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
> +
> +	return *reg & __feature_bit(x86_feature);
> +}
> +

This helper function is unnecessary. There is only one user throughout 
this series, i.e., cpuid_entry_has() below.

And I cannot imagine any other possible use case for it.

> +static __always_inline bool cpuid_entry_has(struct kvm_cpuid_entry2 *entry,
> +					    unsigned x86_feature)
> +{
> +	return cpuid_entry_get(entry, x86_feature);
> +}
> +
> +static __always_inline int *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
                           ^
Should be                 u32
otherwise, the previous patch will be unhappy. :)

> +{
> +	struct kvm_cpuid_entry2 *entry;
> +	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> +
> +	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
> +	if (!entry)
> +		return NULL;
> +
> +	return __cpuid_entry_get_reg(entry, &cpuid);
> +}
> +
>   static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_feature)
>   {
>   	u32 *reg;
> 


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors
  2020-02-14  9:44   ` Xiaoyao Li
@ 2020-02-14 17:09     ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-14 17:09 UTC (permalink / raw)
  To: Xiaoyao Li
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Fri, Feb 14, 2020 at 05:44:41PM +0800, Xiaoyao Li wrote:
> On 2/2/2020 2:51 AM, Sean Christopherson wrote:
> >@@ -387,7 +388,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> >  		entry->ebx |= F(TSC_ADJUST);
> >  		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
> >-		f_la57 = entry->ecx & F(LA57);
> >+		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);

Note, cpuid_entry_get() is used here.

> >  		cpuid_mask(&entry->ecx, CPUID_7_ECX);
> >  		/* Set LA57 based on hardware capability. */
> >  		entry->ecx |= f_la57;
> >diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> >index 72a79bdfed6b..64e96e4086e2 100644
> >--- a/arch/x86/kvm/cpuid.h
> >+++ b/arch/x86/kvm/cpuid.h
> >@@ -95,16 +95,10 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
> >  	return reverse_cpuid[x86_leaf];
> >  }
> >-static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
> >+static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> >+						  const struct cpuid_reg *cpuid)
> >  {
> >-	struct kvm_cpuid_entry2 *entry;
> >-	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> >-
> >-	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
> >-	if (!entry)
> >-		return NULL;
> >-
> >-	switch (cpuid.reg) {
> >+	switch (cpuid->reg) {
> >  	case CPUID_EAX:
> >  		return &entry->eax;
> >  	case CPUID_EBX:
> >@@ -119,6 +113,40 @@ static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsi
> >  	}
> >  }
> >+static __always_inline u32 *cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> >+						unsigned x86_feature)
> >+{
> >+	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> >+
> >+	return __cpuid_entry_get_reg(entry, &cpuid);
> >+}
> >+
> >+static __always_inline u32 cpuid_entry_get(struct kvm_cpuid_entry2 *entry,
> >+					   unsigned x86_feature)
> >+{
> >+	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
> >+
> >+	return *reg & __feature_bit(x86_feature);
> >+}
> >+
> 
> This helper function is unnecessary. There is only one user throughout this
> series, i.e., cpuid_entry_has() below.

And the LA57 case above.

> And I cannot imagine any other possible use case for it.

The LA57 case, which admittedly goes away soon, was subtle enough (OR in
the flag instead of querying yes/no) that I wanted to keep the accessor around
in case a similar case pops up in the future.
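
For reference, the LA57 snippet from the quoted hunk; cpuid_entry_get()
hands back the raw F(LA57) bit so it can be OR'd back in after the host
mask, whereas cpuid_entry_has() would collapse it to a bool:

	entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
	f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
	cpuid_mask(&entry->ecx, CPUID_7_ECX);
	/* Set LA57 based on hardware capability. */
	entry->ecx |= f_la57;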

> >+static __always_inline bool cpuid_entry_has(struct kvm_cpuid_entry2 *entry,
> >+					    unsigned x86_feature)
> >+{
> >+	return cpuid_entry_get(entry, x86_feature);
> >+}
> >+
> >+static __always_inline int *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
>                           ^
> Should be                 u32
> otherwise, the previous patch will be unhappy. :)

Doh, thanks!
 
> >+{
> >+	struct kvm_cpuid_entry2 *entry;
> >+	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> >+
> >+	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
> >+	if (!entry)
> >+		return NULL;
> >+
> >+	return __cpuid_entry_get_reg(entry, &cpuid);
> >+}
> >+
> >  static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_feature)
> >  {
> >  	u32 *reg;
> >
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 10/61] KVM: x86: Clean up CPUID 0x7 sub-leaf loop
  2020-02-01 18:51 ` [PATCH 10/61] KVM: x86: Clean up CPUID 0x7 sub-leaf loop Sean Christopherson
@ 2020-02-21 14:20   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 14:20 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Refactor the sub-leaf loop for CPUID 0x7 to move the main leaf out of
> said loop.  The emitted code savings is basically a mirage, as the
> handling of the main leaf can easily be split to its own helper to avoid
> code bloat.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 6e1685a16cca..b626893a11d5 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -573,16 +573,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  	case 7: {
>  		int i;
>  
> -		for (i = 0; ; ) {
> +		do_cpuid_7_mask(entry, 0);
> +
> +		for (i = 1; i <= entry->eax; i++) {
> +			if (*nent >= maxnent)
> +				goto out;
> +
> +			do_host_cpuid(&entry[i], function, i);
> +			++*nent;
> +
>  			do_cpuid_7_mask(&entry[i], i);
> -			if (i == entry->eax)
> -				break;
> -			if (*nent >= maxnent)
> -				goto out;
> -
> -			++i;
> -			do_host_cpuid(&entry[i], function, i);
> -			++*nent;
>  		}
>  		break;
>  	}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 11/61] KVM: x86: Drop the explicit @index from do_cpuid_7_mask()
  2020-02-01 18:51 ` [PATCH 11/61] KVM: x86: Drop the explicit @index from do_cpuid_7_mask() Sean Christopherson
@ 2020-02-21 14:22   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 14:22 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Drop the index param from do_cpuid_7_mask() and instead switch on the
> entry's index, which is guaranteed to be set by do_host_cpuid().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index b626893a11d5..fd04f17d1836 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -346,7 +346,7 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
>  	return 0;
>  }
>  
> -static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
> +static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  {
>  	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
>  	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
> @@ -380,7 +380,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
>  	const u32 kvm_cpuid_7_1_eax_x86_features =
>  		F(AVX512_BF16);
>  
> -	switch (index) {
> +	switch (entry->index) {
>  	case 0:
>  		entry->eax = min(entry->eax, 1u);
>  		entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
> @@ -573,7 +573,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  	case 7: {
>  		int i;
>  
> -		do_cpuid_7_mask(entry, 0);
> +		do_cpuid_7_mask(entry);
>  
>  		for (i = 1; i <= entry->eax; i++) {
>  			if (*nent >= maxnent)
> @@ -582,7 +582,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  			do_host_cpuid(&entry[i], function, i);
>  			++*nent;
>  
> -			do_cpuid_7_mask(&entry[i], i);
> +			do_cpuid_7_mask(&entry[i]);
>  		}
>  		break;
>  	}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 15/61] KVM: x86: Refactor CPUID 0x4 and 0x8000001d handling
  2020-02-01 18:51 ` [PATCH 15/61] KVM: x86: Refactor CPUID 0x4 and 0x8000001d handling Sean Christopherson
@ 2020-02-21 14:40   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 14:40 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Refactor the sub-leaf handling for CPUID 0x4/0x8000001d to eliminate
> a one-off variable and its associated brackets.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 16 ++++++----------
>  1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 5044a595799f..d75d539da759 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -545,20 +545,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  		break;
>  	/* functions 4 and 0x8000001d have additional index. */
>  	case 4:
> -	case 0x8000001d: {
> -		int cache_type;
> -
> -		/* read more entries until cache_type is zero */
> -		for (i = 1; ; ++i) {
> -			cache_type = entry[i - 1].eax & 0x1f;
> -			if (!cache_type)
> -				break;
> -
> +	case 0x8000001d:
> +		/*
> +		 * Read entries until the cache type in the previous entry is
> +		 * zero, i.e. indicates an invalid entry.
> +		 */
> +		for (i = 1; entry[i - 1].eax & 0x1f; ++i) {
>  			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
>  				goto out;
>  		}
>  		break;
> -	}
>  	case 6: /* Thermal management */
>  		entry->eax = 0x4; /* allow ARAT */
>  		entry->ebx = 0;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct
  2020-02-01 18:51 ` [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct Sean Christopherson
@ 2020-02-21 14:58   ` Vitaly Kuznetsov
  2020-02-24 21:55     ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 14:58 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Add a struct to hold the array of CPUID entries and its associated
> metadata when handling KVM_GET_SUPPORTED_CPUID.  Look up and provide
> the correct entry in do_host_cpuid(), which eliminates the majority of
> array indexing shenanigans, e.g. entries[i - 1], and generally makes the
> code more readable.  The last array indexing holdout is kvm_get_cpuid(),
> which can't really be avoided without throwing the baby out with the
> bathwater.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 138 ++++++++++++++++++++++++-------------------
>  1 file changed, 76 insertions(+), 62 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index d75d539da759..f9cfc69199f0 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -287,13 +287,21 @@ static __always_inline void cpuid_mask(u32 *word, int wordnum)
>  	*word &= boot_cpu_data.x86_capability[wordnum];
>  }
>  
> -static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_entry2 *entry,
> -					      int *nent, int maxnent,
> +struct kvm_cpuid_array {
> +	struct kvm_cpuid_entry2 *entries;
> +	const int maxnent;
> +	int nent;
> +};
> +
> +static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
>  					      u32 function, u32 index)
>  {
> -	if (*nent >= maxnent)
> +	struct kvm_cpuid_entry2 *entry;
> +
> +	if (array->nent >= array->maxnent)
>  		return NULL;
> -	++*nent;
> +
> +	entry = &array->entries[array->nent++];
>  
>  	entry->function = function;
>  	entry->index = index;
> @@ -325,9 +333,10 @@ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_entry2 *entry,
>  	return entry;
>  }
>  
> -static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
> -				    u32 func, int *nent, int maxnent)
> +static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>  {
> +	struct kvm_cpuid_entry2 *entry = &array->entries[array->nent];
> +
>  	entry->function = func;
>  	entry->index = 0;
>  	entry->flags = 0;
> @@ -335,17 +344,17 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
>  	switch (func) {
>  	case 0:
>  		entry->eax = 7;
> -		++*nent;
> +		++array->nent;
>  		break;
>  	case 1:
>  		entry->ecx = F(MOVBE);
> -		++*nent;
> +		++array->nent;
>  		break;
>  	case 7:
>  		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
>  		entry->eax = 0;
>  		entry->ecx = F(RDPID);
> -		++*nent;
> +		++array->nent;
>  	default:
>  		break;
>  	}
> @@ -436,9 +445,9 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  	}
>  }
>  
> -static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
> -				  int *nent, int maxnent)
> +static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  {
> +	struct kvm_cpuid_entry2 *entry;
>  	int r, i, max_idx;
>  	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
>  #ifdef CONFIG_X86_64
> @@ -514,7 +523,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  
>  	r = -E2BIG;
>  
> -	if (WARN_ON(!do_host_cpuid(entry, nent, maxnent, function, 0)))
> +	entry = do_host_cpuid(array, function, 0);
> +	if (WARN_ON(!entry))
>  		goto out;
>  
>  	switch (function) {
> @@ -539,7 +549,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  		entry->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
>  
>  		for (i = 1, max_idx = entry->eax & 0xff; i < max_idx; ++i) {
> -			if (!do_host_cpuid(&entry[i], nent, maxnent, function, 0))
> +			entry = do_host_cpuid(array, 2, 0);

I'd change this to 
                        entry = do_host_cpuid(array, function, 0);

to match other call sites.

> +			if (!entry)
>  				goto out;
>  		}
>  		break;
> @@ -550,8 +561,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  		 * Read entries until the cache type in the previous entry is
>  		 * zero, i.e. indicates an invalid entry.
>  		 */
> -		for (i = 1; entry[i - 1].eax & 0x1f; ++i) {
> -			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
> +		for (i = 1; entry->eax & 0x1f; ++i) {
> +			entry = do_host_cpuid(array, function, i);
> +			if (!entry)
>  				goto out;
>  		}
>  		break;
> @@ -566,10 +578,11 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  		do_cpuid_7_mask(entry);
>  
>  		for (i = 1, max_idx = entry->eax; i <= max_idx; i++) {
> -			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
> +			entry = do_host_cpuid(array, function, i);
> +			if (!entry)
>  				goto out;
>  
> -			do_cpuid_7_mask(&entry[i]);
> +			do_cpuid_7_mask(entry);
>  		}
>  		break;
>  	case 9:
> @@ -610,15 +623,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  	case 0x1f:
>  	case 0xb:
>  		/*
> -		 * We filled in entry[0] for CPUID(EAX=<function>,
> -		 * ECX=00H) above.  If its level type (ECX[15:8]) is
> -		 * zero, then the leaf is unimplemented, and we're
> -		 * done.  Otherwise, continue to populate entries
> -		 * until the level type (ECX[15:8]) of the previously
> -		 * added entry is zero.
> +		 * Populate entries until the level type (ECX[15:8]) of the
> +		 * previous entry is zero.  Note, CPUID EAX.{0x1f,0xb}.0 is
> +		 * the starting entry, filled by the primary do_host_cpuid().
>  		 */
> -		for (i = 1; entry[i - 1].ecx & 0xff00; ++i) {
> -			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
> +		for (i = 1; entry->ecx & 0xff00; ++i) {
> +			entry = do_host_cpuid(array, function, i);
> +			if (!entry)
>  				goto out;
>  		}
>  		break;
> @@ -633,24 +644,26 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  		if (!supported)
>  			break;
>  
> -		if (!do_host_cpuid(&entry[1], nent, maxnent, function, 1))
> +		entry = do_host_cpuid(array, function, 1);
> +		if (!entry)
>  			goto out;
>  
> -		entry[1].eax &= kvm_cpuid_D_1_eax_x86_features;
> -		cpuid_mask(&entry[1].eax, CPUID_D_1_EAX);
> -		if (entry[1].eax & (F(XSAVES)|F(XSAVEC)))
> -			entry[1].ebx = xstate_required_size(supported, true);
> +		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
> +		cpuid_mask(&entry->eax, CPUID_D_1_EAX);
> +		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
> +			entry->ebx = xstate_required_size(supported, true);
>  		else
> -			entry[1].ebx = 0;
> +			entry->ebx = 0;
>  		/* Saving XSS controlled state via XSAVES isn't supported. */
> -		entry[1].ecx = 0;
> -		entry[1].edx = 0;
> +		entry->ecx = 0;
> +		entry->edx = 0;
>  
> -		for (idx = 2, i = 2; idx < 64; ++idx) {
> +		for (idx = 2; idx < 64; ++idx) {
>  			if (!(supported & BIT_ULL(idx)))
>  				continue;
>  
> -			if (!do_host_cpuid(&entry[i], nent, maxnent, function, idx))
> +			entry = do_host_cpuid(array, function, idx);
> +			if (!entry)
>  				goto out;
>  
>  			/*
> @@ -660,14 +673,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  			 * reach this point, and they should have a non-zero
>  			 * save state size.
>  			 */
> -			if (WARN_ON_ONCE(!entry[i].eax || (entry[i].ecx & 1))) {
> -				--*nent;
> +			if (WARN_ON_ONCE(!entry->eax || (entry->ecx & 1))) {
> +				--array->nent;
>  				continue;
>  			}
>  
> -			entry[i].ecx = 0;
> -			entry[i].edx = 0;
> -			++i;
> +			entry->ecx = 0;
> +			entry->edx = 0;
>  		}
>  		break;
>  	}
> @@ -677,7 +689,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  			break;
>  
>  		for (i = 1, max_idx = entry->eax; i <= max_idx; ++i) {
> -			if (!do_host_cpuid(&entry[i], nent, maxnent, function, i))
> +			if (!do_host_cpuid(array, function, i))
>  				goto out;
>  		}
>  		break;
> @@ -802,22 +814,22 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>  	return r;
>  }
>  
> -static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
> -			 int *nent, int maxnent, unsigned int type)
> +static int do_cpuid_func(struct kvm_cpuid_array *array, u32 func,
> +			 unsigned int type)
>  {
> -	if (*nent >= maxnent)
> +	if (array->nent >= array->maxnent)
>  		return -E2BIG;
>  
>  	if (type == KVM_GET_EMULATED_CPUID)
> -		return __do_cpuid_func_emulated(entry, func, nent, maxnent);
> +		return __do_cpuid_func_emulated(array, func);

Would it make sense to move the 'if (array->nent >= array->maxnent)' check
to __do_cpuid_func_emulated() to match do_host_cpuid()?
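
Something like this, I assume (untested, just to sketch the suggestion):

	 static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
	 {
	-	struct kvm_cpuid_entry2 *entry = &array->entries[array->nent];
	+	struct kvm_cpuid_entry2 *entry;
	+
	+	if (array->nent >= array->maxnent)
	+		return -E2BIG;
	+
	+	entry = &array->entries[array->nent];
	 
	 	entry->function = func;
	 	entry->index = 0;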

>  
> -	return __do_cpuid_func(entry, func, nent, maxnent);
> +	return __do_cpuid_func(array, func);
>  }
>  
>  #define CENTAUR_CPUID_SIGNATURE 0xC0000000
>  
> -static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
> -			  int *nent, int maxnent, unsigned int type)
> +static int get_cpuid_func(struct kvm_cpuid_array *array, u32 func,
> +			  unsigned int type)
>  {
>  	u32 limit;
>  	int r;
> @@ -826,16 +838,16 @@ static int get_cpuid_func(struct kvm_cpuid_entry2 *entries, u32 func,
>  	    boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
>  		return 0;
>  
> -	r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
> +	r = do_cpuid_func(array, func, type);
>  	if (r)
>  		return r;
>  
> -	limit = entries[*nent - 1].eax;
> +	limit = array->entries[array->nent - 1].eax;
>  	for (func = func + 1; func <= limit; ++func) {
> -		if (*nent >= maxnent)
> +		if (array->nent >= array->maxnent)
>  			return -E2BIG;
>  
> -		r = do_cpuid_func(&entries[*nent], func, nent, maxnent, type);
> +		r = do_cpuid_func(array, func, type);


do_cpuid_func() above already does the 'if (array->nent >= array->maxnent)'
check and returns -E2BIG when it fails, so maybe the same check here in
get_cpuid_func() is not needed?
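
i.e., assuming the -E2BIG from do_cpuid_func() is enough, the loop could
presumably shrink to something like (hypothetical, untested):

	limit = array->entries[array->nent - 1].eax;
	for (func = func + 1; func <= limit; ++func) {
		r = do_cpuid_func(array, func, type);
		if (r)
			break;
	}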

>  		if (r)
>  			break;
>  	}
> @@ -878,8 +890,11 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  		0, 0x80000000, CENTAUR_CPUID_SIGNATURE, KVM_CPUID_SIGNATURE,
>  	};
>  
> -	struct kvm_cpuid_entry2 *cpuid_entries;
> -	int nent = 0, r, i;
> +	struct kvm_cpuid_array array = {
> +		.nent = 0,
> +		.maxnent = cpuid->nent,
> +	};
> +	int r, i;
>  
>  	if (cpuid->nent < 1)
>  		return -E2BIG;
> @@ -889,25 +904,24 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  	if (sanity_check_entries(entries, cpuid->nent, type))
>  		return -EINVAL;
>  
> -	cpuid_entries = vzalloc(array_size(sizeof(struct kvm_cpuid_entry2),
> +	array.entries = vzalloc(array_size(sizeof(struct kvm_cpuid_entry2),
>  					   cpuid->nent));
> -	if (!cpuid_entries)
> +	if (!array.entries)
>  		return -ENOMEM;
>  
>  	for (i = 0; i < ARRAY_SIZE(funcs); i++) {
> -		r = get_cpuid_func(cpuid_entries, funcs[i], &nent, cpuid->nent,
> -				   type);
> +		r = get_cpuid_func(&array, funcs[i], type);
>  		if (r)
>  			goto out_free;
>  	}
> -	cpuid->nent = nent;
> +	cpuid->nent = array.nent;
>  
> -	if (copy_to_user(entries, cpuid_entries,
> -			 nent * sizeof(struct kvm_cpuid_entry2)))
> +	if (copy_to_user(entries, array.entries,
> +			 array.nent * sizeof(struct kvm_cpuid_entry2)))
>  		r = -EFAULT;
>  
>  out_free:
> -	vfree(cpuid_entries);
> +	vfree(array.entries);
>  	return r;
>  }

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 18/61] KVM: x86: Use common loop iterator when handling CPUID 0xD.N
  2020-02-01 18:51 ` [PATCH 18/61] KVM: x86: Use common loop iterator when handling CPUID 0xD.N Sean Christopherson
@ 2020-02-21 15:04   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 15:04 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use __do_cpuid_func()'s common loop iterator, "i", when enumerating the
> sub-leafs for CPUID 0xD now that the CPUID 0xD loop doesn't need to
> manually maintain separate counts for the entries index and CPUID index.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 6516fec361c1..bfd8304a8437 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -634,7 +634,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		}
>  		break;
>  	case 0xd: {
> -		int idx;
>  		u64 supported = kvm_supported_xcr0();
>  
>  		entry->eax &= supported;
> @@ -658,11 +657,11 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->ecx = 0;
>  		entry->edx = 0;
>  
> -		for (idx = 2; idx < 64; ++idx) {
> -			if (!(supported & BIT_ULL(idx)))
> +		for (i = 2; i < 64; ++i) {
> +			if (!(supported & BIT_ULL(i)))
>  				continue;
>  
> -			entry = do_host_cpuid(array, function, idx);
> +			entry = do_host_cpuid(array, function, i);
>  			if (!entry)
>  				goto out;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support
  2020-02-01 18:51 ` [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support Sean Christopherson
  2020-02-13 14:25   ` Xiaoyao Li
@ 2020-02-21 15:32   ` Vitaly Kuznetsov
  1 sibling, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 15:32 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Query supported_xcr0 when checking for MPX support instead of invoking
> ->mpx_supported() and drop ->mpx_supported() as kvm_mpx_supported() was
> its last user.  Rename vmx_mpx_supported() to cpu_has_vmx_mpx() to
> better align with VMX/VMCS nomenclature.
>
> Modify VMX's adjustment of xcr0 to call cpu_has_vmx_mpx() (renamed from
> vmx_mpx_supported()) directly to avoid reading supported_xcr0 before
> it's fully configured.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/cpuid.c            | 3 +--
>  arch/x86/kvm/svm.c              | 6 ------
>  arch/x86/kvm/vmx/capabilities.h | 2 +-
>  arch/x86/kvm/vmx/vmx.c          | 3 +--
>  5 files changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 77d206a93658..85f0d96cfeb2 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1163,7 +1163,7 @@ struct kvm_x86_ops {
>  			       enum x86_intercept_stage stage);
>  	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
>  		enum exit_fastpath_completion *exit_fastpath);
> -	bool (*mpx_supported)(void);
> +
>  	bool (*xsaves_supported)(void);
>  	bool (*umip_emulated)(void);
>  	bool (*pt_supported)(void);
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index b9763eb711cb..84006cc4007c 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -47,8 +47,7 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>  
>  bool kvm_mpx_supported(void)
>  {
> -	return ((host_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR))
> -		 && kvm_x86_ops->mpx_supported());
> +	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
>  }
>  EXPORT_SYMBOL_GPL(kvm_mpx_supported);
>  
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index af096c4f9c5f..3c7ddaff405d 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6082,11 +6082,6 @@ static bool svm_invpcid_supported(void)
>  	return false;
>  }
>  
> -static bool svm_mpx_supported(void)
> -{
> -	return false;
> -}
> -
>  static bool svm_xsaves_supported(void)
>  {
>  	return boot_cpu_has(X86_FEATURE_XSAVES);
> @@ -7468,7 +7463,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  
>  	.rdtscp_supported = svm_rdtscp_supported,
>  	.invpcid_supported = svm_invpcid_supported,
> -	.mpx_supported = svm_mpx_supported,
>  	.xsaves_supported = svm_xsaves_supported,
>  	.umip_emulated = svm_umip_emulated,
>  	.pt_supported = svm_pt_supported,
> diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
> index 1a6a99382e94..0a0b1494a934 100644
> --- a/arch/x86/kvm/vmx/capabilities.h
> +++ b/arch/x86/kvm/vmx/capabilities.h
> @@ -100,7 +100,7 @@ static inline bool cpu_has_load_perf_global_ctrl(void)
>  	       (vmcs_config.vmexit_ctrl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL);
>  }
>  
> -static inline bool vmx_mpx_supported(void)
> +static inline bool cpu_has_vmx_mpx(void)
>  {
>  	return (vmcs_config.vmexit_ctrl & VM_EXIT_CLEAR_BNDCFGS) &&
>  		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 32a84ec15064..98fd651f7f7e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7590,7 +7590,7 @@ static __init int hardware_setup(void)
>  		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
>  	}
>  
> -	if (!kvm_mpx_supported())
> +	if (!cpu_has_vmx_mpx())
>  		supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS |
>  				    XFEATURE_MASK_BNDCSR);
>  
> @@ -7857,7 +7857,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  
>  	.check_intercept = vmx_check_intercept,
>  	.handle_exit_irqoff = vmx_handle_exit_irqoff,
> -	.mpx_supported = vmx_mpx_supported,
>  	.xsaves_supported = vmx_xsaves_supported,
>  	.umip_emulated = vmx_umip_emulated,
>  	.pt_supported = vmx_pt_supported,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function
  2020-02-01 18:51 ` [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function Sean Christopherson
  2020-02-13 14:26   ` Xiaoyao Li
@ 2020-02-21 15:33   ` Vitaly Kuznetsov
  1 sibling, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 15:33 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Expose kvm_mpx_supported() as a static inline so that it can be inlined
> in kvm_intel.ko.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 6 ------
>  arch/x86/kvm/cpuid.h | 1 -
>  arch/x86/kvm/x86.h   | 5 +++++
>  3 files changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 84006cc4007c..d3c93b94abc3 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -45,12 +45,6 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>  	return ret;
>  }
>  
> -bool kvm_mpx_supported(void)
> -{
> -	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
> -}
> -EXPORT_SYMBOL_GPL(kvm_mpx_supported);
> -
>  #define F feature_bit
>  
>  int kvm_update_cpuid(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 7366c618aa04..c1ac0995843d 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -7,7 +7,6 @@
>  #include <asm/processor.h>
>  
>  int kvm_update_cpuid(struct kvm_vcpu *vcpu);
> -bool kvm_mpx_supported(void);
>  struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
>  					      u32 function, u32 index);
>  int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 02b49ee49e24..bfac4a80956c 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -283,6 +283,11 @@ enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vc
>  extern u64 host_xcr0;
>  extern u64 supported_xcr0;
>  
> +static inline bool kvm_mpx_supported(void)
> +{
> +	return supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
> +}
> +
>  extern unsigned int min_timer_period_us;
>  
>  extern bool enable_vmware_backdoor;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 23/61] KVM: x86: Clear output regs for CPUID 0x14 if PT isn't exposed to guest
  2020-02-01 18:51 ` [PATCH 23/61] KVM: x86: Clear output regs for CPUID 0x14 if PT isn't exposed to guest Sean Christopherson
@ 2020-02-21 15:36   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 15:36 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Clear the output regs for the main CPUID 0x14 leaf (index=0) if Intel PT
> isn't exposed to the guest.  Leaf 0x14 enumerates Intel PT capabilities
> and should return zeroes if PT is not supported.  Incorrectly reporting
> PT capabilities is essentially a cosmetic error, i.e. doesn't negatively
> affect any known userspace/kernel, as the existence of PT itself is
> correctly enumerated via CPUID 0x7.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index d3c93b94abc3..056faf27b14b 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -663,8 +663,10 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		break;
>  	/* Intel PT */
>  	case 0x14:
> -		if (!f_intel_pt)
> +		if (!f_intel_pt) {
> +			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
>  			break;
> +		}
>  
>  		for (i = 1, max_idx = entry->eax; i <= max_idx; ++i) {
>  			if (!do_host_cpuid(array, function, i))

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 24/61] KVM: x86: Drop explicit @func param from ->set_supported_cpuid()
  2020-02-01 18:51 ` [PATCH 24/61] KVM: x86: Drop explicit @func param from ->set_supported_cpuid() Sean Christopherson
@ 2020-02-21 15:39   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 15:39 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Drop the explicit @func param from ->set_supported_cpuid() and instead
> pull the CPUID function from the relevant entry.  This sets the stage
> for hardening guest CPUID updates in future patches, e.g. allows adding
> run-time assertions that the CPUID feature being changed is actually
> a bit in the referenced CPUID entry.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/cpuid.c            | 2 +-
>  arch/x86/kvm/svm.c              | 4 ++--
>  arch/x86/kvm/vmx/vmx.c          | 4 ++--
>  4 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 85f0d96cfeb2..a61928d5435b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1148,7 +1148,7 @@ struct kvm_x86_ops {
>  
>  	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
>  
> -	void (*set_supported_cpuid)(u32 func, struct kvm_cpuid_entry2 *entry);
> +	void (*set_supported_cpuid)(struct kvm_cpuid_entry2 *entry);
>  
>  	bool (*has_wbinvd_exit)(void);
>  
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 056faf27b14b..e3026fe638aa 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -784,7 +784,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		break;
>  	}
>  
> -	kvm_x86_ops->set_supported_cpuid(function, entry);
> +	kvm_x86_ops->set_supported_cpuid(entry);
>  
>  	r = 0;
>  
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 3c7ddaff405d..535eb746fb0f 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6032,9 +6032,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
>  
>  #define F feature_bit
>  
> -static void svm_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
> +static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
> -	switch (func) {
> +	switch (entry->function) {
>  	case 0x1:
>  		if (avic)
>  			entry->ecx &= ~F(X2APIC);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 98fd651f7f7e..3ff830e2258e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7104,9 +7104,9 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  	}
>  }
>  
> -static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
> +static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
> -	if (func == 1 && nested)
> +	if (entry->function == 1 && nested)
>  		entry->ecx |= feature_bit(VMX);
>  }

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 25/61] KVM: x86: Use u32 for holding CPUID register value in helpers
  2020-02-01 18:51 ` [PATCH 25/61] KVM: x86: Use u32 for holding CPUID register value in helpers Sean Christopherson
@ 2020-02-21 15:43   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 15:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Change the intermediate CPUID output register values from "int" to "u32"
> to match both hardware and the storage type in struct cpuid_reg.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index c1ac0995843d..72a79bdfed6b 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -95,7 +95,7 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
>  	return reverse_cpuid[x86_leaf];
>  }
>  
> -static __always_inline int *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
> +static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
>  {
>  	struct kvm_cpuid_entry2 *entry;
>  	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> @@ -121,7 +121,7 @@ static __always_inline int *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsi
>  
>  static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_feature)
>  {
> -	int *reg;
> +	u32 *reg;
>  
>  	reg = guest_cpuid_get_register(vcpu, x86_feature);
>  	if (!reg)
> @@ -132,7 +132,7 @@ static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_
>  
>  static __always_inline void guest_cpuid_clear(struct kvm_vcpu *vcpu, unsigned x86_feature)
>  {
> -	int *reg;
> +	u32 *reg;
>  
>  	reg = guest_cpuid_get_register(vcpu, x86_feature);
>  	if (reg)

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors
  2020-02-01 18:51 ` [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors Sean Christopherson
  2020-02-14  9:44   ` Xiaoyao Li
@ 2020-02-21 15:57   ` Vitaly Kuznetsov
  2020-02-21 16:29     ` Sean Christopherson
  1 sibling, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 15:57 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Introduce accessors to retrieve feature bits from CPUID entries and use
> the new accessors where applicable.  Using the accessors eliminates the
> need to manually specify the register to be queried at no extra cost
> (binary output is identical) and will allow adding runtime consistency
> checks on the function and index in a future patch.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c |  9 +++++----
>  arch/x86/kvm/cpuid.h | 46 +++++++++++++++++++++++++++++++++++---------
>  2 files changed, 42 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index e3026fe638aa..3316963dad3d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -68,7 +68,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
>  		best->edx |= F(APIC);
>  
>  	if (apic) {
> -		if (best->ecx & F(TSC_DEADLINE_TIMER))
> +		if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))
>  			apic->lapic_timer.timer_mode_mask = 3 << 17;
>  		else
>  			apic->lapic_timer.timer_mode_mask = 1 << 17;
> @@ -96,7 +96,8 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
>  	}
>  
>  	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
> -	if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
> +	if (best && (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
> +		     cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
>  		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
>  
>  	/*
> @@ -155,7 +156,7 @@ static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
>  			break;
>  		}
>  	}
> -	if (entry && (entry->edx & F(NX)) && !is_efer_nx()) {
> +	if (entry && cpuid_entry_has(entry, X86_FEATURE_NX) && !is_efer_nx()) {
>  		entry->edx &= ~F(NX);
>  		printk(KERN_INFO "kvm: guest NX capability removed\n");
>  	}
> @@ -387,7 +388,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  		entry->ebx |= F(TSC_ADJUST);
>  
>  		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
> -		f_la57 = entry->ecx & F(LA57);
> +		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
>  		cpuid_mask(&entry->ecx, CPUID_7_ECX);
>  		/* Set LA57 based on hardware capability. */
>  		entry->ecx |= f_la57;
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 72a79bdfed6b..64e96e4086e2 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -95,16 +95,10 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
>  	return reverse_cpuid[x86_leaf];
>  }
>  
> -static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
> +static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> +						  const struct cpuid_reg *cpuid)
>  {
> -	struct kvm_cpuid_entry2 *entry;
> -	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> -
> -	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
> -	if (!entry)
> -		return NULL;
> -
> -	switch (cpuid.reg) {
> +	switch (cpuid->reg) {
>  	case CPUID_EAX:
>  		return &entry->eax;
>  	case CPUID_EBX:
> @@ -119,6 +113,40 @@ static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsi
>  	}
>  }
>  
> +static __always_inline u32 *cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> +						unsigned x86_feature)

Is it just me who dislikes bare 'unsigned'?

> +{
> +	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> +
> +	return __cpuid_entry_get_reg(entry, &cpuid);
> +}
> +
> +static __always_inline u32 cpuid_entry_get(struct kvm_cpuid_entry2 *entry,
> +					   unsigned x86_feature)
> +{
> +	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
> +
> +	return *reg & __feature_bit(x86_feature);
> +}
> +
> +static __always_inline bool cpuid_entry_has(struct kvm_cpuid_entry2 *entry,
> +					    unsigned x86_feature)
> +{
> +	return cpuid_entry_get(entry, x86_feature);
> +}
> +
> +static __always_inline int *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsigned x86_feature)
> +{
> +	struct kvm_cpuid_entry2 *entry;
> +	const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);
> +
> +	entry = kvm_find_cpuid_entry(vcpu, cpuid.function, cpuid.index);
> +	if (!entry)
> +		return NULL;
> +
> +	return __cpuid_entry_get_reg(entry, &cpuid);
> +}
> +
>  static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, unsigned x86_feature)
>  {
>  	u32 *reg;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors
  2020-02-21 15:57   ` Vitaly Kuznetsov
@ 2020-02-21 16:29     ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-21 16:29 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Fri, Feb 21, 2020 at 04:57:52PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > @@ -119,6 +113,40 @@ static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, unsi
> >  	}
> >  }
> >  
> > +static __always_inline u32 *cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> > +						unsigned x86_feature)
> 
> Is it just me who dislikes bare 'unsigned'?

I don't like it either.  I also don't like getting yelled at by checkpatch.

I used "unsigned" here and throughout to be consistent with the existing
guest_cpuid_*() and x86_feature_cpuid() helpers in cpuid.h.

I will happily add a patch to change those to use "unsigned int" and
then also use "unsigned int" for the cpuid_entry_*() code.
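
For illustration, a minimal sketch of what the reworked helper could end up
looking like (assuming x86_feature_cpuid() gets the same treatment; this is
not part of the posted series):

	static __always_inline u32 *cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
							unsigned int x86_feature)
	{
		const struct cpuid_reg cpuid = x86_feature_cpuid(x86_feature);

		return __cpuid_entry_get_reg(entry, &cpuid);
	}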

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 28/61] KVM: x86: Refactor cpuid_mask() to auto-retrieve the register
  2020-02-01 18:51 ` [PATCH 28/61] KVM: x86: Refactor cpuid_mask() to auto-retrieve the register Sean Christopherson
@ 2020-02-24 13:49   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 13:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use the recently introduced cpuid_entry_get_reg() to automatically get
> the appropriate register when masking a CPUID entry.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 28 +++++++++++++++-------------
>  1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 195f4dcc8c6a..cb5870a323cc 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -254,10 +254,12 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
>  	return r;
>  }
>  
> -static __always_inline void cpuid_mask(u32 *word, int wordnum)
> +static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
> +					     enum cpuid_leafs leaf)
>  {
> -	reverse_cpuid_check(wordnum);
> -	*word &= boot_cpu_data.x86_capability[wordnum];
> +	u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
> +
> +	*reg &= boot_cpu_data.x86_capability[leaf];
>  }
>  
>  struct kvm_cpuid_array {
> @@ -373,13 +375,13 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  	case 0:
>  		entry->eax = min(entry->eax, 1u);
>  		entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
> -		cpuid_mask(&entry->ebx, CPUID_7_0_EBX);
> +		cpuid_entry_mask(entry, CPUID_7_0_EBX);
>  		/* TSC_ADJUST is emulated */
>  		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
>  
>  		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
>  		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
> -		cpuid_mask(&entry->ecx, CPUID_7_ECX);
> +		cpuid_entry_mask(entry, CPUID_7_ECX);
>  		/* Set LA57 based on hardware capability. */
>  		entry->ecx |= f_la57;
>  		entry->ecx |= f_umip;
> @@ -389,7 +391,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  			cpuid_entry_clear(entry, X86_FEATURE_PKU);
>  
>  		entry->edx &= kvm_cpuid_7_0_edx_x86_features;
> -		cpuid_mask(&entry->edx, CPUID_7_EDX);
> +		cpuid_entry_mask(entry, CPUID_7_EDX);
>  		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
>  			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
>  		if (boot_cpu_has(X86_FEATURE_STIBP))
> @@ -507,9 +509,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		break;
>  	case 1:
>  		entry->edx &= kvm_cpuid_1_edx_x86_features;
> -		cpuid_mask(&entry->edx, CPUID_1_EDX);
> +		cpuid_entry_mask(entry, CPUID_1_EDX);
>  		entry->ecx &= kvm_cpuid_1_ecx_x86_features;
> -		cpuid_mask(&entry->ecx, CPUID_1_ECX);
> +		cpuid_entry_mask(entry, CPUID_1_ECX);
>  		/* we support x2apic emulation even if host does not support
>  		 * it since we emulate x2apic in software */
>  		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
> @@ -619,7 +621,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  			goto out;
>  
>  		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
> -		cpuid_mask(&entry->eax, CPUID_D_1_EAX);
> +		cpuid_entry_mask(entry, CPUID_D_1_EAX);
>  		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
>  			entry->ebx = xstate_required_size(supported_xcr0, true);
>  		else
> @@ -699,9 +701,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		break;
>  	case 0x80000001:
>  		entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
> -		cpuid_mask(&entry->edx, CPUID_8000_0001_EDX);
> +		cpuid_entry_mask(entry, CPUID_8000_0001_EDX);
>  		entry->ecx &= kvm_cpuid_8000_0001_ecx_x86_features;
> -		cpuid_mask(&entry->ecx, CPUID_8000_0001_ECX);
> +		cpuid_entry_mask(entry, CPUID_8000_0001_ECX);
>  		break;
>  	case 0x80000007: /* Advanced power management */
>  		/* invariant TSC is CPUID.80000007H:EDX[8] */
> @@ -720,7 +722,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = g_phys_as | (virt_as << 8);
>  		entry->edx = 0;
>  		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> -		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
> +		cpuid_entry_mask(entry, CPUID_8000_0008_EBX);
>  		/*
>  		 * AMD has separate bits for each SPEC_CTRL bit.
>  		 * arch/x86/kernel/cpu/bugs.c is kind enough to
> @@ -763,7 +765,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		break;
>  	case 0xC0000001:
>  		entry->edx &= kvm_cpuid_C000_0001_edx_x86_features;
> -		cpuid_mask(&entry->edx, CPUID_C000_0001_EDX);
> +		cpuid_entry_mask(entry, CPUID_C000_0001_EDX);
>  		break;
>  	case 3: /* Processor serial number */
>  	case 5: /* MONITOR/MWAIT */


Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups
  2020-02-01 18:51 ` [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups Sean Christopherson
@ 2020-02-24 13:54   ` Vitaly Kuznetsov
  2020-02-24 22:46     ` Sean Christopherson
  2020-02-25 15:00     ` Paolo Bonzini
  0 siblings, 2 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 13:54 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Add WARNs in the low level __cpuid_entry_get_reg() to assert that the
> function and index of the CPUID entry and reverse CPUID entry match.
> Wrap the WARNs in a new Kconfig, KVM_CPUID_AUDIT, as the checks add
> almost no value in a production environment, i.e. will only detect
> blatant KVM bugs and fatal hardware errors.  Add a Kconfig instead of
> simply wrapping the WARNs with an off-by-default #ifdef so that syzbot
> and other automated testing can enable the auditing.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/Kconfig | 10 ++++++++++
>  arch/x86/kvm/cpuid.h |  5 +++++
>  2 files changed, 15 insertions(+)
>
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index 840e12583b85..bbbc3258358e 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -96,6 +96,16 @@ config KVM_MMU_AUDIT
>  	 This option adds a R/W kVM module parameter 'mmu_audit', which allows
>  	 auditing of KVM MMU events at runtime.
>  
> +config KVM_CPUID_AUDIT
> +	bool "Audit KVM reverse CPUID lookups"
> +	depends on KVM
> +	help
> +	 This option enables runtime checking of reverse CPUID lookups in KVM
> +	 to verify the function and index of the referenced X86_FEATURE_* match
> +	 the function and index of the CPUID entry being accessed.
> +
> +	 If unsure, say N.
> +
>  # OK, it's a little counter-intuitive to do this, but it puts it neatly under
>  # the virtualization menu.
>  source "drivers/vhost/Kconfig"
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 51f19eade5a0..41ff94a7d3e0 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -98,6 +98,11 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
>  static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
>  						  const struct cpuid_reg *cpuid)
>  {
> +#ifdef CONFIG_KVM_CPUID_AUDIT
> +	WARN_ON_ONCE(entry->function != cpuid->function);
> +	WARN_ON_ONCE(entry->index != cpuid->index);
> +#endif
> +
>  	switch (cpuid->reg) {
>  	case CPUID_EAX:
>  		return &entry->eax;

Honestly, I was thinking we should BUG_ON() here, even in production builds,
but not everyone around is so rebellious I guess, so

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code
  2020-02-01 18:51 ` [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code Sean Christopherson
  2020-02-13 13:51   ` Xiaoyao Li
@ 2020-02-24 15:14   ` Vitaly Kuznetsov
  2020-02-24 15:45     ` Sean Christopherson
  1 sibling, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:14 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the MPX CPUID adjustments into VMX to eliminate an instance of the
> undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
> common CPUID handling code.
>
> Note, VMX must manually check for kernel support via
> boot_cpu_has(X86_FEATURE_MPX).
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c   |  3 +--
>  arch/x86/kvm/vmx/vmx.c | 14 ++++++++++++--
>  2 files changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index cb5870a323cc..09e24d1d731c 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -340,7 +340,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>  static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  {
>  	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
> -	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
>  	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
>  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  	unsigned f_la57;
> @@ -349,7 +348,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  	/* cpuid 7.0.ebx */
>  	const u32 kvm_cpuid_7_0_ebx_x86_features =
>  		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> -		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | f_mpx | F(RDSEED) |
> +		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
>  		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
>  		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
>  		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3ff830e2258e..143193fc178e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7106,8 +7106,18 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  
>  static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
> -	if (entry->function == 1 && nested)
> -		entry->ecx |= feature_bit(VMX);
> +	switch (entry->function) {
> +	case 0x1:
> +		if (nested)
> +			cpuid_entry_set(entry, X86_FEATURE_VMX);
> +		break;
> +	case 0x7:
> +		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
> +			cpuid_entry_set(entry, X86_FEATURE_MPX);
> +		break;
> +	default:
> +		break;
> +	}
>  }
>  
>  static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)

The word 'must' in the description seems to work like a trigger for
reviewers, their brains automatically turn into 'and what if not?' mode
:-)

So do I understand correctly that kvm_mpx_supported() (which checks for
XFEATURE_MASK_BNDREGS/XFEATURE_MASK_BNDCSR) may actually return true
while 'boot_cpu_has(X86_FEATURE_MPX)' is false? Is this done on purpose,
i.e. why don't we filter these out from vmcs_config early, similar to
SVM?

The patch itself looks good, so
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 31/61] KVM: x86: Handle INVPCID CPUID adjustment in VMX code
  2020-02-01 18:51 ` [PATCH 31/61] KVM: x86: Handle INVPCID " Sean Christopherson
@ 2020-02-24 15:19   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the INVPCID CPUID adjustments into VMX to eliminate an instance of
> the undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
> common CPUID handling code.  Drop ->invpcid_supported(), CPUID
> adjustment was the only user.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 -
>  arch/x86/kvm/cpuid.c            |  3 +--
>  arch/x86/kvm/svm.c              |  6 ------
>  arch/x86/kvm/vmx/vmx.c          | 10 +++-------
>  4 files changed, 4 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index a61928d5435b..9baff70ad419 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1144,7 +1144,6 @@ struct kvm_x86_ops {
>  	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
>  	int (*get_lpage_level)(void);
>  	bool (*rdtscp_supported)(void);
> -	bool (*invpcid_supported)(void);
>  
>  	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
>  
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 09e24d1d731c..a5f150204d73 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -339,7 +339,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>  
>  static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  {
> -	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
>  	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
>  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  	unsigned f_la57;
> @@ -348,7 +347,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  	/* cpuid 7.0.ebx */
>  	const u32 kvm_cpuid_7_0_ebx_x86_features =
>  		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> -		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
> +		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
>  		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
>  		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
>  		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 7bb5d81f0f11..c0f8c09f3b04 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6074,11 +6074,6 @@ static bool svm_rdtscp_supported(void)
>  	return boot_cpu_has(X86_FEATURE_RDTSCP);
>  }
>  
> -static bool svm_invpcid_supported(void)
> -{
> -	return false;
> -}
> -
>  static bool svm_xsaves_supported(void)
>  {
>  	return boot_cpu_has(X86_FEATURE_XSAVES);
> @@ -7459,7 +7454,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  	.cpuid_update = svm_cpuid_update,
>  
>  	.rdtscp_supported = svm_rdtscp_supported,
> -	.invpcid_supported = svm_invpcid_supported,
>  	.xsaves_supported = svm_xsaves_supported,
>  	.umip_emulated = svm_umip_emulated,
>  	.pt_supported = svm_pt_supported,
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 143193fc178e..49ee4c600934 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1656,11 +1656,6 @@ static bool vmx_rdtscp_supported(void)
>  	return cpu_has_vmx_rdtscp();
>  }
>  
> -static bool vmx_invpcid_supported(void)
> -{
> -	return cpu_has_vmx_invpcid();
> -}
> -
>  /*
>   * Swap MSR entry in host/guest MSR entry array.
>   */
> @@ -4071,7 +4066,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
>  		}
>  	}
>  
> -	if (vmx_invpcid_supported()) {
> +	if (cpu_has_vmx_invpcid()) {
>  		/* Exposing INVPCID only when PCID is exposed */
>  		bool invpcid_enabled =
>  			guest_cpuid_has(vcpu, X86_FEATURE_INVPCID) &&
> @@ -7114,6 +7109,8 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  	case 0x7:
>  		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
>  			cpuid_entry_set(entry, X86_FEATURE_MPX);
> +		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
> +			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
>  		break;
>  	default:
>  		break;
> @@ -7854,7 +7851,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  	.cpuid_update = vmx_cpuid_update,
>  
>  	.rdtscp_supported = vmx_rdtscp_supported,
> -	.invpcid_supported = vmx_invpcid_supported,
>  
>  	.set_supported_cpuid = vmx_set_supported_cpuid,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 32/61] KVM: x86: Handle UMIP emulation CPUID adjustment in VMX code
  2020-02-01 18:51 ` [PATCH 32/61] KVM: x86: Handle UMIP emulation " Sean Christopherson
@ 2020-02-24 15:21   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:21 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the CPUID adjustment for UMIP emulation into VMX code to eliminate
> an instance of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
> pattern in the common CPUID handling code.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c   | 2 --
>  arch/x86/kvm/vmx/vmx.c | 2 ++
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index a5f150204d73..202a6c0f1db8 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -339,7 +339,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>  
>  static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  {
> -	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
>  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  	unsigned f_la57;
>  	unsigned f_pku = kvm_x86_ops->pku_supported() ? F(PKU) : 0;
> @@ -382,7 +381,6 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  		cpuid_entry_mask(entry, CPUID_7_ECX);
>  		/* Set LA57 based on hardware capability. */
>  		entry->ecx |= f_la57;
> -		entry->ecx |= f_umip;
>  		entry->ecx |= f_pku;
>  		/* PKU is not yet implemented for shadow paging. */
>  		if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 49ee4c600934..9d2e36a5ecb9 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7111,6 +7111,8 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  			cpuid_entry_set(entry, X86_FEATURE_MPX);
>  		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
>  			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
> +		if (vmx_umip_emulated())
> +			cpuid_entry_set(entry, X86_FEATURE_UMIP);
>  		break;
>  	default:
>  		break;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 33/61] KVM: x86: Handle PKU CPUID adjustment in VMX code
  2020-02-01 18:51 ` [PATCH 33/61] KVM: x86: Handle PKU " Sean Christopherson
@ 2020-02-24 15:24   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:24 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the setting of the PKU CPUID bit into VMX to eliminate an instance
> of the undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in
> the common CPUID handling code.  Drop ->pku_supported(), CPUID
> adjustment was the only user.
>
> Note, some AMD CPUs now support PKU, but SVM doesn't yet support
> exposing it to a guest.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 -
>  arch/x86/kvm/cpuid.c            | 5 -----
>  arch/x86/kvm/svm.c              | 6 ------
>  arch/x86/kvm/vmx/capabilities.h | 5 -----
>  arch/x86/kvm/vmx/vmx.c          | 6 +++++-
>  5 files changed, 5 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9baff70ad419..ba828569cda5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1166,7 +1166,6 @@ struct kvm_x86_ops {
>  	bool (*xsaves_supported)(void);
>  	bool (*umip_emulated)(void);
>  	bool (*pt_supported)(void);
> -	bool (*pku_supported)(void);
>  
>  	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
>  	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 202a6c0f1db8..a1f46b3ca16e 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -341,7 +341,6 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  {
>  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  	unsigned f_la57;
> -	unsigned f_pku = kvm_x86_ops->pku_supported() ? F(PKU) : 0;
>  
>  	/* cpuid 7.0.ebx */
>  	const u32 kvm_cpuid_7_0_ebx_x86_features =
> @@ -381,10 +380,6 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  		cpuid_entry_mask(entry, CPUID_7_ECX);
>  		/* Set LA57 based on hardware capability. */
>  		entry->ecx |= f_la57;
> -		entry->ecx |= f_pku;
> -		/* PKU is not yet implemented for shadow paging. */
> -		if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
> -			cpuid_entry_clear(entry, X86_FEATURE_PKU);
>  
>  		entry->edx &= kvm_cpuid_7_0_edx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_7_EDX);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index c0f8c09f3b04..630520f8adfa 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6094,11 +6094,6 @@ static bool svm_has_wbinvd_exit(void)
>  	return true;
>  }
>  
> -static bool svm_pku_supported(void)
> -{
> -	return false;
> -}
> -
>  #define PRE_EX(exit)  { .exit_code = (exit), \
>  			.stage = X86_ICPT_PRE_EXCEPT, }
>  #define POST_EX(exit) { .exit_code = (exit), \
> @@ -7457,7 +7452,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  	.xsaves_supported = svm_xsaves_supported,
>  	.umip_emulated = svm_umip_emulated,
>  	.pt_supported = svm_pt_supported,
> -	.pku_supported = svm_pku_supported,
>  
>  	.set_supported_cpuid = svm_set_supported_cpuid,
>  
> diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
> index 0a0b1494a934..7cae355e3490 100644
> --- a/arch/x86/kvm/vmx/capabilities.h
> +++ b/arch/x86/kvm/vmx/capabilities.h
> @@ -145,11 +145,6 @@ static inline bool vmx_umip_emulated(void)
>  		SECONDARY_EXEC_DESC;
>  }
>  
> -static inline bool vmx_pku_supported(void)
> -{
> -	return boot_cpu_has(X86_FEATURE_PKU);
> -}
> -
>  static inline bool cpu_has_vmx_rdtscp(void)
>  {
>  	return vmcs_config.cpu_based_2nd_exec_ctrl &
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 9d2e36a5ecb9..a9728cc0c343 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7113,6 +7113,11 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
>  		if (vmx_umip_emulated())
>  			cpuid_entry_set(entry, X86_FEATURE_UMIP);
> +
> +		/* PKU is not yet implemented for shadow paging. */
> +		if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
> +		    boot_cpu_has(X86_FEATURE_OSPKE))
> +			cpuid_entry_set(entry, X86_FEATURE_PKU);
>  		break;
>  	default:
>  		break;
> @@ -7868,7 +7873,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  	.xsaves_supported = vmx_xsaves_supported,
>  	.umip_emulated = vmx_umip_emulated,
>  	.pt_supported = vmx_pt_supported,
> -	.pku_supported = vmx_pku_supported,
>  
>  	.request_immediate_exit = vmx_request_immediate_exit,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 34/61] KVM: x86: Handle RDTSCP CPUID adjustment in VMX code
  2020-02-01 18:51 ` [PATCH 34/61] KVM: x86: Handle RDTSCP " Sean Christopherson
@ 2020-02-24 15:28   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:28 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the clearing of the RDTSCP CPUID bit into VMX, which has a separate
> VMCS control to enable RDTSCP in non-root, to eliminate an instance of
> the undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
> common CPUID handling code.  Drop ->rdtscp_supported() since CPUID
> adjustment was the last remaining user.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c   | 3 +--
>  arch/x86/kvm/vmx/vmx.c | 4 ++++
>  2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index a1f46b3ca16e..fc507270f3f3 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -424,7 +424,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  	unsigned f_gbpages = 0;
>  	unsigned f_lm = 0;
>  #endif
> -	unsigned f_rdtscp = kvm_x86_ops->rdtscp_supported() ? F(RDTSCP) : 0;
>  	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
>  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  
> @@ -446,7 +445,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
>  		F(PAT) | F(PSE36) | 0 /* Reserved */ |
>  		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
> -		F(FXSR) | F(FXSR_OPT) | f_gbpages | f_rdtscp |
> +		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
>  		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW);
>  	/* cpuid 1.ecx */
>  	const u32 kvm_cpuid_1_ecx_x86_features =
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index a9728cc0c343..3990ba691d07 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7119,6 +7119,10 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  		    boot_cpu_has(X86_FEATURE_OSPKE))
>  			cpuid_entry_set(entry, X86_FEATURE_PKU);
>  		break;
> +	case 0x80000001:
> +		if (!cpu_has_vmx_rdtscp())
> +			cpuid_entry_clear(entry, X86_FEATURE_RDTSCP);
> +		break;
>  	default:
>  		break;
>  	}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 35/61] KVM: x86: Handle Intel PT CPUID adjustment in VMX code
  2020-02-01 18:51 ` [PATCH 35/61] KVM: x86: Handle Intel PT " Sean Christopherson
@ 2020-02-24 15:30   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:30 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the Processor Trace CPUID adjustment into VMX code to eliminate
> an instance of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
> pattern in the common CPUID handling code, and to pave the way toward
> eventually removing ->pt_supported().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c   | 3 +--
>  arch/x86/kvm/vmx/vmx.c | 3 +++
>  2 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index fc507270f3f3..f4a3655451dd 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -339,7 +339,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>  
>  static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  {
> -	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  	unsigned f_la57;
>  
>  	/* cpuid 7.0.ebx */
> @@ -348,7 +347,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
>  		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
>  		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
> -		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
> +		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/;
>  
>  	/* cpuid 7.0.ecx*/
>  	const u32 kvm_cpuid_7_0_ecx_x86_features =
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3990ba691d07..fcec3d8a0176 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7111,6 +7111,9 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  			cpuid_entry_set(entry, X86_FEATURE_MPX);
>  		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
>  			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
> +		if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
> +		    vmx_pt_mode_is_host_guest())
> +			cpuid_entry_set(entry, X86_FEATURE_INTEL_PT);
>  		if (vmx_umip_emulated())
>  			cpuid_entry_set(entry, X86_FEATURE_UMIP);

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 36/61] KVM: x86: Handle GBPAGE CPUID adjustment for EPT in VMX code
  2020-02-01 18:51 ` [PATCH 36/61] KVM: x86: Handle GBPAGE CPUID adjustment for EPT " Sean Christopherson
@ 2020-02-24 15:34   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:34 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the clearing of the GBPAGE CPUID bit into VMX to eliminate an
> instance of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
> pattern in the common CPUID handling code, and to pave the way toward
> eliminating ->get_lpage_level().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c   | 3 +--
>  arch/x86/kvm/vmx/vmx.c | 2 ++
>  2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index f4a3655451dd..c74253202af8 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -416,8 +416,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  	int r, i, max_idx;
>  	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
>  #ifdef CONFIG_X86_64
> -	unsigned f_gbpages = (kvm_x86_ops->get_lpage_level() == PT_PDPE_LEVEL)
> -				? F(GBPAGES) : 0;
> +	unsigned f_gbpages = F(GBPAGES);
>  	unsigned f_lm = F(LM);
>  #else
>  	unsigned f_gbpages = 0;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index fcec3d8a0176..11b9c1e7e520 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7125,6 +7125,8 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  	case 0x80000001:
>  		if (!cpu_has_vmx_rdtscp())
>  			cpuid_entry_clear(entry, X86_FEATURE_RDTSCP);
> +		if (enable_ept && !cpu_has_vmx_ept_1g_page())
> +			cpuid_entry_clear(entry, X86_FEATURE_GBPAGES);
>  		break;
>  	default:
>  		break;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 37/61] KVM: x86: Refactor handling of XSAVES CPUID adjustment
  2020-02-01 18:51 ` [PATCH 37/61] KVM: x86: Refactor handling of XSAVES CPUID adjustment Sean Christopherson
@ 2020-02-24 15:39   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 15:39 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Invert the handling of XSAVES, i.e. set it based on boot_cpu_has() by
> default, in preparation for adding KVM cpu caps, which will generate the
> mask at load time before ->xsaves_supported() is ready.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index c74253202af8..20a7af320291 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -422,7 +422,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  	unsigned f_gbpages = 0;
>  	unsigned f_lm = 0;
>  #endif
> -	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
>  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  
>  	/* cpuid 1.edx */
> @@ -479,7 +478,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  
>  	/* cpuid 0xD.1.eax */
>  	const u32 kvm_cpuid_D_1_eax_x86_features =
> -		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | f_xsaves;
> +		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES);
>  
>  	/* all calls to cpuid_count() should be made on the same cpu */
>  	get_cpu();
> @@ -610,6 +609,10 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  
>  		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
>  		cpuid_entry_mask(entry, CPUID_D_1_EAX);
> +
> +		if (!kvm_x86_ops->xsaves_supported())
> +			cpuid_entry_clear(entry, X86_FEATURE_XSAVES);
> +
>  		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
>  			entry->ebx = xstate_required_size(supported_xcr0, true);
>  		else

I was going to ask if this can be moved to set_supported_cpuid() for
both VMX and SVM, but then I realized this is just a temporary change.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code
  2020-02-24 15:14   ` Vitaly Kuznetsov
@ 2020-02-24 15:45     ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 15:45 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Mon, Feb 24, 2020 at 04:14:56PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Move the MPX CPUID adjustments into VMX to eliminate an instance of the
> > undesirable "unsigned f_* = *_supported ? F(*) : 0" pattern in the
> > common CPUID handling code.
> >
> > Note, VMX must manually check for kernel support via
> > boot_cpu_has(X86_FEATURE_MPX).
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.c   |  3 +--
> >  arch/x86/kvm/vmx/vmx.c | 14 ++++++++++++--
> >  2 files changed, 13 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index cb5870a323cc..09e24d1d731c 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -340,7 +340,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
> >  static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> >  {
> >  	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
> > -	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
> >  	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
> >  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
> >  	unsigned f_la57;
> > @@ -349,7 +348,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> >  	/* cpuid 7.0.ebx */
> >  	const u32 kvm_cpuid_7_0_ebx_x86_features =
> >  		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> > -		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | f_mpx | F(RDSEED) |
> > +		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
> >  		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
> >  		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
> >  		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | f_intel_pt;
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 3ff830e2258e..143193fc178e 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7106,8 +7106,18 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
> >  
> >  static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
> >  {
> > -	if (entry->function == 1 && nested)
> > -		entry->ecx |= feature_bit(VMX);
> > +	switch (entry->function) {
> > +	case 0x1:
> > +		if (nested)
> > +			cpuid_entry_set(entry, X86_FEATURE_VMX);
> > +		break;
> > +	case 0x7:
> > +		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
> > +			cpuid_entry_set(entry, X86_FEATURE_MPX);
> > +		break;
> > +	default:
> > +		break;
> > +	}
> >  }
> >  
> >  static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
> 
> The word 'must' in the description seems to work like a trigger for
> reviewers, their brains automatically turn into 'and what if not?' mode
> :-)

This is the second time that sentence has caused confusion, I definitely
need to tweak the changelog.  It's supposed to say something like:

  Note, to maintain existing behavior, VMX must manually check for kernel
  support for MPX by querying boot_cpu_has(X86_FEATURE_MPX).  Previously,
  do_cpuid_7_mask() masked MPX based on boot_cpu_data by invoking
  cpuid_mask() on the associated cpufeatures word, but cpuid_mask() runs
  prior to executing vmx_set_supported_cpuid().
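
Condensed, the ordering that makes the explicit check necessary is roughly
the following (a simplified sketch of the flow, not the literal call chain):

	/* Common CPUID.7 handling masks EBX against boot_cpu_data first. */
	do_cpuid_7_mask(entry);				/* cpuid_entry_mask(entry, CPUID_7_0_EBX) */

	/* The vendor hook runs afterwards, so it must re-check the host. */
	kvm_x86_ops->set_supported_cpuid(entry);	/* -> vmx_set_supported_cpuid() */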
 
> So do I understand correctly that kvm_mpx_supported() (which checks for
> XFEATURE_MASK_BNDREGS/XFEATURE_MASK_BNDCSR) may actually return true
> while 'boot_cpu_has(X86_FEATURE_MPX)' is false?

Yes.  The VMCS capabilities and host capabilities are tracked separately.
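
As a purely hypothetical illustration of that divergence (not code from the
series), the two views could disagree like so:

	/* VMCS/XSAVE-derived capability vs. the host kernel's view. */
	if (kvm_mpx_supported() && !boot_cpu_has(X86_FEATURE_MPX))
		pr_info("MPX usable by KVM despite being masked in boot_cpu_data\n");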

> Is this done on purpose, i.e. why don't we filter these out from vmcs_config
> early, similar to SVM?

Most (all?) SVM features that are conditionally available are enumerated
via CPUID, and thus are naturally reflected in boot_cpu_data.

VMX enumerates its features via MSRs, which, except for a few synthetic
flags in word 8 that are maintained for ABI compatibility, aren't reflected
in boot_cpu_data.  It would be possible to update the global vmcs_config,
but separating vmcs_config from boot_cpu_data has a few advantages:

  - Allows KVM full control over using features, e.g. EPT can be toggled
    simply by reloading kvm_intel, whereas controlling it via boot_cpu_data
    would require a host reboot.

  - Instructions like RDSEED, RDRAND and ENCLS are executable in VMX
    non-root by default, e.g. KVM needs to know that RDRAND-exiting is
    supported in hardware even if it's "disabled" in the host so that KVM
    can set the exiting control to intercept RDRAND and inject #UD, as
    sketched below.
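
A rough sketch of that RDRAND case (illustrative only, not the actual VMX
exit handler): with RDRAND hidden from the guest, KVM enables the
RDRAND-exiting control and turns the resulting VM-exit into a #UD, which
requires knowing that the exiting control exists in hardware independently
of boot_cpu_data:

	static int handle_rdrand_ud(struct kvm_vcpu *vcpu)
	{
		/* Guest executed RDRAND while it's hidden: #UD, as bare metal would. */
		kvm_queue_exception(vcpu, UD_VECTOR);
		return 1;
	}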

> 
> The patch itself looks good, so
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> 
> -- 
> Vitaly
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking
  2020-02-01 18:51 ` [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking Sean Christopherson
@ 2020-02-24 16:32   ` Vitaly Kuznetsov
  2020-02-24 22:57     ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 16:32 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Calculate the CPUID masks for KVM_GET_SUPPORTED_CPUID at load time using
> what is effectively a KVM-adjusted copy of boot_cpu_data, or more
> precisely, the x86_capability array in boot_cpu_data.
>
> In terms of KVM support, the vast majority of CPUID feature bits are
> constant, and *all* feature support is known at KVM load time.  Rather
> than apply boot_cpu_data, which is effectively read-only after init,
> at runtime, copy it into a KVM-specific array and use *that* to mask
> CPUID registers.
>
> In additional to consolidating the masking, kvm_cpu_caps can be adjusted
> by SVM/VMX at load time and thus eliminate all feature bit manipulation
> in ->set_supported_cpuid().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 229 +++++++++++++++++++++++--------------------
>  arch/x86/kvm/cpuid.h |  19 ++++
>  arch/x86/kvm/x86.c   |   2 +
>  3 files changed, 142 insertions(+), 108 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 20a7af320291..c2a4c9df49a9 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -24,6 +24,13 @@
>  #include "trace.h"
>  #include "pmu.h"
>  
> +/*
> + * Unlike "struct cpuinfo_x86.x86_capability", kvm_cpu_caps doesn't need to be
> + * aligned to sizeof(unsigned long) because it's not accessed via bitops.
> + */
> +u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
> +EXPORT_SYMBOL_GPL(kvm_cpu_caps);
> +
>  static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>  {
>  	int feature_bit = 0;
> @@ -259,7 +266,119 @@ static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
>  {
>  	u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
>  
> -	*reg &= boot_cpu_data.x86_capability[leaf];
> +	BUILD_BUG_ON(leaf > ARRAY_SIZE(kvm_cpu_caps));

Should this be '>=' ?

> +	*reg &= kvm_cpu_caps[leaf];
> +}
> +
> +static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
> +{
> +	reverse_cpuid_check(leaf);
> +	kvm_cpu_caps[leaf] &= mask;
> +}
> +
> +void kvm_set_cpu_caps(void)
> +{
> +	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
> +#ifdef CONFIG_X86_64
> +	unsigned f_gbpages = F(GBPAGES);
> +	unsigned f_lm = F(LM);
> +#else
> +	unsigned f_gbpages = 0;
> +	unsigned f_lm = 0;
> +#endif

Three too many bare 'unsigned's :-)

> +
> +	BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
> +		     sizeof(boot_cpu_data.x86_capability));
> +
> +	memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
> +	       sizeof(kvm_cpu_caps));
> +
> +	kvm_cpu_cap_mask(CPUID_1_EDX,
> +		F(FPU) | F(VME) | F(DE) | F(PSE) |
> +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
> +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> +		F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
> +		0 /* Reserved, DS, ACPI */ | F(MMX) |
> +		F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
> +		0 /* HTT, TM, Reserved, PBE */
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_8000_0001_EDX,
> +		F(FPU) | F(VME) | F(DE) | F(PSE) |
> +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
> +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> +		F(PAT) | F(PSE36) | 0 /* Reserved */ |
> +		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
> +		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
> +		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW)
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_1_ECX,
> +		/* NOTE: MONITOR (and MWAIT) are emulated as NOP,
> +		 * but *not* advertised to guests via CPUID ! */
> +		F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
> +		0 /* DS-CPL, VMX, SMX, EST */ |
> +		0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> +		F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> +		F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
> +		F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
> +		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
> +		F(F16C) | F(RDRAND)
> +	);

I would suggest we order things by CPUID_NUM here, i.e.

CPUID_1_ECX
CPUID_1_EDX
CPUID_7_1_EAX
CPUID_7_0_EBX
CPUID_7_ECX
CPUID_7_EDX
CPUID_D_1_EAX
...

> +
> +	kvm_cpu_cap_mask(CPUID_7_0_EBX,
> +		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> +		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
> +		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
> +		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
> +		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_7_ECX,
> +		F(AVX512VBMI) | F(LA57) | 0 /*PKU*/ | 0 /*OSPKE*/ | F(RDPID) |
> +		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
> +		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
> +		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/
> +	);
> +	/* Set LA57 based on hardware capability. */
> +	if (cpuid_ecx(7) & F(LA57))
> +		kvm_cpu_cap_set(X86_FEATURE_LA57);
> +
> +	kvm_cpu_cap_mask(CPUID_7_EDX,
> +		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
> +		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
> +		F(MD_CLEAR)
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_7_1_EAX,
> +		F(AVX512_BF16)
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_D_1_EAX,
> +		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES)
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
> +		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
> +		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
> +		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
> +		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) |
> +		F(TOPOEXT) | F(PERFCTR_CORE)
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_8000_0008_EBX,
> +		F(CLZERO) | F(XSAVEERPTR) |
> +		F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
> +		F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON)
> +	);
> +
> +	kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
> +		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> +		F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
> +		F(PMM) | F(PMM_EN)
> +	);
>  }
>  
>  struct kvm_cpuid_array {
> @@ -339,48 +458,13 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>  
>  static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  {
> -	unsigned f_la57;
> -
> -	/* cpuid 7.0.ebx */
> -	const u32 kvm_cpuid_7_0_ebx_x86_features =
> -		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> -		F(BMI2) | F(ERMS) | 0 /*INVPCID*/ | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
> -		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
> -		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
> -		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/;
> -
> -	/* cpuid 7.0.ecx*/
> -	const u32 kvm_cpuid_7_0_ecx_x86_features =
> -		F(AVX512VBMI) | F(LA57) | 0 /*PKU*/ | 0 /*OSPKE*/ | F(RDPID) |
> -		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
> -		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
> -		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/;
> -
> -	/* cpuid 7.0.edx*/
> -	const u32 kvm_cpuid_7_0_edx_x86_features =
> -		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
> -		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
> -		F(MD_CLEAR);
> -
> -	/* cpuid 7.1.eax */
> -	const u32 kvm_cpuid_7_1_eax_x86_features =
> -		F(AVX512_BF16);
> -
>  	switch (entry->index) {
>  	case 0:
>  		entry->eax = min(entry->eax, 1u);
> -		entry->ebx &= kvm_cpuid_7_0_ebx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_7_0_EBX);
>  		/* TSC_ADJUST is emulated */
>  		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
> -
> -		entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
> -		f_la57 = cpuid_entry_get(entry, X86_FEATURE_LA57);
>  		cpuid_entry_mask(entry, CPUID_7_ECX);
> -		/* Set LA57 based on hardware capability. */
> -		entry->ecx |= f_la57;
> -
> -		entry->edx &= kvm_cpuid_7_0_edx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_7_EDX);
>  		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
>  			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
> @@ -395,7 +479,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
>  		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
>  		break;
>  	case 1:
> -		entry->eax &= kvm_cpuid_7_1_eax_x86_features;
> +		cpuid_entry_mask(entry, CPUID_7_1_EAX);
>  		entry->ebx = 0;
>  		entry->ecx = 0;
>  		entry->edx = 0;
> @@ -414,72 +498,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  {
>  	struct kvm_cpuid_entry2 *entry;
>  	int r, i, max_idx;
> -	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
> -#ifdef CONFIG_X86_64
> -	unsigned f_gbpages = F(GBPAGES);
> -	unsigned f_lm = F(LM);
> -#else
> -	unsigned f_gbpages = 0;
> -	unsigned f_lm = 0;
> -#endif
>  	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  
> -	/* cpuid 1.edx */
> -	const u32 kvm_cpuid_1_edx_x86_features =
> -		F(FPU) | F(VME) | F(DE) | F(PSE) |
> -		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> -		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
> -		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> -		F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
> -		0 /* Reserved, DS, ACPI */ | F(MMX) |
> -		F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
> -		0 /* HTT, TM, Reserved, PBE */;
> -	/* cpuid 0x80000001.edx */
> -	const u32 kvm_cpuid_8000_0001_edx_x86_features =
> -		F(FPU) | F(VME) | F(DE) | F(PSE) |
> -		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> -		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
> -		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> -		F(PAT) | F(PSE36) | 0 /* Reserved */ |
> -		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
> -		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
> -		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW);
> -	/* cpuid 1.ecx */
> -	const u32 kvm_cpuid_1_ecx_x86_features =
> -		/* NOTE: MONITOR (and MWAIT) are emulated as NOP,
> -		 * but *not* advertised to guests via CPUID ! */
> -		F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
> -		0 /* DS-CPL, VMX, SMX, EST */ |
> -		0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> -		F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> -		F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
> -		F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
> -		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
> -		F(F16C) | F(RDRAND);
> -	/* cpuid 0x80000001.ecx */
> -	const u32 kvm_cpuid_8000_0001_ecx_x86_features =
> -		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
> -		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
> -		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
> -		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) |
> -		F(TOPOEXT) | F(PERFCTR_CORE);
> -
> -	/* cpuid 0x80000008.ebx */
> -	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
> -		F(CLZERO) | F(XSAVEERPTR) |
> -		F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
> -		F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON);
> -
> -	/* cpuid 0xC0000001.edx */
> -	const u32 kvm_cpuid_C000_0001_edx_x86_features =
> -		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> -		F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
> -		F(PMM) | F(PMM_EN);
> -
> -	/* cpuid 0xD.1.eax */
> -	const u32 kvm_cpuid_D_1_eax_x86_features =
> -		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES);
> -
>  	/* all calls to cpuid_count() should be made on the same cpu */
>  	get_cpu();
>  
> @@ -495,9 +515,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = min(entry->eax, 0x1fU);
>  		break;
>  	case 1:
> -		entry->edx &= kvm_cpuid_1_edx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_1_EDX);
> -		entry->ecx &= kvm_cpuid_1_ecx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_1_ECX);
>  		/* we support x2apic emulation even if host does not support
>  		 * it since we emulate x2apic in software */
> @@ -607,7 +625,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		if (!entry)
>  			goto out;
>  
> -		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
>  		cpuid_entry_mask(entry, CPUID_D_1_EAX);
>  
>  		if (!kvm_x86_ops->xsaves_supported())
> @@ -691,9 +708,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = min(entry->eax, 0x8000001f);
>  		break;
>  	case 0x80000001:
> -		entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_8000_0001_EDX);
> -		entry->ecx &= kvm_cpuid_8000_0001_ecx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_8000_0001_ECX);
>  		break;
>  	case 0x80000007: /* Advanced power management */
> @@ -712,7 +727,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  			g_phys_as = phys_as;
>  		entry->eax = g_phys_as | (virt_as << 8);
>  		entry->edx = 0;
> -		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_8000_0008_EBX);
>  		/*
>  		 * AMD has separate bits for each SPEC_CTRL bit.
> @@ -755,7 +769,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = min(entry->eax, 0xC0000004);
>  		break;
>  	case 0xC0000001:
> -		entry->edx &= kvm_cpuid_C000_0001_edx_x86_features;
>  		cpuid_entry_mask(entry, CPUID_C000_0001_EDX);
>  		break;
>  	case 3: /* Processor serial number */
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 41ff94a7d3e0..c64283582d96 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -6,6 +6,9 @@
>  #include <asm/cpu.h>
>  #include <asm/processor.h>
>  
> +extern u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
> +void kvm_set_cpu_caps(void);
> +
>  int kvm_update_cpuid(struct kvm_vcpu *vcpu);
>  struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
>  					      u32 function, u32 index);
> @@ -255,4 +258,20 @@ static inline bool cpuid_fault_enabled(struct kvm_vcpu *vcpu)
>  		  MSR_MISC_FEATURES_ENABLES_CPUID_FAULT;
>  }
>  
> +static __always_inline void kvm_cpu_cap_clear(unsigned x86_feature)
> +{
> +	unsigned x86_leaf = x86_feature / 32;
> +
> +	reverse_cpuid_check(x86_leaf);
> +	kvm_cpu_caps[x86_leaf] &= ~__feature_bit(x86_feature);
> +}
> +
> +static __always_inline void kvm_cpu_cap_set(unsigned x86_feature)
> +{
> +	unsigned x86_leaf = x86_feature / 32;
> +
> +	reverse_cpuid_check(x86_leaf);
> +	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
> +}
> +
>  #endif
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f90c56c0c64a..c5ed199d6cd9 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9591,6 +9591,8 @@ int kvm_arch_hardware_setup(void)
>  {
>  	int r;
>  
> +	kvm_set_cpu_caps();
> +
>  	r = kvm_x86_ops->hardware_setup();
>  	if (r != 0)
>  		return r;

Apart from the BUILD_BUG_ON() condition in cpuid_entry_mask() (the '>' vs '>=' check),

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  2020-02-01 18:51 ` [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps Sean Christopherson
@ 2020-02-24 21:33   ` Vitaly Kuznetsov
  2020-02-25 15:10   ` Paolo Bonzini
  1 sibling, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 21:33 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use the recently introduced KVM CPU caps to propagate SVM-only (kernel)
> settings to supported CPUID flags.
>
> Note, setting a flag based on a *different* feature is effectively
> emulation, and so must be done at runtime via ->set_supported_cpuid().
>
> Opportunistically add a technically unnecessary break and fix an
> indentation issue in svm_set_supported_cpuid().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/svm.c | 40 +++++++++++++++++++++++-----------------
>  1 file changed, 23 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 630520f8adfa..f98a192459f7 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1350,6 +1350,25 @@ static __init void svm_adjust_mmio_mask(void)
>  	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
>  }
>  

Could we perhaps add a comment here about what can be done at this point and
what needs to go to svm_set_supported_cpuid()? (The one about 'emulation'
from your commit message would do.)
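
A rough sketch of such a comment, placed right above svm_set_cpu_caps()
(the wording is only a suggestion paraphrasing the commit message, not text
from the patch):

	/*
	 * kvm_cpu_caps is adjusted here, at module load, for features whose
	 * availability is known up front.  Bits that are effectively
	 * emulated, i.e. set based on support for a *different* feature,
	 * must instead be handled at runtime in svm_set_supported_cpuid().
	 */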

> +static __init void svm_set_cpu_caps(void)
> +{
> +	/* CPUID 0x1 */
> +	if (avic)
> +		kvm_cpu_cap_clear(X86_FEATURE_X2APIC);
> +
> +	/* CPUID 0x80000001 */
> +	if (nested)
> +		kvm_cpu_cap_set(X86_FEATURE_SVM);
> +
> +	/* CPUID 0x8000000A */
> +	/* Support next_rip if host supports it */
> +	if (boot_cpu_has(X86_FEATURE_NRIPS))
> +		kvm_cpu_cap_set(X86_FEATURE_NRIPS);

Unrelated to your patch but the way we handle 'nrips' is a bit weird: we
can disable it with 'nrips' module parameter but L1 hypervisor will get
it unconditionally.
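
If we wanted to honor the module parameter here, a minimal sketch (assuming
svm.c's 'nrips' module parameter is in scope, which is an assumption on my
part) could be:

	/* Advertise NRIPS only if the host has it and it wasn't disabled. */
	if (nrips && boot_cpu_has(X86_FEATURE_NRIPS))
		kvm_cpu_cap_set(X86_FEATURE_NRIPS);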

Also, what about all the rest of 0x8000000A.EDX features? Nested SVM
would appreciate some love... 

> +
> +	if (npt_enabled)
> +		kvm_cpu_cap_set(X86_FEATURE_NPT);
> +}
> +
>  static __init int svm_hardware_setup(void)
>  {
>  	int cpu;
> @@ -1462,6 +1481,8 @@ static __init int svm_hardware_setup(void)
>  			pr_info("Virtual GIF supported\n");
>  	}
>  
> +	svm_set_cpu_caps();
> +
>  	return 0;
>  
>  err:
> @@ -6033,17 +6054,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
>  static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
>  	switch (entry->function) {
> -	case 0x1:
> -		if (avic)
> -			cpuid_entry_clear(entry, X86_FEATURE_X2APIC);
> -		break;
> -	case 0x80000001:
> -		if (nested)
> -			cpuid_entry_set(entry, X86_FEATURE_SVM);
> -		break;
>  	case 0x80000008:
>  		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
> -		     boot_cpu_has(X86_FEATURE_AMD_SSBD))
> +		    boot_cpu_has(X86_FEATURE_AMD_SSBD))
>  			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
>  		break;
>  	case 0x8000000A:
> @@ -6053,14 +6066,7 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  		entry->ecx = 0; /* Reserved */
>  		entry->edx = 0; /* Per default do not support any
>  				   additional features */
> -
> -		/* Support next_rip if host supports it */
> -		if (boot_cpu_has(X86_FEATURE_NRIPS))
> -			cpuid_entry_set(entry, X86_FEATURE_NRIPS);
> -
> -		/* Support NPT for the guest if enabled */
> -		if (npt_enabled)
> -			cpuid_entry_set(entry, X86_FEATURE_NPT);
> +		break;
>  	}
>  }

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 40/61] KVM: VMX: Convert feature updates from CPUID to KVM cpu caps
  2020-02-01 18:51 ` [PATCH 40/61] KVM: VMX: " Sean Christopherson
@ 2020-02-24 21:40   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 21:40 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use the recently introduced KVM CPU caps to propagate VMX-only (kernel)
> settings to supported CPUID flags.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 51 ++++++++++++++++++++++++------------------
>  1 file changed, 29 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 11b9c1e7e520..bae915431c72 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7102,37 +7102,42 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
>  	switch (entry->function) {
> -	case 0x1:
> -		if (nested)
> -			cpuid_entry_set(entry, X86_FEATURE_VMX);
> -		break;
>  	case 0x7:
> -		if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
> -			cpuid_entry_set(entry, X86_FEATURE_MPX);
> -		if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
> -			cpuid_entry_set(entry, X86_FEATURE_INVPCID);
> -		if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
> -		    vmx_pt_mode_is_host_guest())
> -			cpuid_entry_set(entry, X86_FEATURE_INTEL_PT);
>  		if (vmx_umip_emulated())
>  			cpuid_entry_set(entry, X86_FEATURE_UMIP);
> -
> -		/* PKU is not yet implemented for shadow paging. */
> -		if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
> -		    boot_cpu_has(X86_FEATURE_OSPKE))
> -			cpuid_entry_set(entry, X86_FEATURE_PKU);
> -		break;
> -	case 0x80000001:
> -		if (!cpu_has_vmx_rdtscp())
> -			cpuid_entry_clear(entry, X86_FEATURE_RDTSCP);
> -		if (enable_ept && !cpu_has_vmx_ept_1g_page())
> -			cpuid_entry_clear(entry, X86_FEATURE_GBPAGES);
>  		break;
>  	default:
>  		break;
>  	}
>  }
>  

Same comment as for svm: I think someone may not understand what goes
where, i.e. the separation between vmx_set_supported_cpuid() and
vmx_set_cpu_caps().

> +static __init void vmx_set_cpu_caps(void)
> +{
> +	/* CPUID 0x1 */
> +	if (nested)
> +		kvm_cpu_cap_set(X86_FEATURE_VMX);
> +
> +	/* CPUID 0x7 */
> +	if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
> +		kvm_cpu_cap_set(X86_FEATURE_MPX);
> +	if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
> +		kvm_cpu_cap_set(X86_FEATURE_INVPCID);
> +	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
> +	    vmx_pt_mode_is_host_guest())
> +		kvm_cpu_cap_set(X86_FEATURE_INTEL_PT);
> +
> +	/* PKU is not yet implemented for shadow paging. */
> +	if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
> +	    boot_cpu_has(X86_FEATURE_OSPKE))
> +		kvm_cpu_cap_set(X86_FEATURE_PKU);
> +
> +	/* CPUID 0x80000001 */
> +	if (!cpu_has_vmx_rdtscp())
> +		kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
> +	if (enable_ept && !cpu_has_vmx_ept_1g_page())
> +		kvm_cpu_cap_clear(X86_FEATURE_GBPAGES);
> +}
> +
>  static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
>  {
>  	to_vmx(vcpu)->req_immediate_exit = true;
> @@ -7750,6 +7755,8 @@ static __init int hardware_setup(void)
>  			return r;
>  	}
>  
> +	vmx_set_cpu_caps();
> +
>  	r = alloc_kvm_area();
>  	if (r)
>  		nested_vmx_hardware_unsetup();

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 41/61] KVM: x86: Move XSAVES CPUID adjust to VMX's KVM cpu cap update
  2020-02-01 18:51 ` [PATCH 41/61] KVM: x86: Move XSAVES CPUID adjust to VMX's KVM cpu cap update Sean Christopherson
@ 2020-02-24 21:43   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 21:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the clearing of the XSAVES CPUID bit into VMX, which has a separate
> VMCS control to enable XSAVES in non-root, to eliminate the last ugly
> remnant of the undesirable "unsigned f_* = *_supported ? F(*) : 0"
> pattern in the common CPUID handling code.
>
> Drop ->xsaves_supported(), CPUID adjustment was the only user.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 -
>  arch/x86/kvm/cpuid.c            | 4 ----
>  arch/x86/kvm/svm.c              | 6 ------
>  arch/x86/kvm/vmx/vmx.c          | 5 ++++-
>  4 files changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index ba828569cda5..dd690fb5ceca 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1163,7 +1163,6 @@ struct kvm_x86_ops {
>  	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
>  		enum exit_fastpath_completion *exit_fastpath);
>  
> -	bool (*xsaves_supported)(void);
>  	bool (*umip_emulated)(void);
>  	bool (*pt_supported)(void);
>  
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index c2a4c9df49a9..77a6c1db138d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -626,10 +626,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  			goto out;
>  
>  		cpuid_entry_mask(entry, CPUID_D_1_EAX);
> -
> -		if (!kvm_x86_ops->xsaves_supported())
> -			cpuid_entry_clear(entry, X86_FEATURE_XSAVES);
> -
>  		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
>  			entry->ebx = xstate_required_size(supported_xcr0, true);
>  		else
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f98a192459f7..7cb05945162e 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6080,11 +6080,6 @@ static bool svm_rdtscp_supported(void)
>  	return boot_cpu_has(X86_FEATURE_RDTSCP);
>  }
>  
> -static bool svm_xsaves_supported(void)
> -{
> -	return boot_cpu_has(X86_FEATURE_XSAVES);
> -}
> -
>  static bool svm_umip_emulated(void)
>  {
>  	return false;
> @@ -7455,7 +7450,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  	.cpuid_update = svm_cpuid_update,
>  
>  	.rdtscp_supported = svm_rdtscp_supported,
> -	.xsaves_supported = svm_xsaves_supported,
>  	.umip_emulated = svm_umip_emulated,
>  	.pt_supported = svm_pt_supported,
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index bae915431c72..cfd0ef314176 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7131,6 +7131,10 @@ static __init void vmx_set_cpu_caps(void)
>  	    boot_cpu_has(X86_FEATURE_OSPKE))
>  		kvm_cpu_cap_set(X86_FEATURE_PKU);
>  
> +	/* CPUID 0xD.1 */
> +	if (!vmx_xsaves_supported())
> +		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
> +
>  	/* CPUID 0x80000001 */
>  	if (!cpu_has_vmx_rdtscp())
>  		kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
> @@ -7886,7 +7890,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  
>  	.check_intercept = vmx_check_intercept,
>  	.handle_exit_irqoff = vmx_handle_exit_irqoff,
> -	.xsaves_supported = vmx_xsaves_supported,
>  	.umip_emulated = vmx_umip_emulated,
>  	.pt_supported = vmx_pt_supported,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 42/61] KVM: x86: Add a helper to check kernel support when setting cpu cap
  2020-02-01 18:51 ` [PATCH 42/61] KVM: x86: Add a helper to check kernel support when setting cpu cap Sean Christopherson
@ 2020-02-24 21:47   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 21:47 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Add a helper, kvm_cpu_cap_check_and_set(), to query boot_cpu_has() as
> part of setting a KVM cpu capability.  VMX in particular has a number of
> features that are dependent on both a VMCS capability and kernel
> support.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.h   |  6 ++++++
>  arch/x86/kvm/svm.c     |  3 +--
>  arch/x86/kvm/vmx/vmx.c | 18 ++++++++----------
>  3 files changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index c64283582d96..7b71ae0ca05e 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -274,4 +274,10 @@ static __always_inline void kvm_cpu_cap_set(unsigned x86_feature)
>  	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
>  }
>  
> +static __always_inline void kvm_cpu_cap_check_and_set(unsigned x86_feature)
> +{
> +	if (boot_cpu_has(x86_feature))
> +		kvm_cpu_cap_set(x86_feature);
> +}
> +
>  #endif
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 7cb05945162e..defb2c0dbf8a 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1362,8 +1362,7 @@ static __init void svm_set_cpu_caps(void)
>  
>  	/* CPUID 0x8000000A */
>  	/* Support next_rip if host supports it */
> -	if (boot_cpu_has(X86_FEATURE_NRIPS))
> -		kvm_cpu_cap_set(X86_FEATURE_NRIPS);
> +	kvm_cpu_cap_check_and_set(X86_FEATURE_NRIPS);
>  
>  	if (npt_enabled)
>  		kvm_cpu_cap_set(X86_FEATURE_NPT);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index cfd0ef314176..cecf59225136 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7118,18 +7118,16 @@ static __init void vmx_set_cpu_caps(void)
>  		kvm_cpu_cap_set(X86_FEATURE_VMX);
>  
>  	/* CPUID 0x7 */
> -	if (boot_cpu_has(X86_FEATURE_MPX) && kvm_mpx_supported())
> -		kvm_cpu_cap_set(X86_FEATURE_MPX);
> -	if (boot_cpu_has(X86_FEATURE_INVPCID) && cpu_has_vmx_invpcid())
> -		kvm_cpu_cap_set(X86_FEATURE_INVPCID);
> -	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
> -	    vmx_pt_mode_is_host_guest())
> -		kvm_cpu_cap_set(X86_FEATURE_INTEL_PT);
> +	if (kvm_mpx_supported())
> +		kvm_cpu_cap_check_and_set(X86_FEATURE_MPX);
> +	if (cpu_has_vmx_invpcid())
> +		kvm_cpu_cap_check_and_set(X86_FEATURE_INVPCID);
> +	if (vmx_pt_mode_is_host_guest())
> +		kvm_cpu_cap_check_and_set(X86_FEATURE_INTEL_PT);
>  
>  	/* PKU is not yet implemented for shadow paging. */
> -	if (enable_ept && boot_cpu_has(X86_FEATURE_PKU) &&
> -	    boot_cpu_has(X86_FEATURE_OSPKE))
> -		kvm_cpu_cap_set(X86_FEATURE_PKU);
> +	if (enable_ept && boot_cpu_has(X86_FEATURE_OSPKE))
> +		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
>  
>  	/* CPUID 0xD.1 */
>  	if (!vmx_xsaves_supported())

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct
  2020-02-21 14:58   ` Vitaly Kuznetsov
@ 2020-02-24 21:55     ` Sean Christopherson
  2020-02-24 23:12       ` Vitaly Kuznetsov
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 21:55 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Fri, Feb 21, 2020 at 03:58:47PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > @@ -539,7 +549,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
> >  		entry->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
> >  
> >  		for (i = 1, max_idx = entry->eax & 0xff; i < max_idx; ++i) {
> > -			if (!do_host_cpuid(&entry[i], nent, maxnent, function, 0))
> > +			entry = do_host_cpuid(array, 2, 0);
> 
> I'd change this to 
>                         entry = do_host_cpuid(array, function, 0);
> 
> to match other call sites.

Done.  That did look weird, no idea why I decided to hardcode only this one.

> > +			if (!entry)
> >  				goto out;
> >  		}
> >  		break;
> > @@ -802,22 +814,22 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
> >  	return r;
> >  }
> >  
> > -static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
> > -			 int *nent, int maxnent, unsigned int type)
> > +static int do_cpuid_func(struct kvm_cpuid_array *array, u32 func,
> > +			 unsigned int type)
> >  {
> > -	if (*nent >= maxnent)
> > +	if (array->nent >= array->maxnent)
> >  		return -E2BIG;
> >  
> >  	if (type == KVM_GET_EMULATED_CPUID)
> > -		return __do_cpuid_func_emulated(entry, func, nent, maxnent);
> > +		return __do_cpuid_func_emulated(array, func);
> 
> Would it make sense to move 'if (array->nent >= array->maxnent)' check
> to __do_cpuid_func_emulated() to match do_host_cpuid()?

I considered doing exactly that.  IIRC, I opted not to because at this
point in the series, the initial call to do_host_cpuid() is something like
halfway down the massive __do_cpuid_func(), and eliminating the early check
didn't feel quite right, e.g. there is a fair amount of unnecessary code
that runs before hitting the first do_host_cpuid().

What if I add a patch towards the end of the series to move this check into
__do_cpuid_func_emulated(), i.e. after __do_cpuid_func() has been trimmed
down to size and the early check really is superfluous.
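
Roughly along these lines, I assume (a sketch of the idea, not the actual
follow-up patch; the rest of the function is elided):

	static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
	{
		if (array->nent >= array->maxnent)
			return -E2BIG;
		...
	}

with the corresponding 'if (array->nent >= array->maxnent)' check dropped
from do_cpuid_func().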

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  2020-02-01 18:52 ` [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved Sean Christopherson
@ 2020-02-24 22:08   ` Vitaly Kuznetsov
  2020-02-24 23:23     ` Sean Christopherson
  2020-02-25 15:12     ` Paolo Bonzini
  0 siblings, 2 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 22:08 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Add accessor(s) for KVM cpu caps and use said accessor to detect
> hardware support for LA57 instead of manually querying CPUID.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.h | 13 +++++++++++++
>  arch/x86/kvm/x86.c   |  2 +-
>  2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 7b71ae0ca05e..5ce4219d465f 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -274,6 +274,19 @@ static __always_inline void kvm_cpu_cap_set(unsigned x86_feature)
>  	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
>  }
>  
> +static __always_inline u32 kvm_cpu_cap_get(unsigned x86_feature)
> +{
> +	unsigned x86_leaf = x86_feature / 32;
> +
> +	reverse_cpuid_check(x86_leaf);
> +	return kvm_cpu_caps[x86_leaf] & __feature_bit(x86_feature);
> +}
> +
> +static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
> +{
> +	return kvm_cpu_cap_get(x86_feature);
> +}

I know this works (and I even checked C99 to make sure that it works not
by accident) but I have to admit that explicit '!!' conversion to bool
always makes me feel safer :-)
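
i.e. (purely illustrative):

	static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
	{
		return !!kvm_cpu_cap_get(x86_feature);
	}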

> +
>  static __always_inline void kvm_cpu_cap_check_and_set(unsigned x86_feature)
>  {
>  	if (boot_cpu_has(x86_feature))
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c5ed199d6cd9..cb40737187a1 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -912,7 +912,7 @@ static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
>  {
>  	u64 reserved_bits = __cr4_reserved_bits(cpu_has, c);
>  
> -	if (cpuid_ecx(0x7) & feature_bit(LA57))
> +	if (kvm_cpu_cap_has(X86_FEATURE_LA57))
>  		reserved_bits &= ~X86_CR4_LA57;
>  
>  	if (kvm_x86_ops->umip_emulated())

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 44/61] KVM: x86: Use KVM cpu caps to track UMIP emulation
  2020-02-01 18:52 ` [PATCH 44/61] KVM: x86: Use KVM cpu caps to track UMIP emulation Sean Christopherson
@ 2020-02-24 22:13   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 22:13 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Set UMIP in kvm_cpu_caps when it is emulated by VMX, even though the
> bit will be effectively be dropped 

Redundant 'be'

> by do_host_cpuid().  This allows checking for UMIP emulation via
> kvm_cpu_caps instead of a dedicated kvm_x86_ops callback.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 -
>  arch/x86/kvm/svm.c              | 6 ------
>  arch/x86/kvm/vmx/vmx.c          | 8 +++++++-
>  arch/x86/kvm/x86.c              | 2 +-
>  4 files changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index dd690fb5ceca..113b138a0347 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1163,7 +1163,6 @@ struct kvm_x86_ops {
>  	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
>  		enum exit_fastpath_completion *exit_fastpath);
>  
> -	bool (*umip_emulated)(void);
>  	bool (*pt_supported)(void);
>  
>  	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index defb2c0dbf8a..e1ed5726964c 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6079,11 +6079,6 @@ static bool svm_rdtscp_supported(void)
>  	return boot_cpu_has(X86_FEATURE_RDTSCP);
>  }
>  
> -static bool svm_umip_emulated(void)
> -{
> -	return false;
> -}
> -
>  static bool svm_pt_supported(void)
>  {
>  	return false;
> @@ -7449,7 +7444,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  	.cpuid_update = svm_cpuid_update,
>  
>  	.rdtscp_supported = svm_rdtscp_supported,
> -	.umip_emulated = svm_umip_emulated,
>  	.pt_supported = svm_pt_supported,
>  
>  	.set_supported_cpuid = svm_set_supported_cpuid,
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index cecf59225136..cd5a624610c9 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7103,6 +7103,10 @@ static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
>  	switch (entry->function) {
>  	case 0x7:
> +		/*
> +		 * UMIP needs to be manually set even though vmx_set_cpu_caps()
> +		 * also sets UMIP since do_host_cpuid() will drop it.
> +		 */
>  		if (vmx_umip_emulated())
>  			cpuid_entry_set(entry, X86_FEATURE_UMIP);
>  		break;
> @@ -7129,6 +7133,9 @@ static __init void vmx_set_cpu_caps(void)
>  	if (enable_ept && boot_cpu_has(X86_FEATURE_OSPKE))
>  		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
>  
> +	if (vmx_umip_emulated())
> +		kvm_cpu_cap_set(X86_FEATURE_UMIP);
> +
>  	/* CPUID 0xD.1 */
>  	if (!vmx_xsaves_supported())
>  		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
> @@ -7888,7 +7895,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  
>  	.check_intercept = vmx_check_intercept,
>  	.handle_exit_irqoff = vmx_handle_exit_irqoff,
> -	.umip_emulated = vmx_umip_emulated,
>  	.pt_supported = vmx_pt_supported,
>  
>  	.request_immediate_exit = vmx_request_immediate_exit,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index cb40737187a1..a6d5f22c7ef6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -915,7 +915,7 @@ static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
>  	if (kvm_cpu_cap_has(X86_FEATURE_LA57))
>  		reserved_bits &= ~X86_CR4_LA57;
>  
> -	if (kvm_x86_ops->umip_emulated())
> +	if (kvm_cpu_cap_has(X86_FEATURE_UMIP))
>  		reserved_bits &= ~X86_CR4_UMIP;
>  
>  	return reserved_bits;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 19/61] KVM: VMX: Add helpers to query Intel PT mode
       [not found]   ` <87pne8q8c0.fsf@vitty.brq.redhat.com>
@ 2020-02-24 22:18     ` Sean Christopherson
  2020-02-25 14:54       ` Paolo Bonzini
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 22:18 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Fri, Feb 21, 2020 at 04:16:31PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> > index a4f7f737c5d4..70eafa88876a 100644
> > --- a/arch/x86/kvm/vmx/vmx.h
> > +++ b/arch/x86/kvm/vmx/vmx.h
> > @@ -449,7 +449,7 @@ static inline void vmx_segment_cache_clear(struct vcpu_vmx *vmx)
> >  static inline u32 vmx_vmentry_ctrl(void)
> >  {
> >  	u32 vmentry_ctrl = vmcs_config.vmentry_ctrl;
> > -	if (pt_mode == PT_MODE_SYSTEM)
> > +	if (vmx_pt_mode_is_system())
> 
> Just wondering, would it be better to say
>         if (!vmx_pt_supported())
> here?
> 
> >  		vmentry_ctrl &= ~(VM_ENTRY_PT_CONCEAL_PIP |
> >  				  VM_ENTRY_LOAD_IA32_RTIT_CTL);
> >  	/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
> > @@ -460,7 +460,7 @@ static inline u32 vmx_vmentry_ctrl(void)
> >  static inline u32 vmx_vmexit_ctrl(void)
> >  {
> >  	u32 vmexit_ctrl = vmcs_config.vmexit_ctrl;
> > -	if (pt_mode == PT_MODE_SYSTEM)
> > +	if (vmx_pt_mode_is_system())
> 
> ... and here? I.e. to cover the currently unsupported 'host-only' mode.

Hmm, good question.  I don't think so?  On VM-Enter, RTIT_CTL would need to
be loaded to disable PT.  Clearing RTIT_CTL on VM-Exit would be redundant
at that point[1].  And AIUI, the PIP for VM-Enter/VM-Exit isn't needed
because there is no context switch from the decoder's perspective.

Note, the original upstreaming series also used "pt_mode == PT_MODE_SYSTEM"
logic for this check when "host-only mode" was supported[2].

[1] Arguably, KVM should use the VM-Exit MSR load list to atomically
    reenable tracing, but that's feedback for a non-existence patch :-).
[2] https://patchwork.kernel.org/patch/10104533/

> 
> >  		vmexit_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP |
> >  				 VM_EXIT_CLEAR_IA32_RTIT_CTL);
> >  	/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 45/61] KVM: x86: Fold CPUID 0x7 masking back into __do_cpuid_func()
  2020-02-01 18:52 ` [PATCH 45/61] KVM: x86: Fold CPUID 0x7 masking back into __do_cpuid_func() Sean Christopherson
@ 2020-02-24 22:21   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 22:21 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move the CPUID 0x7 masking back into __do_cpuid_func() now that the
> size of the code has been trimmed down significantly.
>
> Tweak the WARN case, which is impossible to hit unless the CPU is
> completely broken, to break the loop before creating the bogus entry.
>
> Opportunistically reorder the cpuid_entry_set() calls and shorten the
> comment about emulation to further reduce the footprint of CPUID 0x7.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 62 ++++++++++++++++----------------------------
>  1 file changed, 22 insertions(+), 40 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 77a6c1db138d..7362e5238799 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -456,44 +456,6 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
>  	return 0;
>  }
>  
> -static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry)
> -{
> -	switch (entry->index) {
> -	case 0:
> -		entry->eax = min(entry->eax, 1u);
> -		cpuid_entry_mask(entry, CPUID_7_0_EBX);
> -		/* TSC_ADJUST is emulated */
> -		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
> -		cpuid_entry_mask(entry, CPUID_7_ECX);
> -		cpuid_entry_mask(entry, CPUID_7_EDX);
> -		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
> -			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
> -		if (boot_cpu_has(X86_FEATURE_STIBP))
> -			cpuid_entry_set(entry, X86_FEATURE_INTEL_STIBP);
> -		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
> -			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
> -		/*
> -		 * We emulate ARCH_CAPABILITIES in software even
> -		 * if the host doesn't support it.
> -		 */
> -		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
> -		break;
> -	case 1:
> -		cpuid_entry_mask(entry, CPUID_7_1_EAX);
> -		entry->ebx = 0;
> -		entry->ecx = 0;
> -		entry->edx = 0;
> -		break;
> -	default:
> -		WARN_ON_ONCE(1);
> -		entry->eax = 0;
> -		entry->ebx = 0;
> -		entry->ecx = 0;
> -		entry->edx = 0;
> -		break;
> -	}
> -}
> -
>  static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  {
>  	struct kvm_cpuid_entry2 *entry;
> @@ -555,14 +517,34 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		break;
>  	/* function 7 has additional index. */
>  	case 7:
> -		do_cpuid_7_mask(entry);
> +		entry->eax = min(entry->eax, 1u);
> +		cpuid_entry_mask(entry, CPUID_7_0_EBX);
> +		cpuid_entry_mask(entry, CPUID_7_ECX);
> +		cpuid_entry_mask(entry, CPUID_7_EDX);
> +
> +		/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
> +		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
> +		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
> +
> +		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
> +			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
> +		if (boot_cpu_has(X86_FEATURE_STIBP))
> +			cpuid_entry_set(entry, X86_FEATURE_INTEL_STIBP);
> +		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
> +			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
>  
>  		for (i = 1, max_idx = entry->eax; i <= max_idx; i++) {
> +			if (WARN_ON_ONCE(i > 1))
> +				break;
> +
>  			entry = do_host_cpuid(array, function, i);
>  			if (!entry)
>  				goto out;
>  
> -			do_cpuid_7_mask(entry);
> +			cpuid_entry_mask(entry, CPUID_7_1_EAX);
> +			entry->ebx = 0;
> +			entry->ecx = 0;
> +			entry->edx = 0;
>  		}
>  		break;
>  	case 9:

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 46/61] KVM: x86: Remove the unnecessary loop on CPUID 0x7 sub-leafs
  2020-02-01 18:52 ` [PATCH 46/61] KVM: x86: Remove the unnecessary loop on CPUID 0x7 sub-leafs Sean Christopherson
@ 2020-02-24 22:25   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 22:25 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Explicitly handle CPUID 0x7 sub-leaf 1.  The kernel is currently aware
> of exactly one feature in CPUID 0x7.1,  which means there is room for
> another 127 features before CPUID 0x7.2 will see the light of day, i.e.
> the looping is likely to be dead code for years to come.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 7362e5238799..47f61f4497fb 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -533,11 +533,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
>  			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
>  
> -		for (i = 1, max_idx = entry->eax; i <= max_idx; i++) {
> -			if (WARN_ON_ONCE(i > 1))
> -				break;
> -
> -			entry = do_host_cpuid(array, function, i);
> +		/* KVM only supports 0x7.0 and 0x7.1, capped above via min(). */
> +		if (entry->eax == 1) {
> +			entry = do_host_cpuid(array, function, 1);
>  			if (!entry)
>  				goto out;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs
  2020-02-01 18:52 ` [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs Sean Christopherson
@ 2020-02-24 22:35   ` Vitaly Kuznetsov
  2020-02-25 15:17   ` Paolo Bonzini
  1 sibling, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 22:35 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Rework CPUID 0x2.0 to be a normal CPUID leaf if it returns "01" in AL,
> i.e. EAX & 0xff.
>
> Long ago, Intel documented CPUID 0x2.0 as being a stateful leaf, e.g. a
> version of the SDM circa 1995 states:
>
>   The least-significant byte in register EAX (register AL) indicates the
>   number of times the CPUID instruction must be executed with an input
>   value of 2 to get a complete description of the processors's caches
>   and TLBs.  The Pentium Pro family of processors will return a 1.
>
> A 2000 version of the SDM only updated the paragraph to reference
> Intel's new processor family:
>
>   The first member of the family of Pentium 4 processors will return a 1.
>
> Fast forward to the present, and Intel's SDM now states:
>
>   The least-significant byte in register EAX (register AL) will always
>   return 01H.  Software should ignore this value and not interpret it as
>   an information descriptor.
>
> AMD's APM simply states that CPUID 0x2 is reserved.
>
> Given that CPUID itself was introduced in the Pentium, odds are good
> that the only Intel CPU family that *maybe* implemented a stateful CPUID
> was the P5.  Which obviously did not support VMX, or KVM.
>
> In other words, KVM's emulation of a stateful CPUID 0x2.0 has likely
> been dead code from the day it was introduced.  This is backed up by
> commit 0fdf8e59faa5c ("KVM: Fix cpuid iteration on multiple leaves per
> eac"), which shows that the stateful iteration code was completely
> broken when it was introduced by commit 0771671749b59 ("KVM: Enhance
> guest cpuid management"), i.e. not actually tested.
>
> Although it's _extremely_ tempting to yank KVM's stateful code, leave it
> in for now but annotate all its code paths as "unlikely".  The code is
> relatively contained, and if by some miracle there is someone running KVM
> on a CPU with a stateful CPUID 0x2, more power to 'em.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 31 +++++++++++++++++++++----------
>  1 file changed, 21 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 47f61f4497fb..ab2a34337588 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -405,9 +405,6 @@ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
>  		    &entry->eax, &entry->ebx, &entry->ecx, &entry->edx);
>  
>  	switch (function) {
> -	case 2:
> -		entry->flags |= KVM_CPUID_FLAG_STATEFUL_FUNC;
> -		break;
>  	case 4:
>  	case 7:
>  	case 0xb:
> @@ -483,17 +480,31 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		 * it since we emulate x2apic in software */
>  		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
>  		break;
> -	/* function 2 entries are STATEFUL. That is, repeated cpuid commands
> -	 * may return different values. This forces us to get_cpu() before
> -	 * issuing the first command, and also to emulate this annoying behavior
> -	 * in kvm_emulate_cpuid() using KVM_CPUID_FLAG_STATE_READ_NEXT */
>  	case 2:
> +		/*
> +		 * On ancient CPUs, function 2 entries are STATEFUL.  That is,
> +		 * CPUID(function=2, index=0) may return different results each
> +		 * time, with the least-significant byte in EAX enumerating the
> +		 * number of times software should do CPUID(2, 0).
> +		 *
> +		 * Modern CPUs (quite likely every CPU KVM has *ever* run on)
> +		 * are less idiotic.  Intel's SDM states that EAX & 0xff "will
> +		 * always return 01H. Software should ignore this value and not
> +		 * interpret it as an informational descriptor", while AMD's
> +		 * APM states that CPUID(2) is reserved.
> +		 */
> +		max_idx = entry->eax & 0xff;
> +		if (likely(max_idx <= 1))
> +			break;
> +
> +		entry->flags |= KVM_CPUID_FLAG_STATEFUL_FUNC;
>  		entry->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
>  
> -		for (i = 1, max_idx = entry->eax & 0xff; i < max_idx; ++i) {
> +		for (i = 1; i < max_idx; ++i) {
>  			entry = do_host_cpuid(array, 2, 0);
>  			if (!entry)
>  				goto out;
> +			entry->flags |= KVM_CPUID_FLAG_STATEFUL_FUNC;
>  		}
>  		break;
>  	/* functions 4 and 0x8000001d have additional index. */
> @@ -903,7 +914,7 @@ static int is_matching_cpuid_entry(struct kvm_cpuid_entry2 *e,
>  		return 0;
>  	if ((e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX) && e->index != index)
>  		return 0;
> -	if ((e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC) &&
> +	if (unlikely(e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC) &&
>  	    !(e->flags & KVM_CPUID_FLAG_STATE_READ_NEXT))
>  		return 0;
>  	return 1;
> @@ -920,7 +931,7 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
>  
>  		e = &vcpu->arch.cpuid_entries[i];
>  		if (is_matching_cpuid_entry(e, function, index)) {
> -			if (e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC)
> +			if (unlikely(e->flags & KVM_CPUID_FLAG_STATEFUL_FUNC))
>  				move_to_next_stateful_cpuid_entry(vcpu, i);
>  			best = e;
>  			break;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

but your history digging results make me think that killing the whole
'statefulness' thing is not a bad idea at all :-)

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 27/61] KVM: x86: Introduce cpuid_entry_{change,set,clear}() mutators
       [not found]   ` <87ftf0p0d0.fsf@vitty.brq.redhat.com>
@ 2020-02-24 22:42     ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 22:42 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Mon, Feb 24, 2020 at 02:43:07PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> > index 64e96e4086e2..51f19eade5a0 100644
> > --- a/arch/x86/kvm/cpuid.h
> > +++ b/arch/x86/kvm/cpuid.h
> > @@ -135,6 +135,37 @@ static __always_inline bool cpuid_entry_has(struct kvm_cpuid_entry2 *entry,
> >  	return cpuid_entry_get(entry, x86_feature);
> >  }
> >  
> > +static __always_inline void cpuid_entry_clear(struct kvm_cpuid_entry2 *entry,
> > +					      unsigned x86_feature)
> > +{
> > +	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);
> > +
> > +	*reg &= ~__feature_bit(x86_feature);
> > +}
> > +
> > +static __always_inline void cpuid_entry_set(struct kvm_cpuid_entry2 *entry,
> > +					    unsigned x86_feature)
> > +{
> > +	int *reg = cpuid_entry_get_reg(entry, x86_feature);
> 
> I think 'reg' should be 'u32', similar to cpuid_entry_clear()

Doh, thanks!
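
For reference, the fixed-up helpers would presumably declare the pointer the
same way cpuid_entry_clear() does:

	u32 *reg = cpuid_entry_get_reg(entry, x86_feature);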

> > +
> > +	*reg |= __feature_bit(x86_feature);
> > +}
> > +
> > +static __always_inline void cpuid_entry_change(struct kvm_cpuid_entry2 *entry,
> > +					       unsigned x86_feature, bool set)
> > +{
> > +	int *reg = cpuid_entry_get_reg(entry, x86_feature);
> 
> Ditto.
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups
  2020-02-24 13:54   ` Vitaly Kuznetsov
@ 2020-02-24 22:46     ` Sean Christopherson
  2020-02-25 15:02       ` Paolo Bonzini
  2020-02-25 15:00     ` Paolo Bonzini
  1 sibling, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 22:46 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Mon, Feb 24, 2020 at 02:54:38PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Add WARNs in the low level __cpuid_entry_get_reg() to assert that the
> > function and index of the CPUID entry and reverse CPUID entry match.
> > Wrap the WARNs in a new Kconfig, KVM_CPUID_AUDIT, as the checks add
> > almost no value in a production environment, i.e. will only detect
> > blatant KVM bugs and fatal hardware errors.  Add a Kconfig instead of
> > simply wrapping the WARNs with an off-by-default #ifdef so that syzbot
> > and other automated testing can enable the auditing.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/Kconfig | 10 ++++++++++
> >  arch/x86/kvm/cpuid.h |  5 +++++
> >  2 files changed, 15 insertions(+)
> >
> > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> > index 840e12583b85..bbbc3258358e 100644
> > --- a/arch/x86/kvm/Kconfig
> > +++ b/arch/x86/kvm/Kconfig
> > @@ -96,6 +96,16 @@ config KVM_MMU_AUDIT
> >  	 This option adds a R/W kVM module parameter 'mmu_audit', which allows
> >  	 auditing of KVM MMU events at runtime.
> >  
> > +config KVM_CPUID_AUDIT
> > +	bool "Audit KVM reverse CPUID lookups"
> > +	depends on KVM
> > +	help
> > +	 This option enables runtime checking of reverse CPUID lookups in KVM
> > +	 to verify the function and index of the referenced X86_FEATURE_* match
> > +	 the function and index of the CPUID entry being accessed.
> > +
> > +	 If unsure, say N.
> > +
> >  # OK, it's a little counter-intuitive to do this, but it puts it neatly under
> >  # the virtualization menu.
> >  source "drivers/vhost/Kconfig"
> > diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> > index 51f19eade5a0..41ff94a7d3e0 100644
> > --- a/arch/x86/kvm/cpuid.h
> > +++ b/arch/x86/kvm/cpuid.h
> > @@ -98,6 +98,11 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
> >  static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
> >  						  const struct cpuid_reg *cpuid)
> >  {
> > +#ifdef CONFIG_KVM_CPUID_AUDIT
> > +	WARN_ON_ONCE(entry->function != cpuid->function);
> > +	WARN_ON_ONCE(entry->index != cpuid->index);
> > +#endif
> > +
> >  	switch (cpuid->reg) {
> >  	case CPUID_EAX:
> >  		return &entry->eax;
> 
> Honestly, I was thinking we should BUG_ON() and even in production builds
> but not everyone around is so rebellious I guess, so

LOL.  It's a waste of cycles for something that will "never" be hit, i.e.
we _really_ dropped the ball if a bug of this nature makes it into a
kernel release.
 
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> 
> -- 
> Vitaly
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 49/61] KVM: x86: Override host CPUID results with kvm_cpu_caps
  2020-02-01 18:52 ` [PATCH 49/61] KVM: x86: Override host CPUID results with kvm_cpu_caps Sean Christopherson
@ 2020-02-24 22:57   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 22:57 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Override CPUID entries with kvm_cpu_caps during KVM_GET_SUPPORTED_CPUID
> instead of masking the host CPUID result, which is redundant now that
> the host CPUID is incorporated into kvm_cpu_caps at runtime.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 4416f2422321..871c0bd04e19 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -261,13 +261,13 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
>  	return r;
>  }
>  
> -static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
> -					     enum cpuid_leafs leaf)
> +static __always_inline void cpuid_entry_override(struct kvm_cpuid_entry2 *entry,
> +						 enum cpuid_leafs leaf)
>  {
>  	u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
>  
>  	BUILD_BUG_ON(leaf > ARRAY_SIZE(kvm_cpu_caps));
> -	*reg &= kvm_cpu_caps[leaf];
> +	*reg = kvm_cpu_caps[leaf];
>  }
>  
>  static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
> @@ -488,8 +488,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = min(entry->eax, 0x1fU);
>  		break;
>  	case 1:
> -		cpuid_entry_mask(entry, CPUID_1_EDX);
> -		cpuid_entry_mask(entry, CPUID_1_ECX);
> +		cpuid_entry_override(entry, CPUID_1_EDX);
> +		cpuid_entry_override(entry, CPUID_1_ECX);
>  		/* we support x2apic emulation even if host does not support
>  		 * it since we emulate x2apic in software */
>  		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
> @@ -543,9 +543,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  	/* function 7 has additional index. */
>  	case 7:
>  		entry->eax = min(entry->eax, 1u);
> -		cpuid_entry_mask(entry, CPUID_7_0_EBX);
> -		cpuid_entry_mask(entry, CPUID_7_ECX);
> -		cpuid_entry_mask(entry, CPUID_7_EDX);
> +		cpuid_entry_override(entry, CPUID_7_0_EBX);
> +		cpuid_entry_override(entry, CPUID_7_ECX);
> +		cpuid_entry_override(entry, CPUID_7_EDX);
>  
>  		/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
>  		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
> @@ -564,7 +564,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  			if (!entry)
>  				goto out;
>  
> -			cpuid_entry_mask(entry, CPUID_7_1_EAX);
> +			cpuid_entry_override(entry, CPUID_7_1_EAX);
>  			entry->ebx = 0;
>  			entry->ecx = 0;
>  			entry->edx = 0;
> @@ -630,7 +630,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		if (!entry)
>  			goto out;
>  
> -		cpuid_entry_mask(entry, CPUID_D_1_EAX);
> +		cpuid_entry_override(entry, CPUID_D_1_EAX);
>  		if (entry->eax & (F(XSAVES)|F(XSAVEC)))
>  			entry->ebx = xstate_required_size(supported_xcr0, true);
>  		else
> @@ -709,8 +709,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = min(entry->eax, 0x8000001f);
>  		break;
>  	case 0x80000001:
> -		cpuid_entry_mask(entry, CPUID_8000_0001_EDX);
> -		cpuid_entry_mask(entry, CPUID_8000_0001_ECX);
> +		cpuid_entry_override(entry, CPUID_8000_0001_EDX);
> +		cpuid_entry_override(entry, CPUID_8000_0001_ECX);
>  		break;
>  	case 0x80000007: /* Advanced power management */
>  		/* invariant TSC is CPUID.80000007H:EDX[8] */
> @@ -728,7 +728,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  			g_phys_as = phys_as;
>  		entry->eax = g_phys_as | (virt_as << 8);
>  		entry->edx = 0;
> -		cpuid_entry_mask(entry, CPUID_8000_0008_EBX);
> +		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
>  		/*
>  		 * AMD has separate bits for each SPEC_CTRL bit.
>  		 * arch/x86/kernel/cpu/bugs.c is kind enough to
> @@ -770,7 +770,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = min(entry->eax, 0xC0000004);
>  		break;
>  	case 0xC0000001:
> -		cpuid_entry_mask(entry, CPUID_C000_0001_EDX);
> +		cpuid_entry_override(entry, CPUID_C000_0001_EDX);
>  		break;
>  	case 3: /* Processor serial number */
>  	case 5: /* MONITOR/MWAIT */

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking
  2020-02-24 16:32   ` Vitaly Kuznetsov
@ 2020-02-24 22:57     ` Sean Christopherson
  2020-02-24 23:20       ` Vitaly Kuznetsov
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 22:57 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Mon, Feb 24, 2020 at 05:32:54PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Calculate the CPUID masks for KVM_GET_SUPPORTED_CPUID at load time using
> > what is effectively a KVM-adjusted copy of boot_cpu_data, or more
> > precisely, the x86_capability array in boot_cpu_data.
> >
> > In terms of KVM support, the vast majority of CPUID feature bits are
> > constant, and *all* feature support is known at KVM load time.  Rather
> > than apply boot_cpu_data, which is effectively read-only after init,
> > at runtime, copy it into a KVM-specific array and use *that* to mask
> > CPUID registers.
> >
> > In additional to consolidating the masking, kvm_cpu_caps can be adjusted
> > by SVM/VMX at load time and thus eliminate all feature bit manipulation
> > in ->set_supported_cpuid().
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.c | 229 +++++++++++++++++++++++--------------------
> >  arch/x86/kvm/cpuid.h |  19 ++++
> >  arch/x86/kvm/x86.c   |   2 +
> >  3 files changed, 142 insertions(+), 108 deletions(-)
> >
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index 20a7af320291..c2a4c9df49a9 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -24,6 +24,13 @@
> >  #include "trace.h"
> >  #include "pmu.h"
> >  
> > +/*
> > + * Unlike "struct cpuinfo_x86.x86_capability", kvm_cpu_caps doesn't need to be
> > + * aligned to sizeof(unsigned long) because it's not accessed via bitops.
> > + */
> > +u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
> > +EXPORT_SYMBOL_GPL(kvm_cpu_caps);
> > +
> >  static u32 xstate_required_size(u64 xstate_bv, bool compacted)
> >  {
> >  	int feature_bit = 0;
> > @@ -259,7 +266,119 @@ static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
> >  {
> >  	u32 *reg = cpuid_entry_get_reg(entry, leaf * 32);
> >  
> > -	*reg &= boot_cpu_data.x86_capability[leaf];
> > +	BUILD_BUG_ON(leaf > ARRAY_SIZE(kvm_cpu_caps));
> 
> Should this be '>=' ?

Yep, nice catch.
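
I.e. the check presumably becomes:

	BUILD_BUG_ON(leaf >= ARRAY_SIZE(kvm_cpu_caps));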

> > +	*reg &= kvm_cpu_caps[leaf];
> > +}
> > +
> > +static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
> > +{
> > +	reverse_cpuid_check(leaf);
> > +	kvm_cpu_caps[leaf] &= mask;
> > +}
> > +
> > +void kvm_set_cpu_caps(void)
> > +{
> > +	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
> > +#ifdef CONFIG_X86_64
> > +	unsigned f_gbpages = F(GBPAGES);
> > +	unsigned f_lm = F(LM);
> > +#else
> > +	unsigned f_gbpages = 0;
> > +	unsigned f_lm = 0;
> > +#endif
> 
> Three too many bare 'unsigned's :-)

Roger that, I'll fix this up.

> > +
> > +	BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
> > +		     sizeof(boot_cpu_data.x86_capability));
> > +
> > +	memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
> > +	       sizeof(kvm_cpu_caps));
> > +
> > +	kvm_cpu_cap_mask(CPUID_1_EDX,
> > +		F(FPU) | F(VME) | F(DE) | F(PSE) |
> > +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> > +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
> > +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> > +		F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
> > +		0 /* Reserved, DS, ACPI */ | F(MMX) |
> > +		F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
> > +		0 /* HTT, TM, Reserved, PBE */
> > +	);
> > +
> > +	kvm_cpu_cap_mask(CPUID_8000_0001_EDX,
> > +		F(FPU) | F(VME) | F(DE) | F(PSE) |
> > +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> > +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
> > +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> > +		F(PAT) | F(PSE36) | 0 /* Reserved */ |
> > +		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
> > +		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
> > +		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW)
> > +	);
> > +
> > +	kvm_cpu_cap_mask(CPUID_1_ECX,
> > +		/* NOTE: MONITOR (and MWAIT) are emulated as NOP,
> > +		 * but *not* advertised to guests via CPUID ! */
> > +		F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
> > +		0 /* DS-CPL, VMX, SMX, EST */ |
> > +		0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> > +		F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> > +		F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
> > +		F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
> > +		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
> > +		F(F16C) | F(RDRAND)
> > +	);
> 
> I would suggest we order things by CPUID_NUM here, i.e.
> 
> CPUID_1_ECX
> CPUID_1_EDX
> CPUID_7_1_EAX
> CPUID_7_0_EBX
> CPUID_7_ECX
> CPUID_7_EDX
> CPUID_D_1_EAX
> ...

Hmm, generally speaking I agree, but I didn't want to change the ordering
in this patch when moving the code.  Throw a patch on top?  Leave as is?
Something else?

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct
  2020-02-24 21:55     ` Sean Christopherson
@ 2020-02-24 23:12       ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 23:12 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Fri, Feb 21, 2020 at 03:58:47PM +0100, Vitaly Kuznetsov wrote:
>> Sean Christopherson <sean.j.christopherson@intel.com> writes:
>> 
>
>> > +			if (!entry)
>> >  				goto out;
>> >  		}
>> >  		break;
>> > @@ -802,22 +814,22 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
>> >  	return r;
>> >  }
>> >  
>> > -static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
>> > -			 int *nent, int maxnent, unsigned int type)
>> > +static int do_cpuid_func(struct kvm_cpuid_array *array, u32 func,
>> > +			 unsigned int type)
>> >  {
>> > -	if (*nent >= maxnent)
>> > +	if (array->nent >= array->maxnent)
>> >  		return -E2BIG;
>> >  
>> >  	if (type == KVM_GET_EMULATED_CPUID)
>> > -		return __do_cpuid_func_emulated(entry, func, nent, maxnent);
>> > +		return __do_cpuid_func_emulated(array, func);
>> 
>> Would it make sense to move 'if (array->nent >= array->maxnent)' check
>> to __do_cpuid_func_emulated() to match do_host_cpuid()?
>
> I considered doing exactly that.  IIRC, I opted not to because at this
> point in the series, the initial call to do_host_cpuid() is something like
> halfway down the massive __do_cpuid_func(), and eliminating the early check
> didn't feel quite right, e.g. there is a fair amount of unnecessary code
> that runs before hitting the first do_host_cpuid().
>
> What if I add a patch towards the end of the series to move this check into
> __do_cpuid_func_emulated(), i.e. after __do_cpuid_func() has been trimmed
> down to size and the early check really is superfluous.
>

Works for me, thanks!
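
FWIW, if I understand correctly, the end state would look roughly like the
below, with the early nent check dropped from do_cpuid_func() (just a
sketch, not the actual patch):

	static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
	{
		if (array->nent >= array->maxnent)
			return -E2BIG;

		/* ... emulated leaf handling ... */
	}

	static int do_cpuid_func(struct kvm_cpuid_array *array, u32 func,
				 unsigned int type)
	{
		if (type == KVM_GET_EMULATED_CPUID)
			return __do_cpuid_func_emulated(array, func);

		return __do_cpuid_func(array, func);
	}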

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking
  2020-02-24 22:57     ` Sean Christopherson
@ 2020-02-24 23:20       ` Vitaly Kuznetsov
  2020-02-24 23:25         ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-24 23:20 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Mon, Feb 24, 2020 at 05:32:54PM +0100, Vitaly Kuznetsov wrote:
>> Sean Christopherson <sean.j.christopherson@intel.com> writes:
>> 

...

>
>> > +
>> > +	BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
>> > +		     sizeof(boot_cpu_data.x86_capability));
>> > +
>> > +	memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
>> > +	       sizeof(kvm_cpu_caps));
>> > +
>> > +	kvm_cpu_cap_mask(CPUID_1_EDX,
>> > +		F(FPU) | F(VME) | F(DE) | F(PSE) |
>> > +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
>> > +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
>> > +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
>> > +		F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
>> > +		0 /* Reserved, DS, ACPI */ | F(MMX) |
>> > +		F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
>> > +		0 /* HTT, TM, Reserved, PBE */
>> > +	);
>> > +
>> > +	kvm_cpu_cap_mask(CPUID_8000_0001_EDX,
>> > +		F(FPU) | F(VME) | F(DE) | F(PSE) |
>> > +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
>> > +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
>> > +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
>> > +		F(PAT) | F(PSE36) | 0 /* Reserved */ |
>> > +		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
>> > +		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
>> > +		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW)
>> > +	);
>> > +
>> > +	kvm_cpu_cap_mask(CPUID_1_ECX,
>> > +		/* NOTE: MONITOR (and MWAIT) are emulated as NOP,
>> > +		 * but *not* advertised to guests via CPUID ! */
>> > +		F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
>> > +		0 /* DS-CPL, VMX, SMX, EST */ |
>> > +		0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
>> > +		F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
>> > +		F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
>> > +		F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
>> > +		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
>> > +		F(F16C) | F(RDRAND)
>> > +	);
>> 
>> I would suggest we order things by CPUID_NUM here, i.e.
>> 
>> CPUID_1_ECX
>> CPUID_1_EDX
>> CPUID_7_1_EAX
>> CPUID_7_0_EBX
>> CPUID_7_ECX
>> CPUID_7_EDX
>> CPUID_D_1_EAX
>> ...
>
> Hmm, generally speaking I agree, but I didn't want to change the ordering
> in this patch when moving the code.  Throw a patch on top?  Leave as is?
> Something else?

My line of thought was: it's not a mechanical "s,const u32
xxx_x86_features =,kvm_cpu_cap_mask...," change, things get moved from
do_cpuid_7_mask() and __do_cpuid_func() so we may as well re-order them,
reviewing-wise it's more or less the same. But honestly, this is very
minor, feel free to leave as-is.

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  2020-02-24 22:08   ` Vitaly Kuznetsov
@ 2020-02-24 23:23     ` Sean Christopherson
  2020-02-25 15:12     ` Paolo Bonzini
  1 sibling, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 23:23 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Mon, Feb 24, 2020 at 11:08:30PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Add accessor(s) for KVM cpu caps and use said accessor to detect
> > hardware support for LA57 instead of manually querying CPUID.
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.h | 13 +++++++++++++
> >  arch/x86/kvm/x86.c   |  2 +-
> >  2 files changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> > index 7b71ae0ca05e..5ce4219d465f 100644
> > --- a/arch/x86/kvm/cpuid.h
> > +++ b/arch/x86/kvm/cpuid.h
> > @@ -274,6 +274,19 @@ static __always_inline void kvm_cpu_cap_set(unsigned x86_feature)
> >  	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
> >  }
> >  
> > +static __always_inline u32 kvm_cpu_cap_get(unsigned x86_feature)
> > +{
> > +	unsigned x86_leaf = x86_feature / 32;
> > +
> > +	reverse_cpuid_check(x86_leaf);
> > +	return kvm_cpu_caps[x86_leaf] & __feature_bit(x86_feature);
> > +}
> > +
> > +static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
> > +{
> > +	return kvm_cpu_cap_get(x86_feature);
> > +}
> 
> I know this works (and I even checked C99 to make sure that it works not
> by accident) but I have to admit that explicit '!!' conversion to bool
> always makes me feel safer :-)

Eh, the flip side of blasting it everywhere is that people then forget why
the pattern exists in the first place and don't understand when it's truly
necessary.
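
For reference, the conversion being relied on here (C99 6.3.1.2: converting
any nonzero scalar to _Bool yields 1), in a standalone form outside of
kernel code:

	#include <stdbool.h>
	#include <stdio.h>

	static bool has_high_bit(unsigned int word)
	{
		/* Implicitly converted to true/false on return. */
		return word & 0x80000000u;
	}

	int main(void)
	{
		/* Prints "1" even though the masked value is 0x80000000. */
		printf("%d\n", has_high_bit(0x80000000u));
		return 0;
	}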

> > +
> >  static __always_inline void kvm_cpu_cap_check_and_set(unsigned x86_feature)
> >  {
> >  	if (boot_cpu_has(x86_feature))
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index c5ed199d6cd9..cb40737187a1 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -912,7 +912,7 @@ static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
> >  {
> >  	u64 reserved_bits = __cr4_reserved_bits(cpu_has, c);
> >  
> > -	if (cpuid_ecx(0x7) & feature_bit(LA57))
> > +	if (kvm_cpu_cap_has(X86_FEATURE_LA57))
> >  		reserved_bits &= ~X86_CR4_LA57;
> >  
> >  	if (kvm_x86_ops->umip_emulated())
> 
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> 
> -- 
> Vitaly
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking
  2020-02-24 23:20       ` Vitaly Kuznetsov
@ 2020-02-24 23:25         ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 23:25 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Tue, Feb 25, 2020 at 12:20:03AM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > On Mon, Feb 24, 2020 at 05:32:54PM +0100, Vitaly Kuznetsov wrote:
> >> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> >> 
> 
> ...
> 
> >
> >> > +
> >> > +	BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
> >> > +		     sizeof(boot_cpu_data.x86_capability));
> >> > +
> >> > +	memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
> >> > +	       sizeof(kvm_cpu_caps));
> >> > +
> >> > +	kvm_cpu_cap_mask(CPUID_1_EDX,
> >> > +		F(FPU) | F(VME) | F(DE) | F(PSE) |
> >> > +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> >> > +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
> >> > +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> >> > +		F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
> >> > +		0 /* Reserved, DS, ACPI */ | F(MMX) |
> >> > +		F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
> >> > +		0 /* HTT, TM, Reserved, PBE */
> >> > +	);
> >> > +
> >> > +	kvm_cpu_cap_mask(CPUID_8000_0001_EDX,
> >> > +		F(FPU) | F(VME) | F(DE) | F(PSE) |
> >> > +		F(TSC) | F(MSR) | F(PAE) | F(MCE) |
> >> > +		F(CX8) | F(APIC) | 0 /* Reserved */ | F(SYSCALL) |
> >> > +		F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
> >> > +		F(PAT) | F(PSE36) | 0 /* Reserved */ |
> >> > +		f_nx | 0 /* Reserved */ | F(MMXEXT) | F(MMX) |
> >> > +		F(FXSR) | F(FXSR_OPT) | f_gbpages | F(RDTSCP) |
> >> > +		0 /* Reserved */ | f_lm | F(3DNOWEXT) | F(3DNOW)
> >> > +	);
> >> > +
> >> > +	kvm_cpu_cap_mask(CPUID_1_ECX,
> >> > +		/* NOTE: MONITOR (and MWAIT) are emulated as NOP,
> >> > +		 * but *not* advertised to guests via CPUID ! */
> >> > +		F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
> >> > +		0 /* DS-CPL, VMX, SMX, EST */ |
> >> > +		0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> >> > +		F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> >> > +		F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
> >> > +		F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
> >> > +		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
> >> > +		F(F16C) | F(RDRAND)
> >> > +	);
> >> 
> >> I would suggest we order things by CPUID_NUM here, i.e.
> >> 
> >> CPUID_1_ECX
> >> CPUID_1_EDX
> >> CPUID_7_1_EAX
> >> CPUID_7_0_EBX
> >> CPUID_7_ECX
> >> CPUID_7_EDX
> >> CPUID_D_1_EAX
> >> ...
> >
> > Hmm, generally speaking I agree, but I didn't want to change the ordering
> > in this patch when moving the code.  Throw a patch on top?  Leave as is?
> > Something else?
> 
> My line of thought was: it's not a mechanical "s,const u32
> xxx_x86_features =,kvm_cpu_cap_mask...," change, things get moved from
> do_cpuid_7_mask() and __do_cpuid_func() so we may as well re-order them,
> reviewing-wise it's more or less the same. But honestly, this is very
> minor, feel free to leave as-is.

Fair enough, I'll throw it into this patch.

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps
       [not found]   ` <87o8tnmwni.fsf@vitty.brq.redhat.com>
@ 2020-02-24 23:31     ` Sean Christopherson
  2020-02-25 13:53       ` Vitaly Kuznetsov
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-24 23:31 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Mon, Feb 24, 2020 at 11:46:09PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Mask kvm_cpu_caps based on host CPUID in preparation for overriding the
> > CPUID results during KVM_GET_SUPPORTED_CPUID instead of doing the
> > masking at runtime.
> >
> > Note, masking may or may not be necessary, e.g. the kernel rarely, if
> > ever, sets real CPUID bits that are not supported by hardware.  But, the
> > code is cheap and only runs once at load, so an abundance of caution is
> > warranted.
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> >
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index ab2a34337588..4416f2422321 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -272,8 +272,22 @@ static __always_inline void cpuid_entry_mask(struct kvm_cpuid_entry2 *entry,
> >  
> >  static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
> >  {
> > +	const struct cpuid_reg cpuid = x86_feature_cpuid(leaf * 32);
> > +	struct kvm_cpuid_entry2 entry;
> > +
> >  	reverse_cpuid_check(leaf);
> >  	kvm_cpu_caps[leaf] &= mask;
> > +
> > +#ifdef CONFIG_KVM_CPUID_AUDIT
> > +	/* Entry needs to be fully populated when auditing is enabled. */
> > +	entry.function = cpuid.function;
> > +	entry.index = cpuid.index;
> > +#endif
> > +
> > +	cpuid_count(cpuid.function, cpuid.index,
> > +		    &entry.eax, &entry.ebx, &entry.ecx, &entry.edx);
> > +
> > +	kvm_cpu_caps[leaf] &= *__cpuid_entry_get_reg(&entry, &cpuid);
> >  }
> >  
> >  void kvm_set_cpu_caps(void)
> 
> If we don't really believe that masking will actually mask anything,
> maybe we should move it under '#ifdef CONFIG_KVM_CPUID_AUDIT'? And/or 
> add a WARN_ON()?

I'm not opposed to trying that, but I'd definitely want to do it as a
separate patch, or maybe even let it stew separately in kvm/queue for a
few cycles.
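
If/when that happens, the WARN_ON flavor could look something like the
below (rough sketch, and it glosses over populating entry.function/index
for the CONFIG_KVM_CPUID_AUDIT case):

	static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
	{
		const struct cpuid_reg cpuid = x86_feature_cpuid(leaf * 32);
		struct kvm_cpuid_entry2 entry;
		u32 host;

		reverse_cpuid_check(leaf);
		kvm_cpu_caps[leaf] &= mask;

		cpuid_count(cpuid.function, cpuid.index,
			    &entry.eax, &entry.ebx, &entry.ecx, &entry.edx);
		host = *__cpuid_entry_get_reg(&entry, &cpuid);

		/* Warn if KVM would advertise a feature the host CPU lacks. */
		WARN_ON_ONCE(kvm_cpu_caps[leaf] & ~host);

		kvm_cpu_caps[leaf] &= host;
	}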

> The patch itself looks good, so:
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> 
> -- 
> Vitaly
> 

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps
  2020-02-24 23:31     ` Sean Christopherson
@ 2020-02-25 13:53       ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 13:53 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Mon, Feb 24, 2020 at 11:46:09PM +0100, Vitaly Kuznetsov wrote:
>> 
>> If we don't really believe that masking will actually mask anything,
>> maybe we should move it under '#ifdef CONFIG_KVM_CPUID_AUDIT'? And/or 
>> add a WARN_ON()?
>
> I'm not opposed to trying that, but I'd definitely want to do it as a
> separate patch, or maybe even let it stew separately in kvm/queue for a
> few cycles.
>

Sounds like a good idea)

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 50/61] KVM: x86: Set emulated/transmuted feature bits via kvm_cpu_caps
  2020-02-01 18:52 ` [PATCH 50/61] KVM: x86: Set emulated/transmuted feature bits via kvm_cpu_caps Sean Christopherson
@ 2020-02-25 13:59   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 13:59 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Set emulated and transmuted (set based on other features) feature bits
> via kvm_cpu_caps now that the CPUID output for KVM_GET_SUPPORTED_CPUID
> is directly overridden with kvm_cpu_caps.
>
> Note, VMX emulation of UMIP already sets kvm_cpu_caps.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c   | 72 +++++++++++++++++++++---------------------
>  arch/x86/kvm/svm.c     | 10 +++---
>  arch/x86/kvm/vmx/vmx.c | 13 +-------
>  3 files changed, 42 insertions(+), 53 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 871c0bd04e19..a37cb6fda979 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -341,6 +341,8 @@ void kvm_set_cpu_caps(void)
>  		0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
>  		F(F16C) | F(RDRAND)
>  	);
> +	/* KVM emulates x2apic in software irrespective of host support. */
> +	kvm_cpu_cap_set(X86_FEATURE_X2APIC);
>  
>  	kvm_cpu_cap_mask(CPUID_7_0_EBX,
>  		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
> @@ -366,6 +368,17 @@ void kvm_set_cpu_caps(void)
>  		F(MD_CLEAR)
>  	);
>  
> +	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
> +	kvm_cpu_cap_set(X86_FEATURE_TSC_ADJUST);
> +	kvm_cpu_cap_set(X86_FEATURE_ARCH_CAPABILITIES);
> +
> +	if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
> +		kvm_cpu_cap_set(X86_FEATURE_SPEC_CTRL);
> +	if (boot_cpu_has(X86_FEATURE_STIBP))
> +		kvm_cpu_cap_set(X86_FEATURE_INTEL_STIBP);
> +	if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
> +		kvm_cpu_cap_set(X86_FEATURE_SPEC_CTRL_SSBD);
> +
>  	kvm_cpu_cap_mask(CPUID_7_1_EAX,
>  		F(AVX512_BF16)
>  	);
> @@ -388,6 +401,29 @@ void kvm_set_cpu_caps(void)
>  		F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON)
>  	);
>  
> +	/*
> +	 * AMD has separate bits for each SPEC_CTRL bit.
> +	 * arch/x86/kernel/cpu/bugs.c is kind enough to
> +	 * record that in cpufeatures so use them.
> +	 */
> +	if (boot_cpu_has(X86_FEATURE_IBPB))
> +		kvm_cpu_cap_set(X86_FEATURE_AMD_IBPB);
> +	if (boot_cpu_has(X86_FEATURE_IBRS))
> +		kvm_cpu_cap_set(X86_FEATURE_AMD_IBRS);
> +	if (boot_cpu_has(X86_FEATURE_STIBP))
> +		kvm_cpu_cap_set(X86_FEATURE_AMD_STIBP);
> +	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> +		kvm_cpu_cap_set(X86_FEATURE_AMD_SSBD);
> +	if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
> +		kvm_cpu_cap_set(X86_FEATURE_AMD_SSB_NO);
> +	/*
> +	 * The preference is to use SPEC CTRL MSR instead of the
> +	 * VIRT_SPEC MSR.
> +	 */
> +	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
> +	    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
> +		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
> +
>  	kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
>  		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
>  		F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
> @@ -490,9 +526,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  	case 1:
>  		cpuid_entry_override(entry, CPUID_1_EDX);
>  		cpuid_entry_override(entry, CPUID_1_ECX);
> -		/* we support x2apic emulation even if host does not support
> -		 * it since we emulate x2apic in software */
> -		cpuid_entry_set(entry, X86_FEATURE_X2APIC);
>  		break;
>  	case 2:
>  		/*
> @@ -547,17 +580,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		cpuid_entry_override(entry, CPUID_7_ECX);
>  		cpuid_entry_override(entry, CPUID_7_EDX);
>  
> -		/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
> -		cpuid_entry_set(entry, X86_FEATURE_TSC_ADJUST);
> -		cpuid_entry_set(entry, X86_FEATURE_ARCH_CAPABILITIES);
> -
> -		if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
> -			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL);
> -		if (boot_cpu_has(X86_FEATURE_STIBP))
> -			cpuid_entry_set(entry, X86_FEATURE_INTEL_STIBP);
> -		if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
> -			cpuid_entry_set(entry, X86_FEATURE_SPEC_CTRL_SSBD);
> -
>  		/* KVM only supports 0x7.0 and 0x7.1, capped above via min(). */
>  		if (entry->eax == 1) {
>  			entry = do_host_cpuid(array, function, 1);
> @@ -729,28 +751,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		entry->eax = g_phys_as | (virt_as << 8);
>  		entry->edx = 0;
>  		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
> -		/*
> -		 * AMD has separate bits for each SPEC_CTRL bit.
> -		 * arch/x86/kernel/cpu/bugs.c is kind enough to
> -		 * record that in cpufeatures so use them.
> -		 */
> -		if (boot_cpu_has(X86_FEATURE_IBPB))
> -			cpuid_entry_set(entry, X86_FEATURE_AMD_IBPB);
> -		if (boot_cpu_has(X86_FEATURE_IBRS))
> -			cpuid_entry_set(entry, X86_FEATURE_AMD_IBRS);
> -		if (boot_cpu_has(X86_FEATURE_STIBP))
> -			cpuid_entry_set(entry, X86_FEATURE_AMD_STIBP);
> -		if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> -			cpuid_entry_set(entry, X86_FEATURE_AMD_SSBD);
> -		if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
> -			cpuid_entry_set(entry, X86_FEATURE_AMD_SSB_NO);
> -		/*
> -		 * The preference is to use SPEC CTRL MSR instead of the
> -		 * VIRT_SPEC MSR.
> -		 */
> -		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
> -		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
> -			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
>  		break;
>  	}
>  	case 0x80000019:
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index e1ed5726964c..f4434816dcdf 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1360,6 +1360,11 @@ static __init void svm_set_cpu_caps(void)
>  	if (nested)
>  		kvm_cpu_cap_set(X86_FEATURE_SVM);
>  
> +	/* CPUID 0x80000008 */
> +	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
> +	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
> +		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
> +
>  	/* CPUID 0x8000000A */
>  	/* Support next_rip if host supports it */
>  	kvm_cpu_cap_check_and_set(X86_FEATURE_NRIPS);
> @@ -6053,11 +6058,6 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
>  static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
>  	switch (entry->function) {
> -	case 0x80000008:
> -		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
> -		    boot_cpu_has(X86_FEATURE_AMD_SSBD))
> -			cpuid_entry_set(entry, X86_FEATURE_VIRT_SSBD);
> -		break;
>  	case 0x8000000A:
>  		entry->eax = 1; /* SVM revision 1 */
>  		entry->ebx = 8; /* Lets support 8 ASIDs in case we add proper
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index cd5a624610c9..2a1df1b714db 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7101,18 +7101,7 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  
>  static void vmx_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  {
> -	switch (entry->function) {
> -	case 0x7:
> -		/*
> -		 * UMIP needs to be manually set even though vmx_set_cpu_caps()
> -		 * also sets UMIP since do_host_cpuid() will drop it.
> -		 */
> -		if (vmx_umip_emulated())
> -			cpuid_entry_set(entry, X86_FEATURE_UMIP);
> -		break;
> -	default:
> -		break;
> -	}
> +
>  }

Ok, feel free to ignore my previous comment about the need to document
what goes to caps and what to vmx_supported_cpuid() as the answer is
simple: everything goes to caps (at least on VMX) :-)

>  
>  static __init void vmx_set_cpu_caps(void)

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 51/61] KVM: x86: Use kvm_cpu_caps to detect Intel PT support
  2020-02-01 18:52 ` [PATCH 51/61] KVM: x86: Use kvm_cpu_caps to detect Intel PT support Sean Christopherson
@ 2020-02-25 14:06   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:06 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Check for Intel PT using kvm_cpu_cap_has() to pave the way toward
> eliminating ->pt_supported().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index a37cb6fda979..3d287fc6eb6e 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -507,7 +507,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  {
>  	struct kvm_cpuid_entry2 *entry;
>  	int r, i, max_idx;
> -	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
>  
>  	/* all calls to cpuid_count() should be made on the same cpu */
>  	get_cpu();
> @@ -687,7 +686,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		break;
>  	/* Intel PT */
>  	case 0x14:
> -		if (!f_intel_pt) {
> +		if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT)) {
>  			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
>  			break;
>  		}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 52/61] KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support
  2020-02-01 18:52 ` [PATCH 52/61] KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support Sean Christopherson
@ 2020-02-25 14:08   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:08 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Check for MSR_TSC_AUX virtualization via kvm_cpu_cap_has() and drop
> ->rdtscp_supported().
>
> Note, vmx_rdtscp_supported() needs to hang around a tiny bit longer due to
> other usage in VMX code.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 -
>  arch/x86/kvm/svm.c              | 6 ------
>  arch/x86/kvm/vmx/vmx.c          | 3 ---
>  arch/x86/kvm/x86.c              | 2 +-
>  4 files changed, 1 insertion(+), 11 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 113b138a0347..1dd5ac8a2136 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1143,7 +1143,6 @@ struct kvm_x86_ops {
>  	int (*get_tdp_level)(struct kvm_vcpu *vcpu);
>  	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
>  	int (*get_lpage_level)(void);
> -	bool (*rdtscp_supported)(void);
>  
>  	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
>  
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f4434816dcdf..6dd9c810c0dc 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6074,11 +6074,6 @@ static int svm_get_lpage_level(void)
>  	return PT_PDPE_LEVEL;
>  }
>  
> -static bool svm_rdtscp_supported(void)
> -{
> -	return boot_cpu_has(X86_FEATURE_RDTSCP);
> -}
> -
>  static bool svm_pt_supported(void)
>  {
>  	return false;
> @@ -7443,7 +7438,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  
>  	.cpuid_update = svm_cpuid_update,
>  
> -	.rdtscp_supported = svm_rdtscp_supported,
>  	.pt_supported = svm_pt_supported,
>  
>  	.set_supported_cpuid = svm_set_supported_cpuid,
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 2a1df1b714db..c3577f11f538 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7870,9 +7870,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  	.get_lpage_level = vmx_get_lpage_level,
>  
>  	.cpuid_update = vmx_cpuid_update,
> -
> -	.rdtscp_supported = vmx_rdtscp_supported,
> -
>  	.set_supported_cpuid = vmx_set_supported_cpuid,
>  
>  	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a6d5f22c7ef6..e4353c03269c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5246,7 +5246,7 @@ static void kvm_init_msr_list(void)
>  				continue;
>  			break;
>  		case MSR_TSC_AUX:
> -			if (!kvm_x86_ops->rdtscp_supported())
> +			if (!kvm_cpu_cap_has(X86_FEATURE_RDTSCP))
>  				continue;
>  			break;
>  		case MSR_IA32_RTIT_CTL:

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 53/61] KVM: VMX: Directly use VMX capabilities helper to detect RDTSCP support
  2020-02-01 18:52 ` [PATCH 53/61] KVM: VMX: Directly use VMX capabilities helper to detect RDTSCP support Sean Christopherson
@ 2020-02-25 14:10   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:10 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use cpu_has_vmx_rdtscp() directly when computing secondary exec controls
> and drop the now defunct vmx_rdtscp_supported().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 7 +------
>  1 file changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index c3577f11f538..98d54cfa0cbe 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1651,11 +1651,6 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
>  	vmx_clear_hlt(vcpu);
>  }
>  
> -static bool vmx_rdtscp_supported(void)
> -{
> -	return cpu_has_vmx_rdtscp();
> -}
> -
>  /*
>   * Swap MSR entry in host/guest MSR entry array.
>   */
> @@ -4051,7 +4046,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
>  		}
>  	}
>  
> -	if (vmx_rdtscp_supported()) {
> +	if (cpu_has_vmx_rdtscp()) {
>  		bool rdtscp_enabled = guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP);
>  		if (!rdtscp_enabled)
>  			exec_control &= ~SECONDARY_EXEC_RDTSCP;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 54/61] KVM: x86: Check for Intel PT MSR virtualization using KVM cpu caps
  2020-02-01 18:52 ` [PATCH 54/61] KVM: x86: Check for Intel PT MSR virtualization using KVM cpu caps Sean Christopherson
@ 2020-02-25 14:11   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use kvm_cpu_cap_has() to check for Intel PT when processing the list of
> virtualized MSRs to pave the way toward removing ->pt_supported().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/x86.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e4353c03269c..9d38dcdbb613 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5251,23 +5251,23 @@ static void kvm_init_msr_list(void)
>  			break;
>  		case MSR_IA32_RTIT_CTL:
>  		case MSR_IA32_RTIT_STATUS:
> -			if (!kvm_x86_ops->pt_supported())
> +			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT))
>  				continue;
>  			break;
>  		case MSR_IA32_RTIT_CR3_MATCH:
> -			if (!kvm_x86_ops->pt_supported() ||
> +			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT) ||
>  			    !intel_pt_validate_hw_cap(PT_CAP_cr3_filtering))
>  				continue;
>  			break;
>  		case MSR_IA32_RTIT_OUTPUT_BASE:
>  		case MSR_IA32_RTIT_OUTPUT_MASK:
> -			if (!kvm_x86_ops->pt_supported() ||
> +			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT) ||
>  				(!intel_pt_validate_hw_cap(PT_CAP_topa_output) &&
>  				 !intel_pt_validate_hw_cap(PT_CAP_single_range_output)))
>  				continue;
>  			break;
>  		case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B: {
> -			if (!kvm_x86_ops->pt_supported() ||
> +			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT) ||
>  				msrs_to_save_all[i] - MSR_IA32_RTIT_ADDR0_A >=
>  				intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2)
>  				continue;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 55/61] KVM: VMX: Directly query Intel PT mode when refreshing PMUs
  2020-02-01 18:52 ` [PATCH 55/61] KVM: VMX: Directly query Intel PT mode when refreshing PMUs Sean Christopherson
@ 2020-02-25 14:16   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:16 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use vmx_pt_mode_is_host_guest() in intel_pmu_refresh() instead of
> bouncing through kvm_x86_ops->pt_supported, and remove ->pt_supported()
> as the PMU code was the last remaining user.
>
> Opportunistically clean up the wording of a comment that referenced
> kvm_x86_ops->pt_supported().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 --
>  arch/x86/kvm/svm.c              | 7 -------
>  arch/x86/kvm/vmx/pmu_intel.c    | 2 +-
>  arch/x86/kvm/vmx/vmx.c          | 6 ------
>  arch/x86/kvm/x86.c              | 7 +++----
>  5 files changed, 4 insertions(+), 20 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 1dd5ac8a2136..a8bae9d88bce 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1162,8 +1162,6 @@ struct kvm_x86_ops {
>  	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu,
>  		enum exit_fastpath_completion *exit_fastpath);
>  
> -	bool (*pt_supported)(void);
> -
>  	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
>  	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 6dd9c810c0dc..a27f83f7521c 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -6074,11 +6074,6 @@ static int svm_get_lpage_level(void)
>  	return PT_PDPE_LEVEL;
>  }
>  
> -static bool svm_pt_supported(void)
> -{
> -	return false;
> -}
> -
>  static bool svm_has_wbinvd_exit(void)
>  {
>  	return true;
> @@ -7438,8 +7433,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  
>  	.cpuid_update = svm_cpuid_update,
>  
> -	.pt_supported = svm_pt_supported,
> -
>  	.set_supported_cpuid = svm_set_supported_cpuid,
>  
>  	.has_wbinvd_exit = svm_has_wbinvd_exit,
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 34a3a17bb6d7..d8f5cb312b9d 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -330,7 +330,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
>  	pmu->global_ovf_ctrl_mask = pmu->global_ctrl_mask
>  			& ~(MSR_CORE_PERF_GLOBAL_OVF_CTRL_OVF_BUF |
>  			    MSR_CORE_PERF_GLOBAL_OVF_CTRL_COND_CHGD);
> -	if (kvm_x86_ops->pt_supported())
> +	if (vmx_pt_mode_is_host_guest())
>  		pmu->global_ovf_ctrl_mask &=
>  				~MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI;
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 98d54cfa0cbe..e6284b6aac56 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6283,11 +6283,6 @@ static bool vmx_has_emulated_msr(int index)
>  	}
>  }
>  
> -static bool vmx_pt_supported(void)
> -{
> -	return vmx_pt_mode_is_host_guest();
> -}
> -
>  static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
>  {
>  	u32 exit_intr_info;
> @@ -7876,7 +7871,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  
>  	.check_intercept = vmx_check_intercept,
>  	.handle_exit_irqoff = vmx_handle_exit_irqoff,
> -	.pt_supported = vmx_pt_supported,
>  
>  	.request_immediate_exit = vmx_request_immediate_exit,
>  
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9d38dcdbb613..144143a57d0b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2805,10 +2805,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
>  			return 1;
>  		/*
> -		 * We do support PT if kvm_x86_ops->pt_supported(), but we do
> -		 * not support IA32_XSS[bit 8]. Guests will have to use
> -		 * RDMSR/WRMSR rather than XSAVES/XRSTORS to save/restore PT
> -		 * MSRs.
> +		 * KVM supports exposing PT to the guest, but does not support
> +		 * IA32_XSS[bit 8]. Guests have to use RDMSR/WRMSR rather than
> +		 * XSAVES/XRSTORS to save/restore PT MSRs.

So the responsibility shifts from vague 'we' to KVM. There should be
a juridical term for that :-)

>  		 */
>  		if (data != 0)
>  			return 1;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 56/61] KVM: SVM: Refactor logging of NPT enabled/disabled
  2020-02-01 18:52 ` [PATCH 56/61] KVM: SVM: Refactor logging of NPT enabled/disabled Sean Christopherson
@ 2020-02-25 14:21   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:21 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Tweak SVM's logging of NPT enabled/disabled to handle the logging in a
> single pr_info() in preparation for merging kvm_enable_tdp() and
> kvm_disable_tdp() into a single function.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/svm.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index a27f83f7521c..80962c1eea8f 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1440,16 +1440,14 @@ static __init int svm_hardware_setup(void)
>  	if (!boot_cpu_has(X86_FEATURE_NPT))
>  		npt_enabled = false;
>  
> -	if (npt_enabled && !npt) {
> -		printk(KERN_INFO "kvm: Nested Paging disabled\n");
> +	if (npt_enabled && !npt)
>  		npt_enabled = false;
> -	}
>  
> -	if (npt_enabled) {
> -		printk(KERN_INFO "kvm: Nested Paging enabled\n");
> +	if (npt_enabled)
>  		kvm_enable_tdp();
> -	} else
> +	else
>  		kvm_disable_tdp();
> +	pr_info("kvm: Nested Paging %sabled\n", npt_enabled ? "en" : "dis");
>  
>  	if (nrips) {
>  		if (!boot_cpu_has(X86_FEATURE_NRIPS))

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 57/61] KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function
  2020-02-01 18:52 ` [PATCH 57/61] KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function Sean Christopherson
@ 2020-02-25 14:27   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Combine kvm_enable_tdp() and kvm_disable_tdp() into a single function,
> kvm_configure_mmu(), in preparation for doing additional configuration
> during hardware setup.  And because having separate helpers is silly.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  3 +--
>  arch/x86/kvm/mmu/mmu.c          | 13 +++----------
>  arch/x86/kvm/svm.c              |  5 +----
>  arch/x86/kvm/vmx/vmx.c          |  4 +---
>  4 files changed, 6 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index a8bae9d88bce..1a13a53bbaeb 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1494,8 +1494,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
>  void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
>  void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush);
>  
> -void kvm_enable_tdp(void);
> -void kvm_disable_tdp(void);
> +void kvm_configure_mmu(bool enable_tdp);
>  
>  static inline gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
>  				  struct x86_exception *exception)
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 84eeb61d06aa..08c80c7c88d4 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5541,18 +5541,11 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_invpcid_gva);
>  
> -void kvm_enable_tdp(void)
> +void kvm_configure_mmu(bool enable_tdp)
>  {
> -	tdp_enabled = true;
> +	tdp_enabled = enable_tdp;
>  }
> -EXPORT_SYMBOL_GPL(kvm_enable_tdp);
> -
> -void kvm_disable_tdp(void)
> -{
> -	tdp_enabled = false;
> -}
> -EXPORT_SYMBOL_GPL(kvm_disable_tdp);
> -
> +EXPORT_SYMBOL_GPL(kvm_configure_mmu);
>  
>  /* The return value indicates if tlb flush on all vcpus is needed. */
>  typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 80962c1eea8f..19dc74ae1efb 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1443,10 +1443,7 @@ static __init int svm_hardware_setup(void)
>  	if (npt_enabled && !npt)
>  		npt_enabled = false;
>  
> -	if (npt_enabled)
> -		kvm_enable_tdp();
> -	else
> -		kvm_disable_tdp();
> +	kvm_configure_mmu(npt_enabled);
>  	pr_info("kvm: Nested Paging %sabled\n", npt_enabled ? "en" : "dis");
>  
>  	if (nrips) {
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e6284b6aac56..59206c22b5e1 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5295,7 +5295,6 @@ static void vmx_enable_tdp(void)
>  		VMX_EPT_RWX_MASK, 0ull);
>  
>  	ept_set_mmio_spte_mask();
> -	kvm_enable_tdp();
>  }
>  
>  /*
> @@ -7678,8 +7677,7 @@ static __init int hardware_setup(void)
>  
>  	if (enable_ept)
>  		vmx_enable_tdp();
> -	else
> -		kvm_disable_tdp();
> +	kvm_configure_mmu(enable_ept);
>  
>  	/*
>  	 * Only enable PML when hardware supports PML feature, and both EPT

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries
  2020-02-03 15:59     ` Sean Christopherson
@ 2020-02-25 14:36       ` Paolo Bonzini
  0 siblings, 0 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 14:36 UTC (permalink / raw)
  To: Sean Christopherson, Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 03/02/20 16:59, Sean Christopherson wrote:
> 
>> In particular, here the change is both the
>> return value and the fact that we don't do copy_to_user() anymore so I
>> think it's possible to meet a userspace which is going to get broken by
>> the change.
> Ugh, yeah, it would be possible.  Qemu (retries), CrosVM (hardcoded to
> 256 entries) and Firecracker (doesn't use the ioctl()) are all ok,
> hopefully all other VMMs used in production environments follow suit.
> 

Also: lkvm and selftests both hardcode the limit to 100.

Both would be broken by this change, but as long as the limit is < 100
now, it is fine to change.
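
For completeness, the pattern that keeps a VMM correct no matter where the
limit lands is the usual grow-and-retry on E2BIG, roughly (sketch; helper
name made up, error handling trimmed):

	struct kvm_cpuid2 *get_supported_cpuid(int kvm_fd)
	{
		struct kvm_cpuid2 *cpuid;
		int nent = 100;

		for (;;) {
			cpuid = calloc(1, sizeof(*cpuid) +
					  nent * sizeof(struct kvm_cpuid_entry2));
			cpuid->nent = nent;
			if (!ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid))
				return cpuid;
			free(cpuid);
			if (errno != E2BIG)
				return NULL;
			nent *= 2;
		}
	}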

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper
  2020-02-07 19:53     ` Sean Christopherson
@ 2020-02-25 14:37       ` Paolo Bonzini
  2020-02-25 15:09         ` Vitaly Kuznetsov
  0 siblings, 1 reply; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 14:37 UTC (permalink / raw)
  To: Sean Christopherson, Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 07/02/20 20:53, Sean Christopherson wrote:
> 
>> 2) Return -EINVAL instead.
> I agree that it _should_ be -EINVAL, but I just don't think it's worth
> the possibility of breaking (stupid) userspace that was doing something
> like:
> 
> 	for (i = 0; i < max_cpuid_size; i++) {
> 		cpuid.nent = i;
> 
> 		r = ioctl(fd, KVM_GET_SUPPORTED_CPUID, &cpuid);
> 		if (!r || r != -E2BIG)
> 			break;
> 	}
> 

Apart from the stupidity of the above case, why would it be EINVAL?

I can do the change to drop the initializer when applying.

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup
  2020-02-01 18:52 ` [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup Sean Christopherson
@ 2020-02-25 14:43   ` Vitaly Kuznetsov
  2020-02-25 21:01     ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Configure the max page level during hardware setup to avoid a retpoline
> in the page fault handler.  Drop ->get_lpage_level() as the page fault
> handler was the last user.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  3 +--
>  arch/x86/kvm/mmu/mmu.c          | 13 +++++++++++--
>  arch/x86/kvm/svm.c              |  9 +--------
>  arch/x86/kvm/vmx/vmx.c          | 24 +++++++++++-------------
>  4 files changed, 24 insertions(+), 25 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 1a13a53bbaeb..4165d3ef11e4 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1142,7 +1142,6 @@ struct kvm_x86_ops {
>  	int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
>  	int (*get_tdp_level)(struct kvm_vcpu *vcpu);
>  	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
> -	int (*get_lpage_level)(void);
>  
>  	void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
>  
> @@ -1494,7 +1493,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
>  void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
>  void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush);
>  
> -void kvm_configure_mmu(bool enable_tdp);
> +void kvm_configure_mmu(bool enable_tdp, int tdp_page_level);
>  
>  static inline gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
>  				  struct x86_exception *exception)
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 08c80c7c88d4..1aedb71e7a20 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -86,6 +86,8 @@ __MODULE_PARM_TYPE(nx_huge_pages_recovery_ratio, "uint");
>   */
>  bool tdp_enabled = false;
>  
> +static int max_page_level __read_mostly;
> +
>  enum {
>  	AUDIT_PRE_PAGE_FAULT,
>  	AUDIT_POST_PAGE_FAULT,
> @@ -3280,7 +3282,7 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
>  	if (!slot)
>  		return PT_PAGE_TABLE_LEVEL;
>  
> -	max_level = min(max_level, kvm_x86_ops->get_lpage_level());
> +	max_level = min(max_level, max_page_level);
>  	for ( ; max_level > PT_PAGE_TABLE_LEVEL; max_level--) {
>  		linfo = lpage_info_slot(gfn, slot, max_level);
>  		if (!linfo->disallow_lpage)
> @@ -5541,9 +5543,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_invpcid_gva);
>  
> -void kvm_configure_mmu(bool enable_tdp)
> +void kvm_configure_mmu(bool enable_tdp, int tdp_page_level)
>  {
>  	tdp_enabled = enable_tdp;
> +
> +	if (tdp_enabled)
> +		max_page_level = tdp_page_level;
> +	else if (boot_cpu_has(X86_FEATURE_GBPAGES))
> +		max_page_level = PT_PDPE_LEVEL;
> +	else
> +		max_page_level = PT_DIRECTORY_LEVEL;
>  }
>  EXPORT_SYMBOL_GPL(kvm_configure_mmu);
>  
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 19dc74ae1efb..76c24b3491f6 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1443,7 +1443,7 @@ static __init int svm_hardware_setup(void)
>  	if (npt_enabled && !npt)
>  		npt_enabled = false;
>  
> -	kvm_configure_mmu(npt_enabled);
> +	kvm_configure_mmu(npt_enabled, PT_PDPE_LEVEL);
>  	pr_info("kvm: Nested Paging %sabled\n", npt_enabled ? "en" : "dis");
>  
>  	if (nrips) {
> @@ -6064,11 +6064,6 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>  	}
>  }
>  
> -static int svm_get_lpage_level(void)
> -{
> -	return PT_PDPE_LEVEL;
> -}

I've probably missed something but before the change, get_lpage_level()
on AMD was always returning PT_PDPE_LEVEL, but after the change and when
NPT is disabled, we set max_page_level to either PT_PDPE_LEVEL (when
boot_cpu_has(X86_FEATURE_GBPAGES)) or PT_DIRECTORY_LEVEL
(otherwise). This sounds like a (functional) change, unless we think that
boot_cpu_has(X86_FEATURE_GBPAGES) is always true on AMD.

> -
>  static bool svm_has_wbinvd_exit(void)
>  {
>  	return true;
> @@ -7424,8 +7419,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  
>  	.get_exit_info = svm_get_exit_info,
>  
> -	.get_lpage_level = svm_get_lpage_level,
> -
>  	.cpuid_update = svm_cpuid_update,
>  
>  	.set_supported_cpuid = svm_set_supported_cpuid,
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 59206c22b5e1..3ad24ca692a6 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6889,15 +6889,6 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  	return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat;
>  }
>  
> -static int vmx_get_lpage_level(void)
> -{
> -	if (enable_ept && !cpu_has_vmx_ept_1g_page())
> -		return PT_DIRECTORY_LEVEL;
> -	else
> -		/* For shadow and EPT supported 1GB page */
> -		return PT_PDPE_LEVEL;
> -}
> -
>  static void vmcs_set_secondary_exec_control(struct vcpu_vmx *vmx)
>  {
>  	/*
> @@ -7584,7 +7575,7 @@ static __init int hardware_setup(void)
>  {
>  	unsigned long host_bndcfgs;
>  	struct desc_ptr dt;
> -	int r, i;
> +	int r, i, ept_lpage_level;
>  
>  	rdmsrl_safe(MSR_EFER, &host_efer);
>  
> @@ -7677,7 +7668,16 @@ static __init int hardware_setup(void)
>  
>  	if (enable_ept)
>  		vmx_enable_tdp();
> -	kvm_configure_mmu(enable_ept);
> +
> +	if (!enable_ept)
> +		ept_lpage_level = 0;
> +	else if (cpu_has_vmx_ept_1g_page())
> +		ept_lpage_level = PT_PDPE_LEVEL;
> +	else if (cpu_has_vmx_ept_2m_page())
> +		ept_lpage_level = PT_DIRECTORY_LEVEL;
> +	else
> +		ept_lpage_level = PT_PAGE_TABLE_LEVEL;
> +	kvm_configure_mmu(enable_ept, ept_lpage_level);
>  
>  	/*
>  	 * Only enable PML when hardware supports PML feature, and both EPT
> @@ -7855,8 +7855,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  
>  	.get_exit_info = vmx_get_exit_info,
>  
> -	.get_lpage_level = vmx_get_lpage_level,
> -
>  	.cpuid_update = vmx_cpuid_update,
>  	.set_supported_cpuid = vmx_set_supported_cpuid,

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 19/61] KVM: VMX: Add helpers to query Intel PT mode
  2020-02-24 22:18     ` Sean Christopherson
@ 2020-02-25 14:54       ` Paolo Bonzini
  2020-03-03 22:41         ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 14:54 UTC (permalink / raw)
  To: Sean Christopherson, Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 24/02/20 23:18, Sean Christopherson wrote:
>>>  {
>>>  	u32 vmexit_ctrl = vmcs_config.vmexit_ctrl;
>>> -	if (pt_mode == PT_MODE_SYSTEM)
>>> +	if (vmx_pt_mode_is_system())
>> ... and here? I.e. to cover the currently unsupported 'host-only' mode.
> Hmm, good question.  I don't think so?  On VM-Enter, RTIT_CTL would need to
> be loaded to disable PT.  Clearing RTIT_CTL on VM-Exit would be redundant
> at that point[1].  And AIUI, the PIP for VM-Enter/VM-Exit isn't needed
> because there is no context switch from the decoder's perspective.

How does host-only mode differ from "host-guest but don't expose PT to
the guest"?  So I would say that host-only mode is a special case of
host-guest, not of system mode.

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 59/61] KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage
  2020-02-01 18:52 ` [PATCH 59/61] KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage Sean Christopherson
@ 2020-02-25 14:55   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:55 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Stop propagating MMU large page support into a memslot's disallow_lpage
> now that the MMU's max_page_level handles the scenario where VMX's EPT is
> enabled and EPT doesn't support 2M pages.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 3 ---
>  arch/x86/kvm/x86.c     | 6 ++----
>  2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3ad24ca692a6..e349689ac0cf 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7633,9 +7633,6 @@ static __init int hardware_setup(void)
>  	if (!cpu_has_vmx_tpr_shadow())
>  		kvm_x86_ops->update_cr8_intercept = NULL;
>  
> -	if (enable_ept && !cpu_has_vmx_ept_2m_page())
> -		kvm_disable_largepages();
> -
>  #if IS_ENABLED(CONFIG_HYPERV)
>  	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
>  	    && enable_ept) {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 144143a57d0b..b40488fd2969 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9884,11 +9884,9 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
>  		ugfn = slot->userspace_addr >> PAGE_SHIFT;
>  		/*
>  		 * If the gfn and userspace address are not aligned wrt each
> -		 * other, or if explicitly asked to, disable large page
> -		 * support for this slot
> +		 * other, disable large page support for this slot.
>  		 */
> -		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
> -		    !kvm_largepages_enabled()) {
> +		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
>  			unsigned long j;
>  
>  			for (j = 0; j < lpages; ++j)

MMU code always explodes my brain; this left me wondering why the original
vmx_get_lpage_level() wasn't adjusted before...

FWIW,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 60/61] KVM: Drop largepages_enabled and its accessor/mutator
  2020-02-01 18:52 ` [PATCH 60/61] KVM: Drop largepages_enabled and its accessor/mutator Sean Christopherson
@ 2020-02-25 14:56   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 14:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Drop largepages_enabled, kvm_largepages_enabled() and
> kvm_disable_largepages() now that all users are gone.
>
> Note, largepages_enabled was an x86-only flag that got left in common
> KVM code when KVM gained support for multiple architectures.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  include/linux/kvm_host.h |  2 --
>  virt/kvm/kvm_main.c      | 13 -------------
>  2 files changed, 15 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6d5331b0d937..50105b5c6370 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -683,8 +683,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  				const struct kvm_memory_slot *old,
>  				const struct kvm_memory_slot *new,
>  				enum kvm_mr_change change);
> -bool kvm_largepages_enabled(void);
> -void kvm_disable_largepages(void);
>  /* flush all memory translations */
>  void kvm_arch_flush_shadow_all(struct kvm *kvm);
>  /* flush memory translations pointing to 'slot' */
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index eb3709d55139..5851a8c27a28 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -149,8 +149,6 @@ static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
>  __visible bool kvm_rebooting;
>  EXPORT_SYMBOL_GPL(kvm_rebooting);
>  
> -static bool largepages_enabled = true;
> -
>  #define KVM_EVENT_CREATE_VM 0
>  #define KVM_EVENT_DESTROY_VM 1
>  static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
> @@ -1368,17 +1366,6 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
>  EXPORT_SYMBOL_GPL(kvm_clear_dirty_log_protect);
>  #endif
>  
> -bool kvm_largepages_enabled(void)
> -{
> -	return largepages_enabled;
> -}
> -
> -void kvm_disable_largepages(void)
> -{
> -	largepages_enabled = false;
> -}
> -EXPORT_SYMBOL_GPL(kvm_disable_largepages);
> -
>  struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
>  {
>  	return __gfn_to_memslot(kvm_memslots(kvm), gfn);

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups
  2020-02-24 13:54   ` Vitaly Kuznetsov
  2020-02-24 22:46     ` Sean Christopherson
@ 2020-02-25 15:00     ` Paolo Bonzini
  1 sibling, 0 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 15:00 UTC (permalink / raw)
  To: Vitaly Kuznetsov, Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 24/02/20 14:54, Vitaly Kuznetsov wrote:
>> --- a/arch/x86/kvm/cpuid.h
>> +++ b/arch/x86/kvm/cpuid.h
>> @@ -98,6 +98,11 @@ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
>>  static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
>>  						  const struct cpuid_reg *cpuid)
>>  {
>> +#ifdef CONFIG_KVM_CPUID_AUDIT
>> +	WARN_ON_ONCE(entry->function != cpuid->function);
>> +	WARN_ON_ONCE(entry->index != cpuid->index);
>> +#endif
>> +
>>  	switch (cpuid->reg) {
>>  	case CPUID_EAX:
>>  		return &entry->eax;
> Honestly, I was thinking we should BUG_ON(), even in production builds,
> but not everyone around is so rebellious I guess, so

BUG_ON is too much, but I agree the cost is so small that unconditional
WARN_ON makes sense.
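
I.e. something like this (just a sketch, not the exact patch):

static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
						  const struct cpuid_reg *cpuid)
{
	/* Audit the reverse CPUID lookup unconditionally, no Kconfig guard. */
	WARN_ON_ONCE(entry->function != cpuid->function);
	WARN_ON_ONCE(entry->index != cpuid->index);

	switch (cpuid->reg) {
	case CPUID_EAX:
		return &entry->eax;
	case CPUID_EBX:
		return &entry->ebx;
	case CPUID_ECX:
		return &entry->ecx;
	case CPUID_EDX:
		return &entry->edx;
	default:
		BUILD_BUG();
		return NULL;
	}
}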

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups
  2020-02-24 22:46     ` Sean Christopherson
@ 2020-02-25 15:02       ` Paolo Bonzini
  0 siblings, 0 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 15:02 UTC (permalink / raw)
  To: Sean Christopherson, Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 24/02/20 23:46, Sean Christopherson wrote:
>>>  static __always_inline u32 *__cpuid_entry_get_reg(struct kvm_cpuid_entry2 *entry,
>>>  						  const struct cpuid_reg *cpuid)
>>>  {
>>> +#ifdef CONFIG_KVM_CPUID_AUDIT
>>> +	WARN_ON_ONCE(entry->function != cpuid->function);
>>> +	WARN_ON_ONCE(entry->index != cpuid->index);
>>> +#endif
>>> +
>>>  	switch (cpuid->reg) {
>>>  	case CPUID_EAX:
>>>  		return &entry->eax;
>>
>> Honestly, I was thinking we should BUG_ON(), even in production builds,
>> but not everyone around is so rebellious I guess, so
> 
> LOL.  It's a waste of cycles for something that will "never" be hit, i.e.
> we _really_ dropped the ball if a bug of this nature makes it into a
> kernel release.

There are quite a few WARN_ONs like that already.  I'd say each
non-constant-folded call to __cpuid_entry_get_reg is a waste of cycles,
if you're counting them. :)

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 61/61] KVM: x86: Move VMX's host_efer to common x86 code
  2020-02-01 18:52 ` [PATCH 61/61] KVM: x86: Move VMX's host_efer to common x86 code Sean Christopherson
@ 2020-02-25 15:02   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 15:02 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move host_efer to common x86 code and use it for CPUID's is_efer_nx() to
> avoid constantly re-reading the MSR.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 ++
>  arch/x86/kvm/cpuid.c            | 5 +----
>  arch/x86/kvm/vmx/vmx.c          | 3 ---
>  arch/x86/kvm/vmx/vmx.h          | 1 -
>  arch/x86/kvm/x86.c              | 5 +++++
>  5 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 4165d3ef11e4..a2a091d328c6 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1257,6 +1257,8 @@ struct kvm_arch_async_pf {
>  	bool direct_map;
>  };
>  
> +extern u64 __read_mostly host_efer;
> +

I'm surprised we don't actually cache MSR_EFER in some common x86 code
already.

>  extern struct kvm_x86_ops *kvm_x86_ops;
>  extern struct kmem_cache *x86_fpu_cache;
>  
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 3d287fc6eb6e..e8beb1e542a8 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -134,10 +134,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
>  
>  static int is_efer_nx(void)
>  {
> -	unsigned long long efer = 0;
> -
> -	rdmsrl_safe(MSR_EFER, &efer);
> -	return efer & EFER_NX;
> +	return host_efer & EFER_NX;
>  }
>  
>  static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e349689ac0cf..0009066e2009 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -433,7 +433,6 @@ static const struct kvm_vmx_segment_field {
>  	VMX_SEGMENT_FIELD(LDTR),
>  };
>  
> -u64 host_efer;
>  static unsigned long host_idt_base;
>  
>  /*
> @@ -7577,8 +7576,6 @@ static __init int hardware_setup(void)
>  	struct desc_ptr dt;
>  	int r, i, ept_lpage_level;
>  
> -	rdmsrl_safe(MSR_EFER, &host_efer);
> -
>  	store_idt(&dt);
>  	host_idt_base = dt.address;
>  
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 70eafa88876a..0e50fbcb8413 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -12,7 +12,6 @@
>  #include "vmcs.h"
>  
>  extern const u32 vmx_msr_index[];
> -extern u64 host_efer;
>  
>  extern u32 get_umwait_control_msr(void);
>  
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index b40488fd2969..2103101eca78 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -185,6 +185,9 @@ static struct kvm_shared_msrs __percpu *shared_msrs;
>  				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
>  				| XFEATURE_MASK_PKRU)
>  
> +u64 __read_mostly host_efer;
> +EXPORT_SYMBOL_GPL(host_efer);
> +
>  static u64 __read_mostly host_xss;
>  
>  struct kvm_stats_debugfs_item debugfs_entries[] = {
> @@ -9590,6 +9593,8 @@ int kvm_arch_hardware_setup(void)
>  {
>  	int r;
>  
> +	rdmsrl_safe(MSR_EFER, &host_efer);
> +
>  	kvm_set_cpu_caps();
>  
>  	r = kvm_x86_ops->hardware_setup();

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper
  2020-02-25 14:37       ` Paolo Bonzini
@ 2020-02-25 15:09         ` Vitaly Kuznetsov
  2020-02-26 11:35           ` Paolo Bonzini
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-25 15:09 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Paolo Bonzini <pbonzini@redhat.com> writes:

> On 07/02/20 20:53, Sean Christopherson wrote:
>> 
>>> 2) Return -EINVAL instead.
>> I agree that it _should_ be -EINVAL, but I just don't think it's worth
>> the possibility of breaking (stupid) userspace that was doing something
>> like:
>> 
>> 	for (i = 0; i < max_cpuid_size; i++) {
>> 		cpuid.nent = i;
>> 
>> 		r = ioctl(fd, KVM_GET_SUPPORTED_CPUID, &cpuid);
>> 		if (!r || r != -E2BIG)
>> 			break;
>> 	}
>> 
>
> Apart from the stupidity of the above case, why would it be EINVAL?
>

I suggested -EINVAL because issuing KVM_GET_SUPPORTED_CPUID with nent=0
looks more like a completely invalid input and not 'too many
entries' (-E2BIG) to me (but -E2BIG is already there, let's keep it, it's
not a big deal).
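
(For context, the check being discussed is along the lines of:

	if (cpuid->nent < 1)
		return -E2BIG;

i.e. nent=0 is currently lumped in with "too many entries".)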

> I can do the change to drop the initializer when applying.

Sean and I have agreed on a few cosmetic changes in other patches of
this series, so please wait for v2)

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  2020-02-01 18:51 ` [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps Sean Christopherson
  2020-02-24 21:33   ` Vitaly Kuznetsov
@ 2020-02-25 15:10   ` Paolo Bonzini
  2020-02-28  0:28     ` Sean Christopherson
  1 sibling, 1 reply; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 15:10 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 01/02/20 19:51, Sean Christopherson wrote:
> +	/* CPUID 0x8000000A */
> +	/* Support next_rip if host supports it */
> +	if (boot_cpu_has(X86_FEATURE_NRIPS))
> +		kvm_cpu_cap_set(X86_FEATURE_NRIPS);

Should this also be conditional on "nested"?

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  2020-02-24 22:08   ` Vitaly Kuznetsov
  2020-02-24 23:23     ` Sean Christopherson
@ 2020-02-25 15:12     ` Paolo Bonzini
  2020-02-25 15:19       ` David Laight
  2020-02-25 21:22       ` Sean Christopherson
  1 sibling, 2 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 15:12 UTC (permalink / raw)
  To: Vitaly Kuznetsov, Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 24/02/20 23:08, Vitaly Kuznetsov wrote:
>> +
>> +static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
>> +{
>> +	return kvm_cpu_cap_get(x86_feature);
>> +}
> I know this works (and I even checked C99 to make sure that it works not
> by accident) but I have to admit that explicit '!!' conversion to bool
> always makes me feel safer :-)

Same here, I don't really like the automagic bool behavior...

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs
  2020-02-01 18:52 ` [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs Sean Christopherson
  2020-02-24 22:35   ` Vitaly Kuznetsov
@ 2020-02-25 15:17   ` Paolo Bonzini
  1 sibling, 0 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 15:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 01/02/20 19:52, Sean Christopherson wrote:
> Although it's _extremely_ tempting to yank KVM's stateful code, leave it
> in for now but annotate all its code paths as "unlikely".  The code is
> relatively contained, and if by some miracle there is someone running KVM
> on a CPU with a stateful CPUID 0x2, more power to 'em.

I suppose the only way that could happen is with nested virtualization.
 I would just drop it.

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps
  2020-02-01 18:52 ` [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps Sean Christopherson
       [not found]   ` <87o8tnmwni.fsf@vitty.brq.redhat.com>
@ 2020-02-25 15:18   ` Paolo Bonzini
  2020-02-25 21:08     ` Sean Christopherson
  1 sibling, 1 reply; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 15:18 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 01/02/20 19:52, Sean Christopherson wrote:
> +#ifdef CONFIG_KVM_CPUID_AUDIT
> +	/* Entry needs to be fully populated when auditing is enabled. */
> +	entry.function = cpuid.function;
> +	entry.index = cpuid.index;
> +#endif

This shows that the audit case is prone to bitrot, which is good reason
to enable it by default.

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* RE: [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  2020-02-25 15:12     ` Paolo Bonzini
@ 2020-02-25 15:19       ` David Laight
  2020-02-25 21:22       ` Sean Christopherson
  1 sibling, 0 replies; 168+ messages in thread
From: David Laight @ 2020-02-25 15:19 UTC (permalink / raw)
  To: 'Paolo Bonzini', Vitaly Kuznetsov, Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

From: Paolo Bonzini
> Sent: 25 February 2020 15:12
> On 24/02/20 23:08, Vitaly Kuznetsov wrote:
> >> +
> >> +static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
> >> +{
> >> +	return kvm_cpu_cap_get(x86_feature);
> >> +}
> > I know this works (and I even checked C99 to make sure that it works not
> > by accident) but I have to admit that explicit '!!' conversion to bool
> > always makes me feel safer :-)
> 
> Same here, I don't really like the automagic bool behavior...

I just dislike 'bool'.

Conversion of 0/non-zero to 0/1 isn't completely free.
And something has to 'give' when the referenced memory location
doesn't contain 0 or 1.

One very old version of gcc made a complete hash of:
	bool_var |= function_returning_bool();

I'm not sure what the standard requires nor what current gcc
generates - but you want a 'logical or' instruction.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 00/61] KVM: x86: Introduce KVM cpu caps
       [not found] ` <87wo8ak84x.fsf@vitty.brq.redhat.com>
@ 2020-02-25 15:25   ` Paolo Bonzini
  2020-02-28  1:37     ` Sean Christopherson
  2020-02-29 18:32   ` Sean Christopherson
  1 sibling, 1 reply; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-25 15:25 UTC (permalink / raw)
  To: Vitaly Kuznetsov, Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 25/02/20 16:18, Vitaly Kuznetsov wrote:
> Would it be better or worse if we eliminate set_supported_cpuid() hook
> completely by doing an ugly hack like (completely untested):

Yes, it makes sense.

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup
  2020-02-25 14:43   ` Vitaly Kuznetsov
@ 2020-02-25 21:01     ` Sean Christopherson
  2020-02-26 14:55       ` Vitaly Kuznetsov
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-25 21:01 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Tue, Feb 25, 2020 at 03:43:36PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Configure the max page level during hardware setup to avoid a retpoline
> > in the page fault handler.  Drop ->get_lpage_level() as the page fault
> > handler was the last user.
> > @@ -6064,11 +6064,6 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
> >  	}
> >  }
> >  
> > -static int svm_get_lpage_level(void)
> > -{
> > -	return PT_PDPE_LEVEL;
> > -}
> 
> I've probably missed something but before the change, get_lpage_level()
> on AMD was always returning PT_PDPE_LEVEL, but after the change and when
> NPT is disabled, we set max_page_level to either PT_PDPE_LEVEL (when
> boot_cpu_has(X86_FEATURE_GBPAGES)) or PT_DIRECTORY_LEVEL
> (otherwise). This sounds like a change) unless we think that
> boot_cpu_has(X86_FEATURE_GBPAGES) is always true on AMD.

It looks like a functional change, but isn't.  kvm_mmu_hugepage_adjust()
caps the page size used by KVM's MMU at the minimum of ->get_lpage_level()
and the host's mapping level.  Barring an egregious bug in the kernel MMU,
the host page tables will max out at PT_DIRECTORY_LEVEL (2mb) unless
boot_cpu_has(X86_FEATURE_GBPAGES) is true.

In other words, this is effectively a "documentation" change.  I'll figure
out a way to explain this in the changelog...

        max_level = min(max_level, kvm_x86_ops->get_lpage_level());
        for ( ; max_level > PT_PAGE_TABLE_LEVEL; max_level--) {
                linfo = lpage_info_slot(gfn, slot, max_level);
                if (!linfo->disallow_lpage)
                        break;
        }

        if (max_level == PT_PAGE_TABLE_LEVEL)
                return PT_PAGE_TABLE_LEVEL;

        level = host_pfn_mapping_level(vcpu, gfn, pfn, slot);
        if (level == PT_PAGE_TABLE_LEVEL)
                return level;

        level = min(level, max_level); <---------

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps
  2020-02-25 15:18   ` Paolo Bonzini
@ 2020-02-25 21:08     ` Sean Christopherson
  2020-02-29 18:38       ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-25 21:08 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Tue, Feb 25, 2020 at 04:18:12PM +0100, Paolo Bonzini wrote:
> On 01/02/20 19:52, Sean Christopherson wrote:
> > +#ifdef CONFIG_KVM_CPUID_AUDIT
> > +	/* Entry needs to be fully populated when auditing is enabled. */
> > +	entry.function = cpuid.function;
> > +	entry.index = cpuid.index;
> > +#endif
> 
> This shows that the audit case is prone to bitrot, which is good reason
> to enable it by default.

I have no argument against that, especially since I missed this case during
development and only caught it when running on a different system that I
had happened to configure with CONFIG_KVM_CPUID_AUDIT=y. :-)

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  2020-02-25 15:12     ` Paolo Bonzini
  2020-02-25 15:19       ` David Laight
@ 2020-02-25 21:22       ` Sean Christopherson
  2020-02-26 11:35         ` Paolo Bonzini
  1 sibling, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-25 21:22 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Tue, Feb 25, 2020 at 04:12:28PM +0100, Paolo Bonzini wrote:
> On 24/02/20 23:08, Vitaly Kuznetsov wrote:
> >> +
> >> +static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
> >> +{
> >> +	return kvm_cpu_cap_get(x86_feature);
> >> +}
> > I know this works (and I even checked C99 to make sure that it works not
> > by accident) but I have to admit that explicit '!!' conversion to bool
> > always makes me feel safer :-)
> 
> Same here, I don't really like the automagic bool behavior...

Sounds like I need to add '!!'?

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper
  2020-02-25 15:09         ` Vitaly Kuznetsov
@ 2020-02-26 11:35           ` Paolo Bonzini
  0 siblings, 0 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-26 11:35 UTC (permalink / raw)
  To: Vitaly Kuznetsov, Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 25/02/20 16:09, Vitaly Kuznetsov wrote:
>> Apart from the stupidity of the above case, why would it be EINVAL?
>>
> I suggested -EINVAL because issuing KVM_GET_SUPPORTED_CPUID with nent=0
> looks more like a completely invalid input and not 'too many
> entries'(-E2BIG) to me (but -E2BIG is already there, let's keep it, it's
> not a big deal).

Yes, and in fact he already does that change a few patches later.

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved
  2020-02-25 21:22       ` Sean Christopherson
@ 2020-02-26 11:35         ` Paolo Bonzini
  0 siblings, 0 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-26 11:35 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 25/02/20 22:22, Sean Christopherson wrote:
>>> I know this works (and I even checked C99 to make sure that it works not
>>> by accident) but I have to admit that explicit '!!' conversion to bool
>>> always makes me feel safer :-)
>> Same here, I don't really like the automagic bool behavior...
> Sounds like I need to add '!!'?
> 

Either that or "!= 0", as you prefer.
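
E.g. (sketch of the two options):

static __always_inline bool kvm_cpu_cap_has(unsigned x86_feature)
{
	return !!kvm_cpu_cap_get(x86_feature);
}

or the equivalent "return kvm_cpu_cap_get(x86_feature) != 0;".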

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup
  2020-02-25 21:01     ` Sean Christopherson
@ 2020-02-26 14:55       ` Vitaly Kuznetsov
  2020-02-26 15:56         ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-26 14:55 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Tue, Feb 25, 2020 at 03:43:36PM +0100, Vitaly Kuznetsov wrote:
>> Sean Christopherson <sean.j.christopherson@intel.com> writes:
>> 
>> > Configure the max page level during hardware setup to avoid a retpoline
>> > in the page fault handler.  Drop ->get_lpage_level() as the page fault
>> > handler was the last user.
>> > @@ -6064,11 +6064,6 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
>> >  	}
>> >  }
>> >  
>> > -static int svm_get_lpage_level(void)
>> > -{
>> > -	return PT_PDPE_LEVEL;
>> > -}
>> 
>> I've probably missed something but before the change, get_lpage_level()
>> on AMD was always returning PT_PDPE_LEVEL, but after the change and when
>> NPT is disabled, we set max_page_level to either PT_PDPE_LEVEL (when
>> boot_cpu_has(X86_FEATURE_GBPAGES)) or PT_DIRECTORY_LEVEL
>> (otherwise). This sounds like a change) unless we think that
>> boot_cpu_has(X86_FEATURE_GBPAGES) is always true on AMD.
>
> It looks like a functional change, but isn't.  kvm_mmu_hugepage_adjust()
> caps the page size used by KVM's MMU at the minimum of ->get_lpage_level()
> and the host's mapping level.  Barring an egregious bug in the kernel MMU,
> the host page tables will max out at PT_DIRECTORY_LEVEL (2mb) unless
> boot_cpu_has(X86_FEATURE_GBPAGES) is true.
>
> In other words, this is effectively a "documentation" change.  I'll figure
> out a way to explain this in the changelog...
>
>         max_level = min(max_level, kvm_x86_ops->get_lpage_level());
>         for ( ; max_level > PT_PAGE_TABLE_LEVEL; max_level--) {
>                 linfo = lpage_info_slot(gfn, slot, max_level);
>                 if (!linfo->disallow_lpage)
>                         break;
>         }
>
>         if (max_level == PT_PAGE_TABLE_LEVEL)
>                 return PT_PAGE_TABLE_LEVEL;
>
>         level = host_pfn_mapping_level(vcpu, gfn, pfn, slot);
>         if (level == PT_PAGE_TABLE_LEVEL)
>                 return level;
>
>         level = min(level, max_level); <---------

Ok, I see (I believe):

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

It would've helped me a bit if kvm_configure_mmu() was written the
following way:

void kvm_configure_mmu(bool enable_tdp, int tdp_page_level)
{
        tdp_enabled = enable_tdp;

	if (boot_cpu_has(X86_FEATURE_GBPAGES))
                max_page_level = PT_PDPE_LEVEL;
        else
                max_page_level = PT_DIRECTORY_LEVEL;

        if (tdp_enabled)
		max_page_level = min(tdp_page_level, max_page_level);
}

(we can't have cpu_has_vmx_ept_1g_page() and not
boot_cpu_has(X86_FEATURE_GBPAGES), right?)

But this is certainly just a personal preference, feel free to ignore)

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup
  2020-02-26 14:55       ` Vitaly Kuznetsov
@ 2020-02-26 15:56         ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-26 15:56 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Wed, Feb 26, 2020 at 03:55:55PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > On Tue, Feb 25, 2020 at 03:43:36PM +0100, Vitaly Kuznetsov wrote:
> >> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> >> 
> >> > Configure the max page level during hardware setup to avoid a retpoline
> >> > in the page fault handler.  Drop ->get_lpage_level() as the page fault
> >> > handler was the last user.
> >> > @@ -6064,11 +6064,6 @@ static void svm_set_supported_cpuid(struct kvm_cpuid_entry2 *entry)
> >> >  	}
> >> >  }
> >> >  
> >> > -static int svm_get_lpage_level(void)
> >> > -{
> >> > -	return PT_PDPE_LEVEL;
> >> > -}
> >> 
> >> I've probably missed something but before the change, get_lpage_level()
> >> on AMD was always returning PT_PDPE_LEVEL, but after the change and when
> >> NPT is disabled, we set max_page_level to either PT_PDPE_LEVEL (when
> >> boot_cpu_has(X86_FEATURE_GBPAGES)) or PT_DIRECTORY_LEVEL
> >> (otherwise). This sounds like a change) unless we think that
> >> boot_cpu_has(X86_FEATURE_GBPAGES) is always true on AMD.
> >
> > It looks like a functional change, but isn't.  kvm_mmu_hugepage_adjust()
> > caps the page size used by KVM's MMU at the minimum of ->get_lpage_level()
> > and the host's mapping level.  Barring an egregious bug in the kernel MMU,
> > the host page tables will max out at PT_DIRECTORY_LEVEL (2mb) unless
> > boot_cpu_has(X86_FEATURE_GBPAGES) is true.
> >
> > In other words, this is effectively a "documentation" change.  I'll figure
> > out a way to explain this in the changelog...
> >
> >         max_level = min(max_level, kvm_x86_ops->get_lpage_level());
> >         for ( ; max_level > PT_PAGE_TABLE_LEVEL; max_level--) {
> >                 linfo = lpage_info_slot(gfn, slot, max_level);
> >                 if (!linfo->disallow_lpage)
> >                         break;
> >         }
> >
> >         if (max_level == PT_PAGE_TABLE_LEVEL)
> >                 return PT_PAGE_TABLE_LEVEL;
> >
> >         level = host_pfn_mapping_level(vcpu, gfn, pfn, slot);
> >         if (level == PT_PAGE_TABLE_LEVEL)
> >                 return level;
> >
> >         level = min(level, max_level); <---------
> 
> Ok, I see (I believe):
> 
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> 
> It would've helped me a bit if kvm_configure_mmu() was written the
> following way:
> 
> void kvm_configure_mmu(bool enable_tdp, int tdp_page_level)
> {
>         tdp_enabled = enable_tdp;
> 
> 	if (boot_cpu_has(X86_FEATURE_GBPAGES))
>                 max_page_level = PT_PDPE_LEVEL;
>         else
>                 max_page_level = PT_DIRECTORY_LEVEL;
> 
>         if (tdp_enabled)
> 		max_page_level = min(tdp_page_level, max_page_level);
> }
> 
> (we can't have cpu_has_vmx_ept_1g_page() and not
> boot_cpu_has(X86_FEATURE_GBPAGES), right?)

Wrong, because VMX.  It could even occur on a real system if the user
disables the feature via kernel param, e.g. "clearcpuid=58".  In the end it
won't actually change anything because KVM caps its page size at the kernel
page size (as above).  Well, unless someone is running a custom kernel that
does funky things.

> But this is certainly just a personal preference, feel free to ignore)

I'm on the fence.  Part of me likes having max_page_level reflect what KVM
is capable of, irrespective of the kernel.

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  2020-02-25 15:10   ` Paolo Bonzini
@ 2020-02-28  0:28     ` Sean Christopherson
  2020-02-28  0:36       ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-28  0:28 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Tue, Feb 25, 2020 at 04:10:18PM +0100, Paolo Bonzini wrote:
> On 01/02/20 19:51, Sean Christopherson wrote:
> > +	/* CPUID 0x8000000A */
> > +	/* Support next_rip if host supports it */
> > +	if (boot_cpu_has(X86_FEATURE_NRIPS))
> > +		kvm_cpu_cap_set(X86_FEATURE_NRIPS);
> 
> Should this also be conditional on "nested"?

I think that makes sense?  AFAICT it should probably be conditional on
"nrips" as well.  X86_FEATURE_NPT should also be conditional on "nested".
I'll tack on a patch to make those changes; the cleanup is easier now that
the adjustments aren't spread across different case statements, e.g. the
entire SVM feature leaf can be wrapped in "if (nested)".
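
Something along these lines (rough sketch, untested):

	/* CPUID 0x8000000A (SVM features), advertised iff nested is enabled. */
	if (nested) {
		if (boot_cpu_has(X86_FEATURE_NRIPS) && nrips)
			kvm_cpu_cap_set(X86_FEATURE_NRIPS);

		if (npt_enabled)
			kvm_cpu_cap_set(X86_FEATURE_NPT);
	}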

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  2020-02-28  0:28     ` Sean Christopherson
@ 2020-02-28  0:36       ` Sean Christopherson
  2020-02-28  7:03         ` Paolo Bonzini
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-28  0:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Thu, Feb 27, 2020 at 04:28:33PM -0800, Sean Christopherson wrote:
> On Tue, Feb 25, 2020 at 04:10:18PM +0100, Paolo Bonzini wrote:
> > On 01/02/20 19:51, Sean Christopherson wrote:
> > > +	/* CPUID 0x8000000A */
> > > +	/* Support next_rip if host supports it */
> > > +	if (boot_cpu_has(X86_FEATURE_NRIPS))
> > > +		kvm_cpu_cap_set(X86_FEATURE_NRIPS);
> > 
> > Should this also be conditional on "nested"?
> 
> I think that makes sense?  AFAICT it should probably be conditional on
> "nrips" as well.  X86_FEATURE_NPT should also be conditional on "nested".
> I'll tack on a patch to make those changes, the cleanup is easier without
> the things spread across different case statements, e.g. wrap the entire
> SVM feature leaf in "if (nested)".

Regarding NRIPS, the original commit added the "Support next_rip if host
supports it" comment, but I can't tell is "host supports" means "supported
in hardware" or "supported by KVM".  In other words, should I make the cap
dependent "nrips" or leave it as is?

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 00/61] KVM: x86: Introduce KVM cpu caps
  2020-02-25 15:25   ` [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Paolo Bonzini
@ 2020-02-28  1:37     ` Sean Christopherson
  2020-02-28  7:04       ` Paolo Bonzini
  0 siblings, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-28  1:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Tue, Feb 25, 2020 at 04:25:34PM +0100, Paolo Bonzini wrote:
> On 25/02/20 16:18, Vitaly Kuznetsov wrote:
> > Would it be better or worse if we eliminate set_supported_cpuid() hook
> > completely by doing an ugly hack like (completely untested):
> 
> Yes, it makes sense.

Works for me, I'll tack it on.  I think my past self kept it because I was
planning on using vmx_set_supported_cpuid() for SGX, which adds multiple
sub-leafs, but I'm pretty sure I can squeeze them into kvm_cpu_caps with
a few extra shenanigans.

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  2020-02-28  0:36       ` Sean Christopherson
@ 2020-02-28  7:03         ` Paolo Bonzini
  2020-02-28 15:09           ` Sean Christopherson
  0 siblings, 1 reply; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-28  7:03 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 28/02/20 01:36, Sean Christopherson wrote:
> Regarding NRIPS, the original commit added the "Support next_rip if host
> supports it" comment, but I can't tell is "host supports" means "supported
> in hardware" or "supported by KVM".  In other words, should I make the cap
> dependent "nrips" or leave it as is?
> 

The "nrips" parameter came later.  For VMX we generally remove guest
capabilities when the corresponding parameter is off, so it's a good idea
to do the same for SVM.

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 00/61] KVM: x86: Introduce KVM cpu caps
  2020-02-28  1:37     ` Sean Christopherson
@ 2020-02-28  7:04       ` Paolo Bonzini
  0 siblings, 0 replies; 168+ messages in thread
From: Paolo Bonzini @ 2020-02-28  7:04 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 28/02/20 02:37, Sean Christopherson wrote:
>>> Would it be better or worse if we eliminate set_supported_cpuid() hook
>>> completely by doing an ugly hack like (completely untested):
>> Yes, it makes sense.
> Works for me, I'll tack it on.  I think my past self kept it because I was
> planning on using vmx_set_supported_cpuid() for SGX, which adds multiple
> sub-leafs, but I'm pretty sure I can squeeze them into kvm_cpu_caps with
> a few extra shenanigans.
> 

We can add it back for full CPUID leaves; it may even make sense to move
PT processing there (but not in this series).

Paolo


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps
  2020-02-28  7:03         ` Paolo Bonzini
@ 2020-02-28 15:09           ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-28 15:09 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Fri, Feb 28, 2020 at 08:03:33AM +0100, Paolo Bonzini wrote:
> On 28/02/20 01:36, Sean Christopherson wrote:
> > Regarding NRIPS, the original commit added the "Support next_rip if host
> > supports it" comment, but I can't tell is "host supports" means "supported
> > in hardware" or "supported by KVM".  In other words, should I make the cap
> > dependent "nrips" or leave it as is?
> > 
> 
> The "nrips" parameter came later.  For VMX we generally remove guest
> capabilities if the corresponding parameter is on, so it's a good idea
> to do the same for SVM.

Huh.  I swear I looked at the code from the original commit and saw nrips
there, but it was clearly added in 2019 via commit d647eb63e671 ("KVM: svm:
add nrips module parameter").

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 00/61] KVM: x86: Introduce KVM cpu caps
       [not found] ` <87wo8ak84x.fsf@vitty.brq.redhat.com>
  2020-02-25 15:25   ` [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Paolo Bonzini
@ 2020-02-29 18:32   ` Sean Christopherson
  2020-03-02  9:03     ` Vitaly Kuznetsov
  1 sibling, 1 reply; 168+ messages in thread
From: Sean Christopherson @ 2020-02-29 18:32 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Tue, Feb 25, 2020 at 04:18:38PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> >
> >   7. Profit!
> 
> Would it be better or worse if we eliminate set_supported_cpuid() hook
> completely by doing an ugly hack like (completely untested):
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index a2a091d328c6..5ad291d48e1b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1145,8 +1145,6 @@ struct kvm_x86_ops {
>  
>         void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
>  
> -       void (*set_supported_cpuid)(struct kvm_cpuid_entry2 *entry);
> -
>         bool (*has_wbinvd_exit)(void);
>  
>         u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index e8beb1e542a8..88431fc02797 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -749,6 +749,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
>  		break;
>  	}
> +	case 0x8000000A:
> +		if (boot_cpu_has(X86_FEATURE_SVM)) {
> +			entry->eax = 1; /* SVM revision 1 */
> +			entry->ebx = 8; /* Lets support 8 ASIDs in case we add proper
> +					   ASID emulation to nested SVM */
> +			entry->ecx = 0; /* Reserved */
> +			entry->edx = 0; /* Per default do not support any
> +					   additional features */

Lucky thing that you suggested this change; the patch ("KVM: SVM: Convert
feature updates from CPUID to KVM cpu caps") was buggy in that clearing
entry->edx here would wipe out X86_FEATURE_NRIPS and X86_FEATURE_NPT.
I only noticed it when moving this code.

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps
  2020-02-25 21:08     ` Sean Christopherson
@ 2020-02-29 18:38       ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-02-29 18:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Tue, Feb 25, 2020 at 01:08:43PM -0800, Sean Christopherson wrote:
> On Tue, Feb 25, 2020 at 04:18:12PM +0100, Paolo Bonzini wrote:
> > On 01/02/20 19:52, Sean Christopherson wrote:
> > > +#ifdef CONFIG_KVM_CPUID_AUDIT
> > > +	/* Entry needs to be fully populated when auditing is enabled. */
> > > +	entry.function = cpuid.function;
> > > +	entry.index = cpuid.index;
> > > +#endif
> > 
> > This shows that the audit case is prone to bitrot, which is good reason
> > to enable it by default.
> 
> I have no argument against that, especially since I missed this case during
> development and only caught it when running on a different system that I
> had happened to configure with CONFIG_KVM_CPUID_AUDIT=y. :-)

I ended up dropping the audit code altogether.  The uops overhead wasn't
bad, but the code bloat was pretty rough, ~16 bytes per instance.  The
final nail in the coffin was that the auditing would trigger false
positives if userspace configured CPUID leafs with a non-significant index
to have a non-zero index, e.g. is_matching_cpuid_entry() ignores the index
if KVM_CPUID_FLAG_SIGNIFCANT_INDEX isn't set.
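
For reference, the relevant check is roughly (paraphrased from memory, the
stateful-CPUID handling is omitted):

static int is_matching_cpuid_entry(struct kvm_cpuid_entry2 *e,
				   u32 function, u32 index)
{
	if (e->function != function)
		return 0;
	/* The index is only compared when the entry flags it as significant. */
	if ((e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX) && e->index != index)
		return 0;
	return 1;
}

so the entry is found regardless of what's in e->index, and the audit's
exact index comparison then fires.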

^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 00/61] KVM: x86: Introduce KVM cpu caps
  2020-02-29 18:32   ` Sean Christopherson
@ 2020-03-02  9:03     ` Vitaly Kuznetsov
  0 siblings, 0 replies; 168+ messages in thread
From: Vitaly Kuznetsov @ 2020-03-02  9:03 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Tue, Feb 25, 2020 at 04:18:38PM +0100, Vitaly Kuznetsov wrote:
>> Sean Christopherson <sean.j.christopherson@intel.com> writes:
>> 
>> >
>> >   7. Profit!
>> 
>> Would it be better or worse if we eliminate set_supported_cpuid() hook
>> completely by doing an ugly hack like (completely untested):
>> 
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index a2a091d328c6..5ad291d48e1b 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -1145,8 +1145,6 @@ struct kvm_x86_ops {
>>  
>>         void (*set_tdp_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
>>  
>> -       void (*set_supported_cpuid)(struct kvm_cpuid_entry2 *entry);
>> -
>>         bool (*has_wbinvd_exit)(void);
>>  
>>         u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
>> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>> index e8beb1e542a8..88431fc02797 100644
>> --- a/arch/x86/kvm/cpuid.c
>> +++ b/arch/x86/kvm/cpuid.c
>> @@ -749,6 +749,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>>  		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
>>  		break;
>>  	}
>> +	case 0x8000000A:
>> +		if (boot_cpu_has(X86_FEATURE_SVM)) {
>> +			entry->eax = 1; /* SVM revision 1 */
>> +			entry->ebx = 8; /* Lets support 8 ASIDs in case we add proper
>> +					   ASID emulation to nested SVM */
>> +			entry->ecx = 0; /* Reserved */
>> +			entry->edx = 0; /* Per default do not support any
>> +					   additional features */
>
> Lucky thing that you suggested this change, patch ("KVM: SVM: Convert
> feature updates from CPUID to KVM cpu caps") was buggy in that clearing
> entry->edx here would wipe out all X86_FEATURE_NRIPS and X86_FEATURE_NPT.
> Only noticed it when moving this code. 
>

I plan to give your v2 a spin on AMD Epyc, just in case)

-- 
Vitaly


^ permalink raw reply	[flat|nested] 168+ messages in thread

* Re: [PATCH 19/61] KVM: VMX: Add helpers to query Intel PT mode
  2020-02-25 14:54       ` Paolo Bonzini
@ 2020-03-03 22:41         ` Sean Christopherson
  0 siblings, 0 replies; 168+ messages in thread
From: Sean Christopherson @ 2020-03-03 22:41 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

Disclaimer: I'm going off a few lines in the SDM and the original patches;
everything I say could be completely wrong :-)

On Tue, Feb 25, 2020 at 03:54:21PM +0100, Paolo Bonzini wrote:
> On 24/02/20 23:18, Sean Christopherson wrote:
> >>>  {
> >>>  	u32 vmexit_ctrl = vmcs_config.vmexit_ctrl;
> >>> -	if (pt_mode == PT_MODE_SYSTEM)
> >>> +	if (vmx_pt_mode_is_system())
> >> ... and here? I.e. to cover the currently unsupported 'host-only' mode.
> > Hmm, good question.  I don't think so?  On VM-Enter, RTIT_CTL would need to
> > be loaded to disable PT.  Clearing RTIT_CTL on VM-Exit would be redundant
> > at that point[1].  And AIUI, the PIP for VM-Enter/VM-Exit isn't needed
> > because there is no context switch from the decoder's perspective.
> 
> How does host-only mode differ from "host-guest but don't expose PT to
> the guest"?  So I would say that host-only mode is a special case of
> host-guest, not of system mode.

AIUI, host-guest needs a special packet for VM-Enter/VM-Exit so that the
trace analyzer understands there was a context switch.  With host-only, the
packet isn't needed because tracing stops entirely.  So it's not that
host-only is a special case of system mode, but rather it doesn't need the
VM-Exit control enabled to generate the special packet.
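
For reference, the helpers added by the patch are just thin wrappers around
pt_mode, roughly:

static inline bool vmx_pt_mode_is_system(void)
{
	return pt_mode == PT_MODE_SYSTEM;
}

static inline bool vmx_pt_mode_is_host_guest(void)
{
	return pt_mode == PT_MODE_HOST_GUEST;
}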

^ permalink raw reply	[flat|nested] 168+ messages in thread

end of thread, other threads:[~2020-03-03 22:41 UTC | newest]

Thread overview: 168+ messages
2020-02-01 18:51 [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Sean Christopherson
2020-02-01 18:51 ` [PATCH 01/61] KVM: x86: Return -E2BIG when KVM_GET_SUPPORTED_CPUID hits max entries Sean Christopherson
2020-02-03 12:55   ` Vitaly Kuznetsov
2020-02-03 15:59     ` Sean Christopherson
2020-02-25 14:36       ` Paolo Bonzini
2020-02-01 18:51 ` [PATCH 02/61] KVM: x86: Refactor loop around do_cpuid_func() to separate helper Sean Christopherson
2020-02-06 14:59   ` Vitaly Kuznetsov
2020-02-07 19:53     ` Sean Christopherson
2020-02-25 14:37       ` Paolo Bonzini
2020-02-25 15:09         ` Vitaly Kuznetsov
2020-02-26 11:35           ` Paolo Bonzini
2020-02-01 18:51 ` [PATCH 03/61] KVM: x86: Simplify handling of Centaur CPUID leafs Sean Christopherson
2020-02-06 15:05   ` Vitaly Kuznetsov
2020-02-07 19:47     ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 04/61] KVM: x86: Clean up error handling in kvm_dev_ioctl_get_cpuid() Sean Christopherson
2020-02-06 15:09   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 05/61] KVM: x86: Check userapce CPUID array size after validating sub-leaf Sean Christopherson
2020-02-06 15:24   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 06/61] KVM: x86: Move CPUID 0xD.1 handling out of the index>0 loop Sean Christopherson
2020-02-07 15:38   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 07/61] KVM: x86: Check for CPUID 0xD.N support before validating array size Sean Christopherson
2020-02-07 15:48   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 08/61] KVM: x86: Warn on zero-size save state for valid CPUID 0xD.N sub-leaf Sean Christopherson
2020-02-07 15:54   ` Vitaly Kuznetsov
2020-02-07 15:56     ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 09/61] KVM: x86: Refactor CPUID 0xD.N sub-leaf entry creation Sean Christopherson
2020-02-07 15:56   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 10/61] KVM: x86: Clean up CPUID 0x7 sub-leaf loop Sean Christopherson
2020-02-21 14:20   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 11/61] KVM: x86: Drop the explicit @index from do_cpuid_7_mask() Sean Christopherson
2020-02-21 14:22   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 12/61] KVM: x86: Drop redundant boot cpu checks on SSBD feature bits Sean Christopherson
2020-02-01 18:51 ` [PATCH 13/61] KVM: x86: Consolidate CPUID array max num entries checking Sean Christopherson
2020-02-01 18:51 ` [PATCH 14/61] KVM: x86: Hoist loop counter and terminator to top of __do_cpuid_func() Sean Christopherson
2020-02-01 18:51 ` [PATCH 15/61] KVM: x86: Refactor CPUID 0x4 and 0x8000001d handling Sean Christopherson
2020-02-21 14:40   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 16/61] KVM: x86: Encapsulate CPUID entries and metadata in struct Sean Christopherson
2020-02-21 14:58   ` Vitaly Kuznetsov
2020-02-24 21:55     ` Sean Christopherson
2020-02-24 23:12       ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 17/61] KVM: x86: Drop redundant array size check Sean Christopherson
2020-02-01 18:51 ` [PATCH 18/61] KVM: x86: Use common loop iterator when handling CPUID 0xD.N Sean Christopherson
2020-02-21 15:04   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 19/61] KVM: VMX: Add helpers to query Intel PT mode Sean Christopherson
     [not found]   ` <87pne8q8c0.fsf@vitty.brq.redhat.com>
2020-02-24 22:18     ` Sean Christopherson
2020-02-25 14:54       ` Paolo Bonzini
2020-03-03 22:41         ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 20/61] KVM: x86: Calculate the supported xcr0 mask at load time Sean Christopherson
2020-02-13 14:21   ` Xiaoyao Li
2020-02-01 18:51 ` [PATCH 21/61] KVM: x86: Use supported_xcr0 to detect MPX support Sean Christopherson
2020-02-13 14:25   ` Xiaoyao Li
2020-02-21 15:32   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 22/61] KVM: x86: Make kvm_mpx_supported() an inline function Sean Christopherson
2020-02-13 14:26   ` Xiaoyao Li
2020-02-21 15:33   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 23/61] KVM: x86: Clear output regs for CPUID 0x14 if PT isn't exposed to guest Sean Christopherson
2020-02-21 15:36   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 24/61] KVM: x86: Drop explicit @func param from ->set_supported_cpuid() Sean Christopherson
2020-02-21 15:39   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 25/61] KVM: x86: Use u32 for holding CPUID register value in helpers Sean Christopherson
2020-02-21 15:43   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 26/61] KVM: x86: Introduce cpuid_entry_{get,has}() accessors Sean Christopherson
2020-02-14  9:44   ` Xiaoyao Li
2020-02-14 17:09     ` Sean Christopherson
2020-02-21 15:57   ` Vitaly Kuznetsov
2020-02-21 16:29     ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 27/61] KVM: x86: Introduce cpuid_entry_{change,set,clear}() mutators Sean Christopherson
     [not found]   ` <87ftf0p0d0.fsf@vitty.brq.redhat.com>
2020-02-24 22:42     ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 28/61] KVM: x86: Refactor cpuid_mask() to auto-retrieve the register Sean Christopherson
2020-02-24 13:49   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 29/61] KVM: x86: Add Kconfig-controlled auditing of reverse CPUID lookups Sean Christopherson
2020-02-24 13:54   ` Vitaly Kuznetsov
2020-02-24 22:46     ` Sean Christopherson
2020-02-25 15:02       ` Paolo Bonzini
2020-02-25 15:00     ` Paolo Bonzini
2020-02-01 18:51 ` [PATCH 30/61] KVM: x86: Handle MPX CPUID adjustment in VMX code Sean Christopherson
2020-02-13 13:51   ` Xiaoyao Li
2020-02-13 17:37     ` Sean Christopherson
2020-02-24 15:14   ` Vitaly Kuznetsov
2020-02-24 15:45     ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 31/61] KVM: x86: Handle INVPCID " Sean Christopherson
2020-02-24 15:19   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 32/61] KVM: x86: Handle UMIP emulation " Sean Christopherson
2020-02-24 15:21   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 33/61] KVM: x86: Handle PKU " Sean Christopherson
2020-02-24 15:24   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 34/61] KVM: x86: Handle RDTSCP " Sean Christopherson
2020-02-24 15:28   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 35/61] KVM: x86: Handle Intel PT " Sean Christopherson
2020-02-24 15:30   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 36/61] KVM: x86: Handle GBPAGE CPUID adjustment for EPT " Sean Christopherson
2020-02-24 15:34   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 37/61] KVM: x86: Refactor handling of XSAVES CPUID adjustment Sean Christopherson
2020-02-24 15:39   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 38/61] KVM: x86: Introduce kvm_cpu_caps to replace runtime CPUID masking Sean Christopherson
2020-02-24 16:32   ` Vitaly Kuznetsov
2020-02-24 22:57     ` Sean Christopherson
2020-02-24 23:20       ` Vitaly Kuznetsov
2020-02-24 23:25         ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 39/61] KVM: SVM: Convert feature updates from CPUID to KVM cpu caps Sean Christopherson
2020-02-24 21:33   ` Vitaly Kuznetsov
2020-02-25 15:10   ` Paolo Bonzini
2020-02-28  0:28     ` Sean Christopherson
2020-02-28  0:36       ` Sean Christopherson
2020-02-28  7:03         ` Paolo Bonzini
2020-02-28 15:09           ` Sean Christopherson
2020-02-01 18:51 ` [PATCH 40/61] KVM: VMX: " Sean Christopherson
2020-02-24 21:40   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 41/61] KVM: x86: Move XSAVES CPUID adjust to VMX's KVM cpu cap update Sean Christopherson
2020-02-24 21:43   ` Vitaly Kuznetsov
2020-02-01 18:51 ` [PATCH 42/61] KVM: x86: Add a helper to check kernel support when setting cpu cap Sean Christopherson
2020-02-24 21:47   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 43/61] KVM: x86: Use KVM cpu caps to mark CR4.LA57 as not-reserved Sean Christopherson
2020-02-24 22:08   ` Vitaly Kuznetsov
2020-02-24 23:23     ` Sean Christopherson
2020-02-25 15:12     ` Paolo Bonzini
2020-02-25 15:19       ` David Laight
2020-02-25 21:22       ` Sean Christopherson
2020-02-26 11:35         ` Paolo Bonzini
2020-02-01 18:52 ` [PATCH 44/61] KVM: x86: Use KVM cpu caps to track UMIP emulation Sean Christopherson
2020-02-24 22:13   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 45/61] KVM: x86: Fold CPUID 0x7 masking back into __do_cpuid_func() Sean Christopherson
2020-02-24 22:21   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 46/61] KVM: x86: Remove the unnecessary loop on CPUID 0x7 sub-leafs Sean Christopherson
2020-02-24 22:25   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 47/61] KVM: x86: Squash CPUID 0x2.0 insanity for modern CPUs Sean Christopherson
2020-02-24 22:35   ` Vitaly Kuznetsov
2020-02-25 15:17   ` Paolo Bonzini
2020-02-01 18:52 ` [PATCH 48/61] KVM: x86: Do host CPUID at load time to mask KVM cpu caps Sean Christopherson
     [not found]   ` <87o8tnmwni.fsf@vitty.brq.redhat.com>
2020-02-24 23:31     ` Sean Christopherson
2020-02-25 13:53       ` Vitaly Kuznetsov
2020-02-25 15:18   ` Paolo Bonzini
2020-02-25 21:08     ` Sean Christopherson
2020-02-29 18:38       ` Sean Christopherson
2020-02-01 18:52 ` [PATCH 49/61] KVM: x86: Override host CPUID results with kvm_cpu_caps Sean Christopherson
2020-02-24 22:57   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 50/61] KVM: x86: Set emulated/transmuted feature bits via kvm_cpu_caps Sean Christopherson
2020-02-25 13:59   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 51/61] KVM: x86: Use kvm_cpu_caps to detect Intel PT support Sean Christopherson
2020-02-25 14:06   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 52/61] KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support Sean Christopherson
2020-02-25 14:08   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 53/61] KVM: VMX: Directly use VMX capabilities helper to detect RDTSCP support Sean Christopherson
2020-02-25 14:10   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 54/61] KVM: x86: Check for Intel PT MSR virtualization using KVM cpu caps Sean Christopherson
2020-02-25 14:11   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 55/61] KVM: VMX: Directly query Intel PT mode when refreshing PMUs Sean Christopherson
2020-02-25 14:16   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 56/61] KVM: SVM: Refactor logging of NPT enabled/disabled Sean Christopherson
2020-02-25 14:21   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 57/61] KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function Sean Christopherson
2020-02-25 14:27   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 58/61] KVM: x86/mmu: Configure max page level during hardware setup Sean Christopherson
2020-02-25 14:43   ` Vitaly Kuznetsov
2020-02-25 21:01     ` Sean Christopherson
2020-02-26 14:55       ` Vitaly Kuznetsov
2020-02-26 15:56         ` Sean Christopherson
2020-02-01 18:52 ` [PATCH 59/61] KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage Sean Christopherson
2020-02-25 14:55   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 60/61] KVM: Drop largepages_enabled and its accessor/mutator Sean Christopherson
2020-02-25 14:56   ` Vitaly Kuznetsov
2020-02-01 18:52 ` [PATCH 61/61] KVM: x86: Move VMX's host_efer to common x86 code Sean Christopherson
2020-02-25 15:02   ` Vitaly Kuznetsov
     [not found] ` <87wo8ak84x.fsf@vitty.brq.redhat.com>
2020-02-25 15:25   ` [PATCH 00/61] KVM: x86: Introduce KVM cpu caps Paolo Bonzini
2020-02-28  1:37     ` Sean Christopherson
2020-02-28  7:04       ` Paolo Bonzini
2020-02-29 18:32   ` Sean Christopherson
2020-03-02  9:03     ` Vitaly Kuznetsov
