linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/2] KVM: MSR-based features
@ 2018-02-15 23:11 Tom Lendacky
  2018-02-15 23:12 ` [PATCH v2 1/2] KVM: x86: Add a framework for supporting " Tom Lendacky
  2018-02-15 23:12 ` [PATCH v2 2/2] KVM: SVM: Add MSR-based feature support for serializing LFENCE Tom Lendacky
  0 siblings, 2 replies; 10+ messages in thread
From: Tom Lendacky @ 2018-02-15 23:11 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Paolo Bonzini, Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

The following series implements support within KVM for MSR-based features.
The first patch adds the framework used to enumerate and retrieve the
available MSR-based features.  The second patch uses that framework to
allow a guest to determine whether the LFENCE instruction is serializing
on AMD processors.
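
For reference, a minimal sketch of how a VMM might consume the proposed
interface, assuming the v2 semantics from patch 1 (KVM_GET_MSR_INDEX_LIST
and KVM_GET_MSR issued on the VM file descriptor, patched uapi headers,
error handling abbreviated):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* sys_fd is the /dev/kvm fd, vm_fd the fd returned by KVM_CREATE_VM. */
static void probe_msr_features(int sys_fd, int vm_fd)
{
	struct kvm_msr_list probe = { .nmsrs = 0 };
	struct kvm_msr_list *list;
	unsigned int i;

	if (ioctl(sys_fd, KVM_CHECK_EXTENSION, KVM_CAP_GET_MSR_FEATURES) <= 0)
		return;

	/* First call with nmsrs == 0: KVM writes back the real count (-E2BIG). */
	ioctl(vm_fd, KVM_GET_MSR_INDEX_LIST, &probe);

	list = calloc(1, sizeof(*list) + probe.nmsrs * sizeof(__u32));
	list->nmsrs = probe.nmsrs;
	if (ioctl(vm_fd, KVM_GET_MSR_INDEX_LIST, list) == 0) {
		for (i = 0; i < list->nmsrs; i++) {
			struct kvm_msr_entry entry = { .index = list->indices[i] };

			/* 0 == feature supported; data holds the feature value. */
			if (ioctl(vm_fd, KVM_GET_MSR, &entry) == 0 &&
			    entry.index == 0xc0011029 /* MSR_F10H_DECFG */)
				printf("DE_CFG feature value: 0x%llx\n",
				       (unsigned long long)entry.data);
		}
	}
	free(list);
}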

This series is based on the master branch of the KVM git tree.

https://git.kernel.org/pub/scm/virt/kvm/kvm.git

---

Tom Lendacky (2):
      KVM: x86: Add a framework for supporting MSR-based features
      KVM: SVM: Add MSR-based feature support for serializing LFENCE


 Documentation/virtual/kvm/api.txt |   47 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/kvm_host.h   |    2 +
 arch/x86/include/uapi/asm/kvm.h   |    1 +
 arch/x86/kvm/svm.c                |   43 +++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c                |   52 +++++++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h          |    4 +++
 6 files changed, 149 insertions(+)

-- 
Tom Lendacky

* [PATCH v2 1/2] KVM: x86: Add a framework for supporting MSR-based features
  2018-02-15 23:11 [PATCH v2 0/2] KVM: MSR-based features Tom Lendacky
@ 2018-02-15 23:12 ` Tom Lendacky
  2018-02-21 11:41   ` Paolo Bonzini
  2018-02-15 23:12 ` [PATCH v2 2/2] KVM: SVM: Add MSR-based feature support for serializing LFENCE Tom Lendacky
  1 sibling, 1 reply; 10+ messages in thread
From: Tom Lendacky @ 2018-02-15 23:12 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Paolo Bonzini, Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

Provide a new KVM capability that allows bits within MSRs to be recognized
as features.  Two new ioctls are added to the VM ioctl routine to retrieve
the list of these MSRs and to retrieve their values.  A kvm_x86_ops
callback is used to determine support for the listed MSR-based features.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 Documentation/virtual/kvm/api.txt |   47 ++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/kvm_host.h   |    2 +
 arch/x86/include/uapi/asm/kvm.h   |    1 +
 arch/x86/kvm/x86.c                |   51 +++++++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h          |    4 +++
 5 files changed, 105 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 792fa87..cd580e4 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -3500,6 +3500,53 @@ Returns: 0 on success; -1 on error
 This ioctl can be used to unregister the guest memory region registered
 with KVM_MEMORY_ENCRYPT_REG_REGION ioctl above.
 
+4.113 KVM_GET_MSR_INDEX_LIST
+
+Capability: KVM_CAP_GET_MSR_FEATURES
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_msr_list (in/out)
+Returns: 0 on success; -1 on error
+Errors:
+  EFAULT:    the msr index list cannot be read from or written to
+  E2BIG:     the msr index list is too big to fit in the array specified by
+             the user.
+
+struct kvm_msr_list {
+	__u32 nmsrs; /* number of msrs in entries */
+	__u32 indices[0];
+};
+
+This ioctl returns the msrs that represent possibly supported features.
+This list varies by kvm version and host processor.  The user fills in
+the size of the indices array in nmsrs, and in return kvm adjusts nmsrs
+to reflect the actual number of msrs and fills in the indices array with
+their numbers.  To verify if an msr-based feature is available, the user
+should invoke KVM_GET_MSR for the msr in question.
+
+4.114 KVM_GET_MSR
+
+Capability: KVM_CAP_GET_MSR_FEATURES
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_msr_entry (in/out)
+Returns: 0 on MSR feature supported;
+         1 on MSR feature not supported;
+        -1 on error
+Errors:
+  EFAULT:    the msr entry cannot be read from or written to
+
+struct kvm_msr_entry {
+	__u32 index;
+	__u32 reserved;
+	__u64 data;
+};
+
+Using the list of msr-based features returned from KVM_GET_MSR_INDEX_LIST,
+the user can determine support for a given msr-based feature with this ioctl.
+When a value of 0 is returned, the feature is supported and the data member
+of kvm_msr_entry contains its value.
+
 
 5. The kvm_run structure
 ------------------------
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dd6f57a..e466bce 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1095,6 +1095,8 @@ struct kvm_x86_ops {
 	int (*mem_enc_op)(struct kvm *kvm, void __user *argp);
 	int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
+
+	int (*msr_feature)(struct kvm_msr_entry *entry);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index f3a9604..d5536f1 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -172,6 +172,7 @@ struct kvm_fpu {
 	__u32 pad2;
 };
 
+/* for KVM_GET_MSR */
 struct kvm_msr_entry {
 	__u32 index;
 	__u32 reserved;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c8a0b54..0219c5c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1049,6 +1049,15 @@ bool kvm_rdpmc(struct kvm_vcpu *vcpu)
 
 static unsigned num_emulated_msrs;
 
+/*
+ * List of msr numbers which are used to expose MSR-based features that
+ * can be used by a hypervisor to validate requested CPU features.
+ */
+static u32 msr_based_features[] = {
+};
+
+static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
+
 bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	if (efer & efer_reserved_bits)
@@ -2785,6 +2794,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SET_BOOT_CPU_ID:
  	case KVM_CAP_SPLIT_IRQCHIP:
 	case KVM_CAP_IMMEDIATE_EXIT:
+	case KVM_CAP_GET_MSR_FEATURES:
 		r = 1;
 		break;
 	case KVM_CAP_ADJUST_CLOCK:
@@ -4410,6 +4420,47 @@ long kvm_arch_vm_ioctl(struct file *filp,
 			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
 		break;
 	}
+	case KVM_GET_MSR_INDEX_LIST: {
+		struct kvm_msr_list __user *user_msr_list = argp;
+		struct kvm_msr_list msr_list;
+		unsigned int n;
+
+		r = -EFAULT;
+		if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list)))
+			goto out;
+		n = msr_list.nmsrs;
+		msr_list.nmsrs = num_msr_based_features;
+		if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list)))
+			goto out;
+		r = -E2BIG;
+		if (n < msr_list.nmsrs)
+			goto out;
+		r = -EFAULT;
+		if (copy_to_user(user_msr_list->indices, &msr_based_features,
+				 num_msr_based_features * sizeof(u32)))
+			goto out;
+		r = 0;
+		break;
+	}
+	case KVM_GET_MSR: {
+		struct kvm_msr_entry __user *user_msr = argp;
+		struct kvm_msr_entry msr;
+
+		r = -EFAULT;
+		if (copy_from_user(&msr, user_msr, sizeof(msr)))
+			goto out;
+
+		r = 1;
+		if (!kvm_x86_ops->msr_feature || kvm_x86_ops->msr_feature(&msr))
+			goto out;
+
+		r = -EFAULT;
+		if (copy_to_user(user_msr, &msr, sizeof(msr)))
+			goto out;
+
+		r = 0;
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 0fb5ef9..48e0368 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -934,6 +934,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_AIS_MIGRATION 150
 #define KVM_CAP_PPC_GET_CPU_CHAR 151
 #define KVM_CAP_S390_BPB 152
+#define KVM_CAP_GET_MSR_FEATURES 153
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1373,6 +1374,9 @@ struct kvm_enc_region {
 #define KVM_MEMORY_ENCRYPT_REG_REGION    _IOR(KVMIO, 0xbb, struct kvm_enc_region)
 #define KVM_MEMORY_ENCRYPT_UNREG_REGION  _IOR(KVMIO, 0xbc, struct kvm_enc_region)
 
+/* Available with KVM_CAP_GET_MSR_FEATURES */
+#define KVM_GET_MSR		  _IOR(KVMIO, 0xbd, struct kvm_msr_entry)
+
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
 	/* Guest initialization commands */

* [PATCH v2 2/2] KVM: SVM: Add MSR-based feature support for serializing LFENCE
  2018-02-15 23:11 [PATCH v2 0/2] KVM: MSR-based features Tom Lendacky
  2018-02-15 23:12 ` [PATCH v2 1/2] KVM: x86: Add a framework for supporting " Tom Lendacky
@ 2018-02-15 23:12 ` Tom Lendacky
  2018-02-21 11:42   ` Paolo Bonzini
  1 sibling, 1 reply; 10+ messages in thread
From: Tom Lendacky @ 2018-02-15 23:12 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Paolo Bonzini, Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

In order to determine if LFENCE is a serializing instruction on AMD
processors, MSR 0xc0011029 (MSR_F10H_DECFG) must be read and the state
of bit 1 checked.  This patch will add support to allow a guest to
properly make this determination.

Add the MSR feature callback operation to svm.c and add MSR 0xc0011029
to the list of MSR-based features.  If LFENCE is serializing, then the
feature is supported, allowing the hypervisor to set the value of the
MSR that the guest will see.  Support is also added to write (hypervisor only)
and read the MSR value for the guest.  A write by the guest will result in
a #GP.  A read by the guest will return the value as set by the host.  In
this way, the support to expose the feature to the guest is controlled by
the hypervisor.
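
As an illustration (not part of this patch), a guest kernel granted this MSR
might check the bit roughly as follows; the constants are the ones from
<asm/msr-index.h> that the patch uses, and #GP handling for hosts that do
not expose the MSR is omitted:

#include <linux/types.h>
#include <asm/msr.h>		/* rdmsrl() */
#include <asm/msr-index.h>	/* MSR_F10H_DECFG, MSR_F10H_DECFG_LFENCE_SERIALIZE */

/* True when the virtualized DE_CFG MSR reports LFENCE as serializing. */
static bool guest_lfence_is_serializing(void)
{
	u64 val;

	rdmsrl(MSR_F10H_DECFG, val);
	return !!(val & MSR_F10H_DECFG_LFENCE_SERIALIZE);
}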

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kvm/svm.c |   43 +++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c |    1 +
 2 files changed, 44 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b3e488a..2b40885 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -178,6 +178,8 @@ struct vcpu_svm {
 	uint64_t sysenter_eip;
 	uint64_t tsc_aux;
 
+	u64 msr_decfg;
+
 	u64 next_rip;
 
 	u64 host_user_msrs[NR_HOST_SAVE_USER_MSRS];
@@ -3860,6 +3862,24 @@ static int cr8_write_interception(struct vcpu_svm *svm)
 	return 0;
 }
 
+static int svm_msr_feature(struct kvm_msr_entry *msr)
+{
+	int ret = 0;
+
+	msr->data = 0;
+
+	switch (msr->index) {
+	case MSR_F10H_DECFG:
+		if (boot_cpu_has(X86_FEATURE_LFENCE_RDTSC))
+			msr->data |= MSR_F10H_DECFG_LFENCE_SERIALIZE;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
 static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3955,6 +3975,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			msr_info->data = 0x1E;
 		}
 		break;
+	case MSR_F10H_DECFG:
+		msr_info->data = svm->msr_decfg;
+		break;
 	default:
 		return kvm_get_msr_common(vcpu, msr_info);
 	}
@@ -4133,6 +4156,24 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	case MSR_VM_IGNNE:
 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
 		break;
+	case MSR_F10H_DECFG: {
+		struct kvm_msr_entry msr_entry;
+
+		msr_entry.index = msr->index;
+		if (svm_msr_feature(&msr_entry))
+			return 1;
+
+		/* Check the supported bits */
+		if (data & ~msr_entry.data)
+			return 1;
+
+		/* Don't allow the guest to change a bit, #GP */
+		if (!msr->host_initiated && (data ^ msr_entry.data))
+			return 1;
+
+		svm->msr_decfg = data;
+		break;
+	}
 	case MSR_IA32_APICBASE:
 		if (kvm_vcpu_apicv_active(vcpu))
 			avic_update_vapic_bar(to_svm(vcpu), data);
@@ -6917,6 +6958,8 @@ static int svm_unregister_enc_region(struct kvm *kvm,
 	.mem_enc_op = svm_mem_enc_op,
 	.mem_enc_reg_region = svm_register_enc_region,
 	.mem_enc_unreg_region = svm_unregister_enc_region,
+
+	.msr_feature = svm_msr_feature,
 };
 
 static int __init svm_init(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0219c5c..42fbbf4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1054,6 +1054,7 @@ bool kvm_rdpmc(struct kvm_vcpu *vcpu)
  * can be used by a hypervisor to validate requested CPU features.
  */
 static u32 msr_based_features[] = {
+	MSR_F10H_DECFG,
 };
 
 static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);

* Re: [PATCH v2 1/2] KVM: x86: Add a framework for supporting MSR-based features
  2018-02-15 23:12 ` [PATCH v2 1/2] KVM: x86: Add a framework for supporting " Tom Lendacky
@ 2018-02-21 11:41   ` Paolo Bonzini
  2018-02-21 14:15     ` Tom Lendacky
  0 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2018-02-21 11:41 UTC (permalink / raw)
  To: Tom Lendacky, x86, linux-kernel, kvm
  Cc: Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

On 16/02/2018 00:12, Tom Lendacky wrote:
> +static u32 msr_based_features[] = {
> +};
> +
> +static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
> +
>  bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
>  {
>  	if (efer & efer_reserved_bits)
> @@ -2785,6 +2794,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_SET_BOOT_CPU_ID:
>   	case KVM_CAP_SPLIT_IRQCHIP:
>  	case KVM_CAP_IMMEDIATE_EXIT:
> +	case KVM_CAP_GET_MSR_FEATURES:
>  		r = 1;
>  		break;
>  	case KVM_CAP_ADJUST_CLOCK:
> @@ -4410,6 +4420,47 @@ long kvm_arch_vm_ioctl(struct file *filp,
>  			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
>  		break;
>  	}
> +	case KVM_GET_MSR_INDEX_LIST: {
> +		struct kvm_msr_list __user *user_msr_list = argp;
> +		struct kvm_msr_list msr_list;
> +		unsigned int n;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list)))
> +			goto out;
> +		n = msr_list.nmsrs;
> +		msr_list.nmsrs = num_msr_based_features;
> +		if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list)))
> +			goto out;
> +		r = -E2BIG;
> +		if (n < msr_list.nmsrs)
> +			goto out;
> +		r = -EFAULT;
> +		if (copy_to_user(user_msr_list->indices, &msr_based_features,
> +				 num_msr_based_features * sizeof(u32)))
> +			goto out;
> +		r = 0;
> +		break;

I think it's better to have some logic in kvm_init_msr_list, to filter
the MSR list based on whatever MSRs the backend provides.
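
A rough sketch of that approach (not code from this thread; the
msr_based_features_all name is a placeholder) could probe each candidate
through the backend callback and keep only the MSRs it reports as
supported:

static u32 msr_based_features_all[] = {
	MSR_F10H_DECFG,
};

static u32 msr_based_features[ARRAY_SIZE(msr_based_features_all)];
static unsigned int num_msr_based_features;

static void kvm_init_msr_based_features(void)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(msr_based_features_all); i++) {
		struct kvm_msr_entry msr;

		msr.index = msr_based_features_all[i];
		/* Keep only the MSRs the backend actually recognizes. */
		if (kvm_x86_ops->msr_feature && !kvm_x86_ops->msr_feature(&msr))
			msr_based_features[num_msr_based_features++] = msr.index;
	}
}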

> +	}
> +	case KVM_GET_MSR: {

It's not that the API isn't usable, KVM_GET_MSR is fine for what we need
here (it's not a fast path), but it's a bit confusing to have
KVM_GET_MSR and KVM_GET_MSRS.

I see two possibilities:

1) reuse KVM_GET_MSRS as in the previous version.  It's okay to
cut-and-paste code from msr_io.

2) find a name for KVM_GET_MSR that is better and different from
KVM_GET_MSRS.  KVM_GET_HOST_MSR or KVM_GET_HOST_FEATURE_MSR come to
mind, but I'm obviously open to other suggestions.

Thanks!

Paolo

> +		struct kvm_msr_entry __user *user_msr = argp;
> +		struct kvm_msr_entry msr;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&msr, user_msr, sizeof(msr)))
> +			goto out;
> +
> +		r = 1;
> +		if (!kvm_x86_ops->msr_feature || kvm_x86_ops->msr_feature(&msr))
> +			goto out;
> +
> +		r = -EFAULT;
> +		if (copy_to_user(user_msr, &msr, sizeof(msr)))
> +			goto out;
> +
> +		r = 0;
> +		break;
> +	}

* Re: [PATCH v2 2/2] KVM: SVM: Add MSR-based feature support for serializing LFENCE
  2018-02-15 23:12 ` [PATCH v2 2/2] KVM: SVM: Add MSR-based feature support for serializing LFENCE Tom Lendacky
@ 2018-02-21 11:42   ` Paolo Bonzini
  0 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2018-02-21 11:42 UTC (permalink / raw)
  To: Tom Lendacky, x86, linux-kernel, kvm
  Cc: Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

On 16/02/2018 00:12, Tom Lendacky wrote:
> In order to determine if LFENCE is a serializing instruction on AMD
> processors, MSR 0xc0011029 (MSR_F10H_DECFG) must be read and the state
> of bit 1 checked.  This patch will add support to allow a guest to
> properly make this determination.
> 
> Add the MSR feature callback operation to svm.c and add MSR 0xc0011029
> to the list of MSR-based features.  If LFENCE is serializing, then the
> feature is supported, allowing the hypervisor to set the value of the
> MSR that the guest will see.  Support is also added to write (hypervisor only)
> and read the MSR value for the guest.  A write by the guest will result in
> a #GP.  A read by the guest will return the value as set by the host.  In
> this way, the support to expose the feature to the guest is controlled by
> the hypervisor.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/kvm/svm.c |   43 +++++++++++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/x86.c |    1 +
>  2 files changed, 44 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index b3e488a..2b40885 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -178,6 +178,8 @@ struct vcpu_svm {
>  	uint64_t sysenter_eip;
>  	uint64_t tsc_aux;
>  
> +	u64 msr_decfg;
> +
>  	u64 next_rip;
>  
>  	u64 host_user_msrs[NR_HOST_SAVE_USER_MSRS];
> @@ -3860,6 +3862,24 @@ static int cr8_write_interception(struct vcpu_svm *svm)
>  	return 0;
>  }
>  
> +static int svm_msr_feature(struct kvm_msr_entry *msr)
> +{
> +	int ret = 0;
> +
> +	msr->data = 0;
> +
> +	switch (msr->index) {
> +	case MSR_F10H_DECFG:
> +		if (boot_cpu_has(X86_FEATURE_LFENCE_RDTSC))
> +			msr->data |= MSR_F10H_DECFG_LFENCE_SERIALIZE;
> +		break;
> +	default:
> +		ret = -EINVAL;
> +	}
> +
> +	return ret;
> +}
> +
>  static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
> @@ -3955,6 +3975,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  			msr_info->data = 0x1E;
>  		}
>  		break;
> +	case MSR_F10H_DECFG:
> +		msr_info->data = svm->msr_decfg;
> +		break;
>  	default:
>  		return kvm_get_msr_common(vcpu, msr_info);
>  	}
> @@ -4133,6 +4156,24 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	case MSR_VM_IGNNE:
>  		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
>  		break;
> +	case MSR_F10H_DECFG: {
> +		struct kvm_msr_entry msr_entry;
> +
> +		msr_entry.index = msr->index;
> +		if (svm_msr_feature(&msr_entry))
> +			return 1;
> +
> +		/* Check the supported bits */
> +		if (data & ~msr_entry.data)
> +			return 1;
> +
> +		/* Don't allow the guest to change a bit, #GP */
> +		if (!msr->host_initiated && (data ^ msr_entry.data))
> +			return 1;
> +
> +		svm->msr_decfg = data;
> +		break;
> +	}
>  	case MSR_IA32_APICBASE:
>  		if (kvm_vcpu_apicv_active(vcpu))
>  			avic_update_vapic_bar(to_svm(vcpu), data);
> @@ -6917,6 +6958,8 @@ static int svm_unregister_enc_region(struct kvm *kvm,
>  	.mem_enc_op = svm_mem_enc_op,
>  	.mem_enc_reg_region = svm_register_enc_region,
>  	.mem_enc_unreg_region = svm_unregister_enc_region,
> +
> +	.msr_feature = svm_msr_feature,
>  };
>  
>  static int __init svm_init(void)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 0219c5c..42fbbf4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1054,6 +1054,7 @@ bool kvm_rdpmc(struct kvm_vcpu *vcpu)
>   * can be used by a hypervisor to validate requested CPU features.
>   */
>  static u32 msr_based_features[] = {
> +	MSR_F10H_DECFG,
>  };
>  
>  static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

* Re: [PATCH v2 1/2] KVM: x86: Add a framework for supporting MSR-based features
  2018-02-21 11:41   ` Paolo Bonzini
@ 2018-02-21 14:15     ` Tom Lendacky
  2018-02-21 14:32       ` Paolo Bonzini
  0 siblings, 1 reply; 10+ messages in thread
From: Tom Lendacky @ 2018-02-21 14:15 UTC (permalink / raw)
  To: Paolo Bonzini, x86, linux-kernel, kvm
  Cc: Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

On 2/21/2018 5:41 AM, Paolo Bonzini wrote:
> On 16/02/2018 00:12, Tom Lendacky wrote:
>> +static u32 msr_based_features[] = {
>> +};
>> +
>> +static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
>> +
>>  bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
>>  {
>>  	if (efer & efer_reserved_bits)
>> @@ -2785,6 +2794,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>  	case KVM_CAP_SET_BOOT_CPU_ID:
>>   	case KVM_CAP_SPLIT_IRQCHIP:
>>  	case KVM_CAP_IMMEDIATE_EXIT:
>> +	case KVM_CAP_GET_MSR_FEATURES:
>>  		r = 1;
>>  		break;
>>  	case KVM_CAP_ADJUST_CLOCK:
>> @@ -4410,6 +4420,47 @@ long kvm_arch_vm_ioctl(struct file *filp,
>>  			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
>>  		break;
>>  	}
>> +	case KVM_GET_MSR_INDEX_LIST: {
>> +		struct kvm_msr_list __user *user_msr_list = argp;
>> +		struct kvm_msr_list msr_list;
>> +		unsigned int n;
>> +
>> +		r = -EFAULT;
>> +		if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list)))
>> +			goto out;
>> +		n = msr_list.nmsrs;
>> +		msr_list.nmsrs = num_msr_based_features;
>> +		if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list)))
>> +			goto out;
>> +		r = -E2BIG;
>> +		if (n < msr_list.nmsrs)
>> +			goto out;
>> +		r = -EFAULT;
>> +		if (copy_to_user(user_msr_list->indices, &msr_based_features,
>> +				 num_msr_based_features * sizeof(u32)))
>> +			goto out;
>> +		r = 0;
>> +		break;
> 
> I think it's better to have some logic in kvm_init_msr_list, to filter
> the MSR list based on whatever MSRs the backend provides.

Ok, that's what I had originally and then you said to just return the full
list and let KVM_GET_MSR return a 0 or 1 if it was supported. I can switch
it back.

> 
>> +	}
>> +	case KVM_GET_MSR: {
> 
> It's not that the API isn't usable, KVM_GET_MSR is fine for what we need
> here (it's not a fast path), but it's a bit confusing to have
> KVM_GET_MSR and KVM_GET_MSRS.
> 
> I see two possibilities:
> 
> 1) reuse KVM_GET_MSRS as in the previous version.  It's okay to
> cut-and-paste code from msr_io.

If I go back to trimming the list based on support, then KVM_GET_MSRS can
be used.

Thanks,
Tom

> 
> 2) find a name for KVM_GET_MSR that is better and different from
> KVM_GET_MSRS.  KVM_GET_HOST_MSR or KVM_GET_HOST_FEATURE_MSR come to
> mind, but I'm obviously open to other suggestions.
> 
> Thanks!
> 
> Paolo
> 
>> +		struct kvm_msr_entry __user *user_msr = argp;
>> +		struct kvm_msr_entry msr;
>> +
>> +		r = -EFAULT;
>> +		if (copy_from_user(&msr, user_msr, sizeof(msr)))
>> +			goto out;
>> +
>> +		r = 1;
>> +		if (!kvm_x86_ops->msr_feature || kvm_x86_ops->msr_feature(&msr))
>> +			goto out;
>> +
>> +		r = -EFAULT;
>> +		if (copy_to_user(user_msr, &msr, sizeof(msr)))
>> +			goto out;
>> +
>> +		r = 0;
>> +		break;
>> +	}
> 

* Re: [PATCH v2 1/2] KVM: x86: Add a framework for supporting MSR-based features
  2018-02-21 14:15     ` Tom Lendacky
@ 2018-02-21 14:32       ` Paolo Bonzini
  2018-02-21 14:47         ` Tom Lendacky
  0 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2018-02-21 14:32 UTC (permalink / raw)
  To: Tom Lendacky, x86, linux-kernel, kvm
  Cc: Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

On 21/02/2018 15:15, Tom Lendacky wrote:
> On 2/21/2018 5:41 AM, Paolo Bonzini wrote:
>> On 16/02/2018 00:12, Tom Lendacky wrote:
>>> +static u32 msr_based_features[] = {
>>> +};
>>> +
>>> +static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
>>> +
>>>  bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
>>>  {
>>>  	if (efer & efer_reserved_bits)
>>> @@ -2785,6 +2794,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>>  	case KVM_CAP_SET_BOOT_CPU_ID:
>>>   	case KVM_CAP_SPLIT_IRQCHIP:
>>>  	case KVM_CAP_IMMEDIATE_EXIT:
>>> +	case KVM_CAP_GET_MSR_FEATURES:
>>>  		r = 1;
>>>  		break;
>>>  	case KVM_CAP_ADJUST_CLOCK:
>>> @@ -4410,6 +4420,47 @@ long kvm_arch_vm_ioctl(struct file *filp,
>>>  			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
>>>  		break;
>>>  	}
>>> +	case KVM_GET_MSR_INDEX_LIST: {
>>> +		struct kvm_msr_list __user *user_msr_list = argp;
>>> +		struct kvm_msr_list msr_list;
>>> +		unsigned int n;
>>> +
>>> +		r = -EFAULT;
>>> +		if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list)))
>>> +			goto out;
>>> +		n = msr_list.nmsrs;
>>> +		msr_list.nmsrs = num_msr_based_features;
>>> +		if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list)))
>>> +			goto out;
>>> +		r = -E2BIG;
>>> +		if (n < msr_list.nmsrs)
>>> +			goto out;
>>> +		r = -EFAULT;
>>> +		if (copy_to_user(user_msr_list->indices, &msr_based_features,
>>> +				 num_msr_based_features * sizeof(u32)))
>>> +			goto out;
>>> +		r = 0;
>>> +		break;
>>
>> I think it's better to have some logic in kvm_init_msr_list, to filter
>> the MSR list based on whatever MSRs the backend provides.
> 
> Ok, that's what I had originally and then you said to just return the full
> list and let KVM_GET_MSR return a 0 or 1 if it was supported. I can switch
> it back.

Hmm, I cannot find this remark (I would have been very confused, so I
tried to look for it).  I commented on removing kvm_valid_msr_feature,
but not kvm_init_msr_list.

>>
>>> +	}
>>> +	case KVM_GET_MSR: {
>>
>> It's not that the API isn't usable, KVM_GET_MSR is fine for what we need
>> here (it's not a fast path), but it's a bit confusing to have
>> KVM_GET_MSR and KVM_GET_MSRS.
>>
>> I see two possibilities:
>>
>> 1) reuse KVM_GET_MSRS as in the previous version.  It's okay to
>> cut-and-paste code from msr_io.
> 
> If I go back to trimming the list based on support, then KVM_GET_MSRS can
> be used.

No problem, renaming is enough---I should have made a better suggestion
in the previous review.

Paolo

* Re: [PATCH v2 1/2] KVM: x86: Add a framework for supporting MSR-based features
  2018-02-21 14:32       ` Paolo Bonzini
@ 2018-02-21 14:47         ` Tom Lendacky
  2018-02-21 14:52           ` Tom Lendacky
  0 siblings, 1 reply; 10+ messages in thread
From: Tom Lendacky @ 2018-02-21 14:47 UTC (permalink / raw)
  To: Paolo Bonzini, x86, linux-kernel, kvm
  Cc: Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

On 2/21/2018 8:32 AM, Paolo Bonzini wrote:
> On 21/02/2018 15:15, Tom Lendacky wrote:
>> On 2/21/2018 5:41 AM, Paolo Bonzini wrote:
>>> On 16/02/2018 00:12, Tom Lendacky wrote:
>>>> +static u32 msr_based_features[] = {
>>>> +};
>>>> +
>>>> +static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
>>>> +
>>>>  bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
>>>>  {
>>>>  	if (efer & efer_reserved_bits)
>>>> @@ -2785,6 +2794,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>>>  	case KVM_CAP_SET_BOOT_CPU_ID:
>>>>   	case KVM_CAP_SPLIT_IRQCHIP:
>>>>  	case KVM_CAP_IMMEDIATE_EXIT:
>>>> +	case KVM_CAP_GET_MSR_FEATURES:
>>>>  		r = 1;
>>>>  		break;
>>>>  	case KVM_CAP_ADJUST_CLOCK:
>>>> @@ -4410,6 +4420,47 @@ long kvm_arch_vm_ioctl(struct file *filp,
>>>>  			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
>>>>  		break;
>>>>  	}
>>>> +	case KVM_GET_MSR_INDEX_LIST: {
>>>> +		struct kvm_msr_list __user *user_msr_list = argp;
>>>> +		struct kvm_msr_list msr_list;
>>>> +		unsigned int n;
>>>> +
>>>> +		r = -EFAULT;
>>>> +		if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list)))
>>>> +			goto out;
>>>> +		n = msr_list.nmsrs;
>>>> +		msr_list.nmsrs = num_msr_based_features;
>>>> +		if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list)))
>>>> +			goto out;
>>>> +		r = -E2BIG;
>>>> +		if (n < msr_list.nmsrs)
>>>> +			goto out;
>>>> +		r = -EFAULT;
>>>> +		if (copy_to_user(user_msr_list->indices, &msr_based_features,
>>>> +				 num_msr_based_features * sizeof(u32)))
>>>> +			goto out;
>>>> +		r = 0;
>>>> +		break;
>>>
>>> I think it's better to have some logic in kvm_init_msr_list, to filter
>>> the MSR list based on whatever MSRs the backend provides.
>>
>> Ok, that's what I had originally and then you said to just return the full
>> list and let KVM_GET_MSR return a 0 or 1 if it was supported. I can switch
>> it back.
> 
> Hmm, I cannot find this remark (I would have been very confused, so I
> tried to look for it).  I commented on removing kvm_valid_msr_feature,
> but not kvm_init_msr_list.

I think this is the reply that sent me off on that track:
  https://marc.info/?l=linux-kernel&m=151862648123153&w=2

I'll make it consistent with the other MSR-related items and initialize
the list in kvm_init_msr_list().  I'll change the signature of the
msr_feature() kvm_x86_ops callback to take an index and optionally return
a data value so it can be used to check for support when building the
list and return a value when needed.

Thanks,
Tom

> 
>>>
>>>> +	}
>>>> +	case KVM_GET_MSR: {
>>>
>>> It's not that the API isn't usable, KVM_GET_MSR is fine for what we need
>>> here (it's not a fast path), but it's a bit confusing to have
>>> KVM_GET_MSR and KVM_GET_MSRS.
>>>
>>> I see two possibilities:
>>>
>>> 1) reuse KVM_GET_MSRS as in the previous version.  It's okay to
>>> cut-and-paste code from msr_io.
>>
>> If I go back to trimming the list based on support, then KVM_GET_MSRS can
>> be used.
> 
> No problem, renaming is enough---I should have made a better suggestion
> in the previous review.
> 
> Paolo
> 

* Re: [PATCH v2 1/2] KVM: x86: Add a framework for supporting MSR-based features
  2018-02-21 14:47         ` Tom Lendacky
@ 2018-02-21 14:52           ` Tom Lendacky
  2018-02-21 16:36             ` Paolo Bonzini
  0 siblings, 1 reply; 10+ messages in thread
From: Tom Lendacky @ 2018-02-21 14:52 UTC (permalink / raw)
  To: Paolo Bonzini, x86, linux-kernel, kvm
  Cc: Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

On 2/21/2018 8:47 AM, Tom Lendacky wrote:
> On 2/21/2018 8:32 AM, Paolo Bonzini wrote:
>> On 21/02/2018 15:15, Tom Lendacky wrote:
>>> On 2/21/2018 5:41 AM, Paolo Bonzini wrote:
>>>> On 16/02/2018 00:12, Tom Lendacky wrote:
>>>>> +static u32 msr_based_features[] = {
>>>>> +};
>>>>> +
>>>>> +static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
>>>>> +
>>>>>  bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
>>>>>  {
>>>>>  	if (efer & efer_reserved_bits)
>>>>> @@ -2785,6 +2794,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>>>>  	case KVM_CAP_SET_BOOT_CPU_ID:
>>>>>   	case KVM_CAP_SPLIT_IRQCHIP:
>>>>>  	case KVM_CAP_IMMEDIATE_EXIT:
>>>>> +	case KVM_CAP_GET_MSR_FEATURES:
>>>>>  		r = 1;
>>>>>  		break;
>>>>>  	case KVM_CAP_ADJUST_CLOCK:
>>>>> @@ -4410,6 +4420,47 @@ long kvm_arch_vm_ioctl(struct file *filp,
>>>>>  			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
>>>>>  		break;
>>>>>  	}
>>>>> +	case KVM_GET_MSR_INDEX_LIST: {
>>>>> +		struct kvm_msr_list __user *user_msr_list = argp;
>>>>> +		struct kvm_msr_list msr_list;
>>>>> +		unsigned int n;
>>>>> +
>>>>> +		r = -EFAULT;
>>>>> +		if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list)))
>>>>> +			goto out;
>>>>> +		n = msr_list.nmsrs;
>>>>> +		msr_list.nmsrs = num_msr_based_features;
>>>>> +		if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list)))
>>>>> +			goto out;
>>>>> +		r = -E2BIG;
>>>>> +		if (n < msr_list.nmsrs)
>>>>> +			goto out;
>>>>> +		r = -EFAULT;
>>>>> +		if (copy_to_user(user_msr_list->indices, &msr_based_features,
>>>>> +				 num_msr_based_features * sizeof(u32)))
>>>>> +			goto out;
>>>>> +		r = 0;
>>>>> +		break;
>>>>
>>>> I think it's better to have some logic in kvm_init_msr_list, to filter
>>>> the MSR list based on whatever MSRs the backend provides.
>>>
>>> Ok, that's what I had originally and then you said to just return the full
>>> list and let KVM_GET_MSR return a 0 or 1 if it was supported. I can switch
>>> it back.
>>
>> Hmm, I cannot find this remark (I would have been very confused, so I
>> tried to look for it).  I commented on removing kvm_valid_msr_feature,
>> but not kvm_init_msr_list.
> 
> I think this is the reply that sent me off on that track:
>   https://marc.info/?l=linux-kernel&m=151862648123153&w=2
> 
> I'll make it consistent with the other MSR-related items and initialize
> the list in kvm_init_msr_list().  I'll change the signature of the
> msr_feature() kvm_x86_ops callback to take an index and optionally return
> a data value so it can be used to check for support when building the
> list and return a value when needed.

Hmm, actually I'll just leave the signature alone and pass in a local
kvm_msr_entry struct variable for the call when initializing the list.

Thanks,
Tom

> 
> Thanks,
> Tom
> 
>>
>>>>
>>>>> +	}
>>>>> +	case KVM_GET_MSR: {
>>>>
>>>> It's not that the API isn't usable, KVM_GET_MSR is fine for what we need
>>>> here (it's not a fast path), but it's a bit confusing to have
>>>> KVM_GET_MSR and KVM_GET_MSRS.
>>>>
>>>> I see two possibilities:
>>>>
>>>> 1) reuse KVM_GET_MSRS as in the previous version.  It's okay to
>>>> cut-and-paste code from msr_io.
>>>
>>> If I go back to trimming the list based on support, then KVM_GET_MSRS can
>>> be used.
>>
>> No problem, renaming is enough---I should have made a better suggestion
>> in the previous review.
>>
>> Paolo
>>

* Re: [PATCH v2 1/2] KVM: x86: Add a framework for supporting MSR-based features
  2018-02-21 14:52           ` Tom Lendacky
@ 2018-02-21 16:36             ` Paolo Bonzini
  0 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2018-02-21 16:36 UTC (permalink / raw)
  To: Tom Lendacky, x86, linux-kernel, kvm
  Cc: Joerg Roedel, Borislav Petkov, Thomas Gleixner,
	Radim Krčmář

On 21/02/2018 15:52, Tom Lendacky wrote:
> On 2/21/2018 8:47 AM, Tom Lendacky wrote:
>> On 2/21/2018 8:32 AM, Paolo Bonzini wrote:
>>> On 21/02/2018 15:15, Tom Lendacky wrote:
>>>> On 2/21/2018 5:41 AM, Paolo Bonzini wrote:
>>>>> On 16/02/2018 00:12, Tom Lendacky wrote:
>>>>>> +static u32 msr_based_features[] = {
>>>>>> +};
>>>>>> +
>>>>>> +static unsigned int num_msr_based_features = ARRAY_SIZE(msr_based_features);
>>>>>> +
>>>>>>  bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
>>>>>>  {
>>>>>>  	if (efer & efer_reserved_bits)
>>>>>> @@ -2785,6 +2794,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>>>>>  	case KVM_CAP_SET_BOOT_CPU_ID:
>>>>>>   	case KVM_CAP_SPLIT_IRQCHIP:
>>>>>>  	case KVM_CAP_IMMEDIATE_EXIT:
>>>>>> +	case KVM_CAP_GET_MSR_FEATURES:
>>>>>>  		r = 1;
>>>>>>  		break;
>>>>>>  	case KVM_CAP_ADJUST_CLOCK:
>>>>>> @@ -4410,6 +4420,47 @@ long kvm_arch_vm_ioctl(struct file *filp,
>>>>>>  			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
>>>>>>  		break;
>>>>>>  	}
>>>>>> +	case KVM_GET_MSR_INDEX_LIST: {
>>>>>> +		struct kvm_msr_list __user *user_msr_list = argp;
>>>>>> +		struct kvm_msr_list msr_list;
>>>>>> +		unsigned int n;
>>>>>> +
>>>>>> +		r = -EFAULT;
>>>>>> +		if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list)))
>>>>>> +			goto out;
>>>>>> +		n = msr_list.nmsrs;
>>>>>> +		msr_list.nmsrs = num_msr_based_features;
>>>>>> +		if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list)))
>>>>>> +			goto out;
>>>>>> +		r = -E2BIG;
>>>>>> +		if (n < msr_list.nmsrs)
>>>>>> +			goto out;
>>>>>> +		r = -EFAULT;
>>>>>> +		if (copy_to_user(user_msr_list->indices, &msr_based_features,
>>>>>> +				 num_msr_based_features * sizeof(u32)))
>>>>>> +			goto out;
>>>>>> +		r = 0;
>>>>>> +		break;
>>>>>
>>>>> I think it's better to have some logic in kvm_init_msr_list, to filter
>>>>> the MSR list based on whatever MSRs the backend provides.
>>>>
>>>> Ok, that's what I had originally and then you said to just return the full
>>>> list and let KVM_GET_MSR return a 0 or 1 if it was supported. I can switch
>>>> it back.
>>>
>>> Hmm, I cannot find this remark (I would have been very confused, so I
>>> tried to look for it).  I commented on removing kvm_valid_msr_feature,
>>> but not kvm_init_msr_list.
>>
>> I think this is the reply that sent me off on that track:
>>   https://marc.info/?l=linux-kernel&m=151862648123153&w=2

Yeah, it was referring to AMD hosts that don't have the MSR.  Sorry for
the confusion.

>> I'll make it consistent with the other MSR-related items and initialize
>> the list in kvm_init_msr_list().  I'll change the signature of the
>> msr_feature() kvm_x86_ops callback to take an index and optionally return
>> a data value so it can be used to check for support when building the
>> list and return a value when needed.
> 
> Hmm, actually I'll just leave the signature alone and pass in a local
> kvm_msr_entry struct variable for the call when initializing the list.

Sounds good!

Paolo
