linux-kernel.vger.kernel.org archive mirror
* [PATCH v5 0/4] KVM: nSVM: ondemand nested state allocation
@ 2020-09-21 13:19 Maxim Levitsky
  2020-09-21 13:19 ` [PATCH v5 1/4] KVM: x86: xen_hvm_config: cleanup return values Maxim Levitsky
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Maxim Levitsky @ 2020-09-21 13:19 UTC (permalink / raw)
  To: kvm
  Cc: Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner, Maxim Levitsky

This is yet another version of ondemand nested state allocation.

In this version I adopted the suggestion of Sean Christopherson
to make the EFER write return a negative error, which is then
propagated to userspace.

So I fixed the WRMSR code to actually obey this (#GP on a positive
return value, exit to userspace on a negative return value, and
success on a zero return value), and fixed one user (xen) that
returned a negative error code on failures.
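
For reference, the resulting convention, as seen from a generic WRMSR
exit handler, looks roughly like this (a sketch for illustration only,
based on patch 2 of this series, not code copied verbatim from it):

	int ret = kvm_set_msr(vcpu, ecx, data);

	/* negative value: fatal error (e.g. -ENOMEM), report it to userspace */
	if (ret < 0)
		return ret;

	/* positive value: emulation error, inject #GP into the guest */
	if (ret > 0) {
		kvm_inject_gp(vcpu, 0);
		return 1;
	}

	/* zero: success, complete the instruction */
	return kvm_skip_emulated_instruction(vcpu);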

The XEN patch is only compile-tested. The rest were tested
by always returning -ENOMEM from svm_allocate_nested.
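
(Here "always returning -ENOMEM" means a temporary local test hack
along these lines, which is of course not part of the series itself:)

	/* test hack only: force the allocation failure path */
	int svm_allocate_nested(struct vcpu_svm *svm)
	{
		return -ENOMEM;
	}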

Best regards,
	Maxim Levitsky

Maxim Levitsky (4):
  KVM: x86: xen_hvm_config cleanup return values
  KVM: x86: report negative values from wrmsr to userspace
  KVM: x86: allow kvm_x86_ops.set_efer to return a value
  KVM: nSVM: implement ondemand allocation of the nested state

 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/emulate.c          |  7 ++--
 arch/x86/kvm/svm/nested.c       | 42 ++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          | 58 +++++++++++++++++++--------------
 arch/x86/kvm/svm/svm.h          |  8 ++++-
 arch/x86/kvm/vmx/vmx.c          |  9 +++--
 arch/x86/kvm/x86.c              | 36 ++++++++++----------
 7 files changed, 113 insertions(+), 49 deletions(-)

-- 
2.26.2




* [PATCH v5 1/4] KVM: x86: xen_hvm_config: cleanup return values
  2020-09-21 13:19 [PATCH v5 0/4] KVM: nSVM: ondemand nested state allocation Maxim Levitsky
@ 2020-09-21 13:19 ` Maxim Levitsky
  2020-09-21 13:19 ` [PATCH v5 2/4] KVM: x86: report negative values from wrmsr to userspace Maxim Levitsky
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Maxim Levitsky @ 2020-09-21 13:19 UTC (permalink / raw)
  To: kvm
  Cc: Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner, Maxim Levitsky

MSR writes should return 1 when a #GP needs to be injected into
the guest, and a negative value when a fatal error (e.g. out of
memory) occurs.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/x86.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 17f4995e80a7e..063d70e736f7f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2694,24 +2694,19 @@ static int xen_hvm_config(struct kvm_vcpu *vcpu, u64 data)
 	u32 page_num = data & ~PAGE_MASK;
 	u64 page_addr = data & PAGE_MASK;
 	u8 *page;
-	int r;
 
-	r = -E2BIG;
 	if (page_num >= blob_size)
-		goto out;
-	r = -ENOMEM;
+		return 1;
+
 	page = memdup_user(blob_addr + (page_num * PAGE_SIZE), PAGE_SIZE);
-	if (IS_ERR(page)) {
-		r = PTR_ERR(page);
-		goto out;
+	if (IS_ERR(page))
+		return PTR_ERR(page);
+
+	if (kvm_vcpu_write_guest(vcpu, page_addr, page, PAGE_SIZE)) {
+		kfree(page);
+		return 1;
 	}
-	if (kvm_vcpu_write_guest(vcpu, page_addr, page, PAGE_SIZE))
-		goto out_free;
-	r = 0;
-out_free:
-	kfree(page);
-out:
-	return r;
+	return 0;
 }
 
 static inline bool kvm_pv_async_pf_enabled(struct kvm_vcpu *vcpu)
-- 
2.26.2



* [PATCH v5 2/4] KVM: x86: report negative values from wrmsr to userspace
  2020-09-21 13:19 [PATCH v5 0/4] KVM: nSVM: ondemand nested state allocation Maxim Levitsky
  2020-09-21 13:19 ` [PATCH v5 1/4] KVM: x86: xen_hvm_config: cleanup return values Maxim Levitsky
@ 2020-09-21 13:19 ` Maxim Levitsky
  2020-09-21 16:08   ` Sean Christopherson
  2020-09-21 13:19 ` [PATCH v5 3/4] KVM: x86: allow kvm_x86_ops.set_efer to return a value Maxim Levitsky
  2020-09-21 13:19 ` [PATCH v5 4/4] KVM: nSVM: implement ondemand allocation of the nested state Maxim Levitsky
  3 siblings, 1 reply; 8+ messages in thread
From: Maxim Levitsky @ 2020-09-21 13:19 UTC (permalink / raw)
  To: kvm
  Cc: Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner, Maxim Levitsky

This will allow us to make some MSR writes fatal to the guest
(e.g. when an out-of-memory condition occurs).

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/emulate.c | 7 +++++--
 arch/x86/kvm/x86.c     | 5 +++--
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 1d450d7710d63..d855304f5a509 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -3702,13 +3702,16 @@ static int em_dr_write(struct x86_emulate_ctxt *ctxt)
 static int em_wrmsr(struct x86_emulate_ctxt *ctxt)
 {
 	u64 msr_data;
+	int ret;
 
 	msr_data = (u32)reg_read(ctxt, VCPU_REGS_RAX)
 		| ((u64)reg_read(ctxt, VCPU_REGS_RDX) << 32);
-	if (ctxt->ops->set_msr(ctxt, reg_read(ctxt, VCPU_REGS_RCX), msr_data))
+
+	ret = ctxt->ops->set_msr(ctxt, reg_read(ctxt, VCPU_REGS_RCX), msr_data);
+	if (ret > 0)
 		return emulate_gp(ctxt, 0);
 
-	return X86EMUL_CONTINUE;
+	return ret < 0 ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
 }
 
 static int em_rdmsr(struct x86_emulate_ctxt *ctxt)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 063d70e736f7f..b6c67ab7c4f34 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1612,15 +1612,16 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 {
 	u32 ecx = kvm_rcx_read(vcpu);
 	u64 data = kvm_read_edx_eax(vcpu);
+	int ret = kvm_set_msr(vcpu, ecx, data);
 
-	if (kvm_set_msr(vcpu, ecx, data)) {
+	if (ret > 0) {
 		trace_kvm_msr_write_ex(ecx, data);
 		kvm_inject_gp(vcpu, 0);
 		return 1;
 	}
 
 	trace_kvm_msr_write(ecx, data);
-	return kvm_skip_emulated_instruction(vcpu);
+	return ret < 0 ? ret : kvm_skip_emulated_instruction(vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 
-- 
2.26.2



* [PATCH v5 3/4] KVM: x86: allow kvm_x86_ops.set_efer to return a value
  2020-09-21 13:19 [PATCH v5 0/4] KVM: nSVM: ondemand nested state allocation Maxim Levitsky
  2020-09-21 13:19 ` [PATCH v5 1/4] KVM: x86: xen_hvm_config: cleanup return values Maxim Levitsky
  2020-09-21 13:19 ` [PATCH v5 2/4] KVM: x86: report negative values from wrmsr to userspace Maxim Levitsky
@ 2020-09-21 13:19 ` Maxim Levitsky
       [not found]   ` <20200921154151.GA23807@linux.intel.com>
  2020-09-21 13:19 ` [PATCH v5 4/4] KVM: nSVM: implement ondemand allocation of the nested state Maxim Levitsky
  3 siblings, 1 reply; 8+ messages in thread
From: Maxim Levitsky @ 2020-09-21 13:19 UTC (permalink / raw)
  To: kvm
  Cc: Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner, Maxim Levitsky

This will be used later to return an error when setting this MSR fails.

Note that we ignore this return value for QEMU-initiated writes to
avoid breaking backward compatibility.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/svm/svm.c          | 3 ++-
 arch/x86/kvm/svm/svm.h          | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 9 ++++++---
 arch/x86/kvm/x86.c              | 8 +++++++-
 5 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5303dbc5c9bce..b273c199b9a55 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1069,7 +1069,7 @@ struct kvm_x86_ops {
 	void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l);
 	void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0);
 	int (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4);
-	void (*set_efer)(struct kvm_vcpu *vcpu, u64 efer);
+	int (*set_efer)(struct kvm_vcpu *vcpu, u64 efer);
 	void (*get_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3da5b2f1b4a19..18f8af55e970a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -263,7 +263,7 @@ static int get_max_npt_level(void)
 #endif
 }
 
-void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	vcpu->arch.efer = efer;
@@ -283,6 +283,7 @@ void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 
 	svm->vmcb->save.efer = efer | EFER_SVME;
 	vmcb_mark_dirty(svm->vmcb, VMCB_CR);
+	return 0;
 }
 
 static int is_external_interrupt(u32 info)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 45496775f0db2..1e1842de0efe7 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -338,7 +338,7 @@ static inline bool gif_set(struct vcpu_svm *svm)
 #define MSR_INVALID				0xffffffffU
 
 u32 svm_msrpm_offset(u32 msr);
-void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void svm_flush_tlb(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6f9a0c6d5dc59..8aef1926e26be 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2835,13 +2835,15 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 	kvm_mmu_reset_context(vcpu);
 }
 
-void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct shared_msr_entry *msr = find_msr_entry(vmx, MSR_EFER);
 
-	if (!msr)
-		return;
+	if (!msr) {
+		/* Host doesn't support EFER, nothing to do */
+		return 0;
+	}
 
 	vcpu->arch.efer = efer;
 	if (efer & EFER_LMA) {
@@ -2853,6 +2855,7 @@ void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 		msr->data = efer & ~EFER_LME;
 	}
 	setup_msrs(vmx);
+	return 0;
 }
 
 #ifdef CONFIG_X86_64
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b6c67ab7c4f34..cab189a71cbb7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1456,6 +1456,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	u64 old_efer = vcpu->arch.efer;
 	u64 efer = msr_info->data;
+	int r;
 
 	if (efer & efer_reserved_bits)
 		return 1;
@@ -1472,7 +1473,12 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	efer &= ~EFER_LMA;
 	efer |= vcpu->arch.efer & EFER_LMA;
 
-	kvm_x86_ops.set_efer(vcpu, efer);
+	r = kvm_x86_ops.set_efer(vcpu, efer);
+
+	if (r && !msr_info->host_initiated) {
+		WARN_ON(r > 0);
+		return r;
+	}
 
 	/* Update reserved bits */
 	if ((efer ^ old_efer) & EFER_NX)
-- 
2.26.2



* [PATCH v5 4/4] KVM: nSVM: implement ondemand allocation of the nested state
  2020-09-21 13:19 [PATCH v5 0/4] KVM: nSVM: ondemand nested state allocation Maxim Levitsky
                   ` (2 preceding siblings ...)
  2020-09-21 13:19 ` [PATCH v5 3/4] KVM: x86: allow kvm_x86_ops.set_efer to return a value Maxim Levitsky
@ 2020-09-21 13:19 ` Maxim Levitsky
  3 siblings, 0 replies; 8+ messages in thread
From: Maxim Levitsky @ 2020-09-21 13:19 UTC (permalink / raw)
  To: kvm
  Cc: Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner, Maxim Levitsky

This way we don't waste memory on VMs that don't use nested
virtualization, even when it is available to them.

If allocation of the nested state fails (which should only happen
when the host is about to OOM anyway), the EFER write fails with a
negative error that is propagated to userspace.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 42 ++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c    | 55 ++++++++++++++++++++++-----------------
 arch/x86/kvm/svm/svm.h    |  6 +++++
 3 files changed, 79 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 09417f5197410..dd13856818a03 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -467,6 +467,9 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 
 	vmcb12 = map.hva;
 
+	if (WARN_ON(!svm->nested.initialized))
+		return 1;
+
 	if (!nested_vmcb_checks(svm, vmcb12)) {
 		vmcb12->control.exit_code    = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
@@ -684,6 +687,45 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	return 0;
 }
 
+int svm_allocate_nested(struct vcpu_svm *svm)
+{
+	struct page *hsave_page;
+
+	if (svm->nested.initialized)
+		return 0;
+
+	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+	if (!hsave_page)
+		return -ENOMEM;
+
+	svm->nested.hsave = page_address(hsave_page);
+
+	svm->nested.msrpm = svm_vcpu_init_msrpm();
+	if (!svm->nested.msrpm)
+		goto err_free_hsave;
+
+	svm->nested.initialized = true;
+	return 0;
+
+err_free_hsave:
+	__free_page(hsave_page);
+	return -ENOMEM;
+}
+
+void svm_free_nested(struct vcpu_svm *svm)
+{
+	if (!svm->nested.initialized)
+		return;
+
+	svm_vcpu_free_msrpm(svm->nested.msrpm);
+	svm->nested.msrpm = NULL;
+
+	__free_page(virt_to_page(svm->nested.hsave));
+	svm->nested.hsave = NULL;
+
+	svm->nested.initialized = false;
+}
+
 /*
  * Forcibly leave nested mode in order to be able to reset the VCPU later on.
  */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 18f8af55e970a..a77a95bff7d0a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -266,6 +266,7 @@ static int get_max_npt_level(void)
 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	u64 old_efer = vcpu->arch.efer;
 	vcpu->arch.efer = efer;
 
 	if (!npt_enabled) {
@@ -276,9 +277,27 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 			efer &= ~EFER_LME;
 	}
 
-	if (!(efer & EFER_SVME)) {
-		svm_leave_nested(svm);
-		svm_set_gif(svm, true);
+	if ((old_efer & EFER_SVME) != (efer & EFER_SVME)) {
+		if (!(efer & EFER_SVME)) {
+			svm_leave_nested(svm);
+			svm_set_gif(svm, true);
+
+			/*
+			 * Free the nested state unless we are in SMM, in which
+			 * case the exit from SVM mode is only for the duration of
+			 * the SMI handler
+			 */
+			if (!is_smm(&svm->vcpu))
+				svm_free_nested(svm);
+
+		} else {
+			int ret = svm_allocate_nested(svm);
+
+			if (ret) {
+				vcpu->arch.efer = old_efer;
+				return ret;
+			}
+		}
 	}
 
 	svm->vmcb->save.efer = efer | EFER_SVME;
@@ -610,7 +629,7 @@ static void set_msr_interception(u32 *msrpm, unsigned msr,
 	msrpm[offset] = tmp;
 }
 
-static u32 *svm_vcpu_init_msrpm(void)
+u32 *svm_vcpu_init_msrpm(void)
 {
 	int i;
 	u32 *msrpm;
@@ -630,7 +649,7 @@ static u32 *svm_vcpu_init_msrpm(void)
 	return msrpm;
 }
 
-static void svm_vcpu_free_msrpm(u32 *msrpm)
+void svm_vcpu_free_msrpm(u32 *msrpm)
 {
 	__free_pages(virt_to_page(msrpm), MSRPM_ALLOC_ORDER);
 }
@@ -1204,7 +1223,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm;
 	struct page *vmcb_page;
-	struct page *hsave_page;
 	int err;
 
 	BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
@@ -1215,13 +1233,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	if (!vmcb_page)
 		goto out;
 
-	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
-	if (!hsave_page)
-		goto error_free_vmcb_page;
-
 	err = avic_init_vcpu(svm);
 	if (err)
-		goto error_free_hsave_page;
+		goto out;
 
 	/* We initialize this flag to true to make sure that the is_running
 	 * bit would be set the first time the vcpu is loaded.
@@ -1229,15 +1243,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	if (irqchip_in_kernel(vcpu->kvm) && kvm_apicv_activated(vcpu->kvm))
 		svm->avic_is_running = true;
 
-	svm->nested.hsave = page_address(hsave_page);
-
 	svm->msrpm = svm_vcpu_init_msrpm();
 	if (!svm->msrpm)
-		goto error_free_hsave_page;
-
-	svm->nested.msrpm = svm_vcpu_init_msrpm();
-	if (!svm->nested.msrpm)
-		goto error_free_msrpm;
+		goto error_free_vmcb_page;
 
 	svm->vmcb = page_address(vmcb_page);
 	svm->vmcb_pa = __sme_set(page_to_pfn(vmcb_page) << PAGE_SHIFT);
@@ -1249,10 +1257,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 
 	return 0;
 
-error_free_msrpm:
-	svm_vcpu_free_msrpm(svm->msrpm);
-error_free_hsave_page:
-	__free_page(hsave_page);
 error_free_vmcb_page:
 	__free_page(vmcb_page);
 out:
@@ -1278,10 +1282,10 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	 */
 	svm_clear_current_vmcb(svm->vmcb);
 
+	svm_free_nested(svm);
+
 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
-	__free_page(virt_to_page(svm->nested.hsave));
-	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
@@ -3964,6 +3968,9 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
 				return 1;
 
+			if (svm_allocate_nested(svm))
+				return 1;
+
 			ret = enter_svm_guest_mode(svm, vmcb12_gpa, map.hva);
 			kvm_vcpu_unmap(&svm->vcpu, &map, true);
 		}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 1e1842de0efe7..10453abc5bed3 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -96,6 +96,8 @@ struct svm_nested_state {
 
 	/* cache for control fields of the guest */
 	struct vmcb_control_area ctl;
+
+	bool initialized;
 };
 
 struct vcpu_svm {
@@ -339,6 +341,8 @@ static inline bool gif_set(struct vcpu_svm *svm)
 
 u32 svm_msrpm_offset(u32 msr);
 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+u32 *svm_vcpu_init_msrpm(void);
+void svm_vcpu_free_msrpm(u32 *msrpm);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void svm_flush_tlb(struct kvm_vcpu *vcpu);
@@ -379,6 +383,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 			 struct vmcb *nested_vmcb);
 void svm_leave_nested(struct vcpu_svm *svm);
+void svm_free_nested(struct vcpu_svm *svm);
+int svm_allocate_nested(struct vcpu_svm *svm);
 int nested_svm_vmrun(struct vcpu_svm *svm);
 void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb);
 int nested_svm_vmexit(struct vcpu_svm *svm);
-- 
2.26.2



* Re: [PATCH v5 2/4] KVM: x86: report negative values from wrmsr to userspace
  2020-09-21 13:19 ` [PATCH v5 2/4] KVM: x86: report negative values from wrmsr to userspace Maxim Levitsky
@ 2020-09-21 16:08   ` Sean Christopherson
  2020-09-22 16:13     ` Maxim Levitsky
  0 siblings, 1 reply; 8+ messages in thread
From: Sean Christopherson @ 2020-09-21 16:08 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner

On Mon, Sep 21, 2020 at 04:19:21PM +0300, Maxim Levitsky wrote:
> This will allow us to make some MSR writes fatal to the guest
> (e.g when out of memory condition occurs)
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/emulate.c | 7 +++++--
>  arch/x86/kvm/x86.c     | 5 +++--
>  2 files changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 1d450d7710d63..d855304f5a509 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -3702,13 +3702,16 @@ static int em_dr_write(struct x86_emulate_ctxt *ctxt)
>  static int em_wrmsr(struct x86_emulate_ctxt *ctxt)
>  {
>  	u64 msr_data;
> +	int ret;
>  
>  	msr_data = (u32)reg_read(ctxt, VCPU_REGS_RAX)
>  		| ((u64)reg_read(ctxt, VCPU_REGS_RDX) << 32);
> -	if (ctxt->ops->set_msr(ctxt, reg_read(ctxt, VCPU_REGS_RCX), msr_data))
> +
> +	ret = ctxt->ops->set_msr(ctxt, reg_read(ctxt, VCPU_REGS_RCX), msr_data);
> +	if (ret > 0)
>  		return emulate_gp(ctxt, 0);
>  
> -	return X86EMUL_CONTINUE;
> +	return ret < 0 ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
>  }
>  
>  static int em_rdmsr(struct x86_emulate_ctxt *ctxt)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 063d70e736f7f..b6c67ab7c4f34 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1612,15 +1612,16 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
>  {
>  	u32 ecx = kvm_rcx_read(vcpu);
>  	u64 data = kvm_read_edx_eax(vcpu);
> +	int ret = kvm_set_msr(vcpu, ecx, data);
>  
> -	if (kvm_set_msr(vcpu, ecx, data)) {
> +	if (ret > 0) {
>  		trace_kvm_msr_write_ex(ecx, data);
>  		kvm_inject_gp(vcpu, 0);
>  		return 1;
>  	}
>  
>  	trace_kvm_msr_write(ecx, data);

Tracing the access as non-faulting feels wrong.  The WRMSR has not completed,
e.g. if userspace cleanly handles -ENOMEM and restarts the guest, KVM would
trace the WRMSR twice.

What about:

	int ret = kvm_set_msr(vcpu, ecx, data);

	if (ret < 0)
		return ret;

	if (ret) {
		trace_kvm_msr_write_ex(ecx, data);
		kvm_inject_gp(vcpu, 0);
		return 1;
	}

	trace_kvm_msr_write(ecx, data);
	return kvm_skip_emulated_instruction(vcpu);

> -	return kvm_skip_emulated_instruction(vcpu);
> +	return ret < 0 ? ret : kvm_skip_emulated_instruction(vcpu);
>  }
>  EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
>  
> -- 
> 2.26.2
> 


* Re: [PATCH v5 3/4] KVM: x86: allow kvm_x86_ops.set_efer to return a value
       [not found]   ` <20200921154151.GA23807@linux.intel.com>
@ 2020-09-22 16:05     ` Maxim Levitsky
  0 siblings, 0 replies; 8+ messages in thread
From: Maxim Levitsky @ 2020-09-22 16:05 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner

On Mon, 2020-09-21 at 08:41 -0700, Sean Christopherson wrote:
> On Mon, Sep 21, 2020 at 04:19:22PM +0300, Maxim Levitsky wrote:
> > This will be used later to return an error when setting this msr fails.
> > 
> > Note that we ignore this return value for qemu initiated writes to
> > avoid breaking backward compatibility.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -2835,13 +2835,15 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
> >  	kvm_mmu_reset_context(vcpu);
> >  }
> >  
> > -void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
> > +int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
> >  {
> >  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> >  	struct shared_msr_entry *msr = find_msr_entry(vmx, MSR_EFER);
> >  
> > -	if (!msr)
> > -		return;
> > +	if (!msr) {
> > +		/* Host doesn't support EFER, nothing to do */
> > +		return 0;
> > +	}
> 
> Kernel style is to omit braces, even with a line comment.  Though I would
> do something like so to avoid the question.
I didn't know this, but next time I'll take it into account!

> 
> 	/* Nothing to do if hardware doesn't support EFER. */
> 	if (!msr)
> 		return 0;
I'll do this.

> >  
> >  	vcpu->arch.efer = efer;
> >  	if (efer & EFER_LMA) {
> > @@ -2853,6 +2855,7 @@ void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
> >  		msr->data = efer & ~EFER_LME;
> >  	}
> >  	setup_msrs(vmx);
> > +	return 0;
> >  }
> >  
> >  #ifdef CONFIG_X86_64
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index b6c67ab7c4f34..cab189a71cbb7 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -1456,6 +1456,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  {
> >  	u64 old_efer = vcpu->arch.efer;
> >  	u64 efer = msr_info->data;
> > +	int r;
> >  
> >  	if (efer & efer_reserved_bits)
> >  		return 1;
> > @@ -1472,7 +1473,12 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  	efer &= ~EFER_LMA;
> >  	efer |= vcpu->arch.efer & EFER_LMA;
> >  
> > -	kvm_x86_ops.set_efer(vcpu, efer);
> > +	r = kvm_x86_ops.set_efer(vcpu, efer);
> > +
> > +	if (r && !msr_info->host_initiated) {
> 
> I get the desire to not break backwards compatibility, but this feels all
> kinds of wrong, and potentially dangerous as it will leave KVM in a mixed state.
> E.g. vcpu->arch.efer will show that nSVM is enabled, but SVM will not have
> the necessary tracking state allocated.  That could lead to a userspace
> triggerable #GP/panic.
Actually I take care to restore vcpu->arch.efer to its old value if an
error happens, so in case of failure everything indicates that nothing
happened, and the offending EFER write can even be retried.

However, since we agreed that .set_efer will only fail with negative
errors like -ENOMEM, I agree that there is no reason to treat userspace
writes differently. This code is actually a leftover from a previous
version, which I should have removed.

I'll send a new version soon.

Thanks for the review,
	Best regards,
		Maxim Levitsky

> 
> Is ignoring the OOM scenario really considered backwards compatibility?  The VM
> is probably hosed if KVM returns -ENOMEM, e.g. a sophisticated userspace
> stack could trigger the OOM killer to free memory and resume the VM.  On the
> other hand, the VM is most definitely hosed if KVM ignores the error and
> puts itself into an invalid state.
> 
> > +		WARN_ON(r > 0);
> > +		return r;
> > +	}
> >  
> >  	/* Update reserved bits */
> >  	if ((efer ^ old_efer) & EFER_NX)
> > -- 
> > 2.26.2
> > 




* Re: [PATCH v5 2/4] KVM: x86: report negative values from wrmsr to userspace
  2020-09-21 16:08   ` Sean Christopherson
@ 2020-09-22 16:13     ` Maxim Levitsky
  0 siblings, 0 replies; 8+ messages in thread
From: Maxim Levitsky @ 2020-09-22 16:13 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Borislav Petkov, Jim Mattson, linux-kernel,
	Paolo Bonzini, Thomas Gleixner

On Mon, 2020-09-21 at 09:08 -0700, Sean Christopherson wrote:
> On Mon, Sep 21, 2020 at 04:19:21PM +0300, Maxim Levitsky wrote:
> > This will allow us to make some MSR writes fatal to the guest
> > (e.g when out of memory condition occurs)
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/emulate.c | 7 +++++--
> >  arch/x86/kvm/x86.c     | 5 +++--
> >  2 files changed, 8 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> > index 1d450d7710d63..d855304f5a509 100644
> > --- a/arch/x86/kvm/emulate.c
> > +++ b/arch/x86/kvm/emulate.c
> > @@ -3702,13 +3702,16 @@ static int em_dr_write(struct x86_emulate_ctxt *ctxt)
> >  static int em_wrmsr(struct x86_emulate_ctxt *ctxt)
> >  {
> >  	u64 msr_data;
> > +	int ret;
> >  
> >  	msr_data = (u32)reg_read(ctxt, VCPU_REGS_RAX)
> >  		| ((u64)reg_read(ctxt, VCPU_REGS_RDX) << 32);
> > -	if (ctxt->ops->set_msr(ctxt, reg_read(ctxt, VCPU_REGS_RCX), msr_data))
> > +
> > +	ret = ctxt->ops->set_msr(ctxt, reg_read(ctxt, VCPU_REGS_RCX), msr_data);
> > +	if (ret > 0)
> >  		return emulate_gp(ctxt, 0);
> >  
> > -	return X86EMUL_CONTINUE;
> > +	return ret < 0 ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
> >  }
> >  
> >  static int em_rdmsr(struct x86_emulate_ctxt *ctxt)
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 063d70e736f7f..b6c67ab7c4f34 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -1612,15 +1612,16 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
> >  {
> >  	u32 ecx = kvm_rcx_read(vcpu);
> >  	u64 data = kvm_read_edx_eax(vcpu);
> > +	int ret = kvm_set_msr(vcpu, ecx, data);
> >  
> > -	if (kvm_set_msr(vcpu, ecx, data)) {
> > +	if (ret > 0) {
> >  		trace_kvm_msr_write_ex(ecx, data);
> >  		kvm_inject_gp(vcpu, 0);
> >  		return 1;
> >  	}
> >  
> >  	trace_kvm_msr_write(ecx, data);
> 
> Tracing the access as non-faulting feels wrong.  The WRMSR has not completed,
> e.g. if userspace cleanly handles -ENOMEM and restarts the guest, KVM would
> trace the WRMSR twice.

I guess you are right. In this case we didn't actually execute the
instruction (an exception can also be thought of as execution of an
instruction, since it leads to the exception handler); here we just
fail and let userspace do something so that we can restart from the
same point again.

So I'll go with your suggestion.

Thanks for the review,
	Best regards,
		Maxim Levitsky

> 
> What about:
> 
> 	int ret = kvm_set_msr(vcpu, ecx, data);
> 
> 	if (ret < 0)
> 		return ret;
> 
> 	if (ret) {
> 		trace_kvm_msr_write_ex(ecx, data);
> 		kvm_inject_gp(vcpu, 0);
> 		return 1;
> 	}
> 
> 	trace_kvm_msr_write(ecx, data);
> 	return kvm_skip_emulated_instruction(vcpu);
> 
> > -	return kvm_skip_emulated_instruction(vcpu);
> > +	return ret < 0 ? ret : kvm_skip_emulated_instruction(vcpu);
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
> >  
> > -- 
> > 2.26.2
> > 


