* [PATCH 0/4] KVM: nSVM/nVMX: fix nested virtualization treatment of nested exceptions
@ 2021-04-01 14:38 Maxim Levitsky
  2021-04-01 14:38 ` [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event Maxim Levitsky
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Maxim Levitsky @ 2021-04-01 14:38 UTC (permalink / raw)
  To: kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
	Jim Mattson, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson, Maxim Levitsky

clone of "kernel-starship-5.12.unstable"

Maxim Levitsky (4):
  KVM: x86: pending exceptions must not be blocked by an injected event
  KVM: x86: separate pending and injected exception
  KVM: x86: correctly merge pending and injected exception
  KVM: x86: remove tweaking of inject_page_fault

 arch/x86/include/asm/kvm_host.h |  34 +++-
 arch/x86/kvm/svm/nested.c       |  65 +++----
 arch/x86/kvm/svm/svm.c          |   8 +-
 arch/x86/kvm/vmx/nested.c       | 107 +++++------
 arch/x86/kvm/vmx/vmx.c          |  14 +-
 arch/x86/kvm/x86.c              | 302 ++++++++++++++++++--------------
 arch/x86/kvm/x86.h              |   6 +-
 7 files changed, 283 insertions(+), 253 deletions(-)

-- 
2.26.2




* [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event
  2021-04-01 14:38 [PATCH 0/4] KVM: nSVM/nVMX: fix nested virtualization treatment of nested exceptions Maxim Levitsky
@ 2021-04-01 14:38 ` Maxim Levitsky
  2021-04-01 17:05   ` Paolo Bonzini
  2021-04-01 14:38 ` [PATCH 2/4] KVM: x86: separate pending and injected exception Maxim Levitsky
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Maxim Levitsky @ 2021-04-01 14:38 UTC (permalink / raw)
  To: kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
	Jim Mattson, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson, Maxim Levitsky

An injected interrupt/NMI should not block a pending exception;
it should rather either be lost if the nested hypervisor doesn't
intercept the pending exception (as on stock x86), or be delivered
in the EXITINTINFO/IDT_VECTORING_INFO field, as part of the VMexit
that corresponds to the pending exception.

The only reason for a pending exception to be blocked is a pending
nested run (which can't really happen currently,
but is still worth checking for).
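
To illustrate, here is a minimal sketch (not part of this patch; the
field and constant names follow the existing KVM/SVM sources, the
helper itself is hypothetical) of how an already injected event reaches
L1 in EXITINTINFO when the pending exception is turned into a VMexit:

	/*
	 * Sketch only: when a VMexit is synthesized for the pending
	 * exception, an event that was already injected into the guest
	 * is reported back to L1 instead of being silently dropped.
	 */
	static void save_injected_event(struct kvm_vcpu *vcpu,
					struct vmcb *vmcb12)
	{
		u32 exit_int_info = 0;

		if (vcpu->arch.interrupt.injected)
			exit_int_info = vcpu->arch.interrupt.nr |
					SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR;
		else if (vcpu->arch.nmi_injected)
			exit_int_info = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI;

		vmcb12->control.exit_int_info = exit_int_info;
	}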

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c |  8 +++++++-
 arch/x86/kvm/vmx/nested.c | 10 ++++++++--
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 8523f60adb92..34a37b2bd486 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1062,7 +1062,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 	}
 
 	if (vcpu->arch.exception.pending) {
-		if (block_nested_events)
+		/*
+		 * Only a pending nested run can block a pending exception.
+		 * Otherwise an injected NMI/interrupt should either be
+		 * lost or delivered to the nested hypervisor in the EXITINTINFO
+		 * vmcb field, while delivering the pending exception.
+		 */
+		if (svm->nested.nested_run_pending)
                         return -EBUSY;
 		if (!nested_exit_on_exception(svm))
 			return 0;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index fd334e4aa6db..c3ba842fc07f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3806,9 +3806,15 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 
 	/*
 	 * Process any exceptions that are not debug traps before MTF.
+	 *
+	 * Note that only a pending nested run can block a pending exception.
+	 * Otherwise an injected NMI/interrupt should either be
+	 * lost or delivered to the nested hypervisor in the IDT_VECTORING_INFO,
+	 * while delivering the pending exception.
 	 */
+
 	if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) {
-		if (block_nested_events)
+		if (vmx->nested.nested_run_pending)
 			return -EBUSY;
 		if (!nested_vmx_check_exception(vcpu, &exit_qual))
 			goto no_vmexit;
@@ -3825,7 +3831,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 	}
 
 	if (vcpu->arch.exception.pending) {
-		if (block_nested_events)
+		if (vmx->nested.nested_run_pending)
 			return -EBUSY;
 		if (!nested_vmx_check_exception(vcpu, &exit_qual))
 			goto no_vmexit;
-- 
2.26.2



* [PATCH 2/4] KVM: x86: separate pending and injected exception
  2021-04-01 14:38 [PATCH 0/4] KVM: nSVM/nVMX: fix nested virtualization treatment of nested exceptions Maxim Levitsky
  2021-04-01 14:38 ` [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event Maxim Levitsky
@ 2021-04-01 14:38 ` Maxim Levitsky
  2021-04-01 23:05   ` Sean Christopherson
  2021-04-01 14:38 ` [PATCH 3/4] KVM: x86: correctly merge " Maxim Levitsky
  2021-04-01 14:38 ` [PATCH 4/4] KVM: x86: remove tweaking of inject_page_fault Maxim Levitsky
  3 siblings, 1 reply; 12+ messages in thread
From: Maxim Levitsky @ 2021-04-01 14:38 UTC (permalink / raw)
  To: kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
	Jim Mattson, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson, Maxim Levitsky

Use separate 'pending_exception' and 'injected_exception' fields
to store the pending and the injected exception.

After this patch still only one of them can be active at a time,
but the next patch allows both to co-exist in some cases.
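
As a rough sketch of the resulting flow (mirroring what
inject_pending_event() does after this patch; the helper is
illustrative, not literal kernel code):

	/*
	 * Sketch only: a newly raised exception always occupies the
	 * pending slot and is copied into the injected slot at the
	 * moment it is actually injected into the guest.
	 */
	static void example_injection_step(struct kvm_vcpu *vcpu)
	{
		if (!vcpu->arch.pending_exception.valid)
			return;

		vcpu->arch.injected_exception = vcpu->arch.pending_exception;
		vcpu->arch.pending_exception.valid = false;

		/* from here on the hardware event-injection field is
		 * programmed from injected_exception */
	}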

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  25 ++++--
 arch/x86/kvm/svm/nested.c       |  26 +++---
 arch/x86/kvm/svm/svm.c          |   6 +-
 arch/x86/kvm/vmx/nested.c       |  36 ++++----
 arch/x86/kvm/vmx/vmx.c          |  12 +--
 arch/x86/kvm/x86.c              | 145 ++++++++++++++++++--------------
 arch/x86/kvm/x86.h              |   6 +-
 7 files changed, 143 insertions(+), 113 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a52f973bdff6..3b2fd276e8d5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -547,6 +547,14 @@ struct kvm_vcpu_xen {
 	u64 runstate_times[4];
 };
 
+struct kvm_queued_exception {
+	bool valid;
+	u8 nr;
+	bool has_error_code;
+	u32 error_code;
+};
+
+
 struct kvm_vcpu_arch {
 	/*
 	 * rip and regs accesses must go through
@@ -645,16 +653,15 @@ struct kvm_vcpu_arch {
 
 	u8 event_exit_inst_len;
 
-	struct kvm_queued_exception {
-		bool pending;
-		bool injected;
-		bool has_error_code;
-		u8 nr;
-		u32 error_code;
-		unsigned long payload;
-		bool has_payload;
+	struct kvm_queued_exception pending_exception;
+
+	struct kvm_exception_payload {
+		bool valid;
+		unsigned long value;
 		u8 nested_apf;
-	} exception;
+	} exception_payload;
+
+	struct kvm_queued_exception injected_exception;
 
 	struct kvm_queued_interrupt {
 		bool injected;
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 34a37b2bd486..7adad9b6dcad 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -349,14 +349,14 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
 	u32 exit_int_info = 0;
 	unsigned int nr;
 
-	if (vcpu->arch.exception.injected) {
-		nr = vcpu->arch.exception.nr;
+	if (vcpu->arch.injected_exception.valid) {
+		nr = vcpu->arch.injected_exception.nr;
 		exit_int_info = nr | SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_EXEPT;
 
-		if (vcpu->arch.exception.has_error_code) {
+		if (vcpu->arch.injected_exception.has_error_code) {
 			exit_int_info |= SVM_EVTINJ_VALID_ERR;
 			vmcb12->control.exit_int_info_err =
-				vcpu->arch.exception.error_code;
+				vcpu->arch.injected_exception.error_code;
 		}
 
 	} else if (vcpu->arch.nmi_injected) {
@@ -1000,30 +1000,30 @@ int nested_svm_check_permissions(struct kvm_vcpu *vcpu)
 
 static bool nested_exit_on_exception(struct vcpu_svm *svm)
 {
-	unsigned int nr = svm->vcpu.arch.exception.nr;
+	unsigned int nr = svm->vcpu.arch.pending_exception.nr;
 
 	return (svm->nested.ctl.intercepts[INTERCEPT_EXCEPTION] & BIT(nr));
 }
 
 static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm)
 {
-	unsigned int nr = svm->vcpu.arch.exception.nr;
+	unsigned int nr = svm->vcpu.arch.pending_exception.nr;
 
 	svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
 	svm->vmcb->control.exit_code_hi = 0;
 
-	if (svm->vcpu.arch.exception.has_error_code)
-		svm->vmcb->control.exit_info_1 = svm->vcpu.arch.exception.error_code;
+	if (svm->vcpu.arch.pending_exception.has_error_code)
+		svm->vmcb->control.exit_info_1 = svm->vcpu.arch.pending_exception.error_code;
 
 	/*
 	 * EXITINFO2 is undefined for all exception intercepts other
 	 * than #PF.
 	 */
 	if (nr == PF_VECTOR) {
-		if (svm->vcpu.arch.exception.nested_apf)
+		if (svm->vcpu.arch.exception_payload.nested_apf)
 			svm->vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token;
-		else if (svm->vcpu.arch.exception.has_payload)
-			svm->vmcb->control.exit_info_2 = svm->vcpu.arch.exception.payload;
+		else if (svm->vcpu.arch.exception_payload.valid)
+			svm->vmcb->control.exit_info_2 = svm->vcpu.arch.exception_payload.value;
 		else
 			svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
 	} else if (nr == DB_VECTOR) {
@@ -1034,7 +1034,7 @@ static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm)
 			kvm_update_dr7(&svm->vcpu);
 		}
 	} else
-		WARN_ON(svm->vcpu.arch.exception.has_payload);
+		WARN_ON(svm->vcpu.arch.exception_payload.valid);
 
 	nested_svm_vmexit(svm);
 }
@@ -1061,7 +1061,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (vcpu->arch.exception.pending) {
+	if (vcpu->arch.pending_exception.valid) {
 		/*
 		 * Only a pending nested run can block a pending exception.
 		 * Otherwise an injected NMI/interrupt should either be
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 271196400495..90b541138c5a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -359,9 +359,9 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 static void svm_queue_exception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	unsigned nr = vcpu->arch.exception.nr;
-	bool has_error_code = vcpu->arch.exception.has_error_code;
-	u32 error_code = vcpu->arch.exception.error_code;
+	unsigned int nr = vcpu->arch.injected_exception.nr;
+	bool has_error_code = vcpu->arch.injected_exception.has_error_code;
+	u32 error_code = vcpu->arch.injected_exception.error_code;
 
 	kvm_deliver_exception_payload(vcpu);
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c3ba842fc07f..5d54fecff9a7 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -388,17 +388,17 @@ static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
 static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit_qual)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
-	unsigned int nr = vcpu->arch.exception.nr;
-	bool has_payload = vcpu->arch.exception.has_payload;
-	unsigned long payload = vcpu->arch.exception.payload;
+	unsigned int nr = vcpu->arch.pending_exception.nr;
+	bool has_payload = vcpu->arch.exception_payload.valid;
+	unsigned long payload = vcpu->arch.exception_payload.value;
 
 	if (nr == PF_VECTOR) {
-		if (vcpu->arch.exception.nested_apf) {
+		if (vcpu->arch.exception_payload.nested_apf) {
 			*exit_qual = vcpu->arch.apf.nested_apf_token;
 			return 1;
 		}
 		if (nested_vmx_is_page_fault_vmexit(vmcs12,
-						    vcpu->arch.exception.error_code)) {
+						    vcpu->arch.pending_exception.error_code)) {
 			*exit_qual = has_payload ? payload : vcpu->arch.cr2;
 			return 1;
 		}
@@ -3617,8 +3617,8 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
 	u32 idt_vectoring;
 	unsigned int nr;
 
-	if (vcpu->arch.exception.injected) {
-		nr = vcpu->arch.exception.nr;
+	if (vcpu->arch.injected_exception.valid) {
+		nr = vcpu->arch.injected_exception.nr;
 		idt_vectoring = nr | VECTORING_INFO_VALID_MASK;
 
 		if (kvm_exception_is_soft(nr)) {
@@ -3628,10 +3628,10 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
 		} else
 			idt_vectoring |= INTR_TYPE_HARD_EXCEPTION;
 
-		if (vcpu->arch.exception.has_error_code) {
+		if (vcpu->arch.injected_exception.has_error_code) {
 			idt_vectoring |= VECTORING_INFO_DELIVER_CODE_MASK;
 			vmcs12->idt_vectoring_error_code =
-				vcpu->arch.exception.error_code;
+				vcpu->arch.injected_exception.error_code;
 		}
 
 		vmcs12->idt_vectoring_info_field = idt_vectoring;
@@ -3712,11 +3712,11 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
 					       unsigned long exit_qual)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
-	unsigned int nr = vcpu->arch.exception.nr;
+	unsigned int nr = vcpu->arch.pending_exception.nr;
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
-	if (vcpu->arch.exception.has_error_code) {
-		vmcs12->vm_exit_intr_error_code = vcpu->arch.exception.error_code;
+	if (vcpu->arch.pending_exception.has_error_code) {
+		vmcs12->vm_exit_intr_error_code = vcpu->arch.pending_exception.error_code;
 		intr_info |= INTR_INFO_DELIVER_CODE_MASK;
 	}
 
@@ -3740,9 +3740,9 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
  */
 static inline bool vmx_pending_dbg_trap(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.exception.pending &&
-			vcpu->arch.exception.nr == DB_VECTOR &&
-			vcpu->arch.exception.payload;
+	return vcpu->arch.pending_exception.valid &&
+			vcpu->arch.pending_exception.nr == DB_VECTOR &&
+			vcpu->arch.exception_payload.value;
 }
 
 /*
@@ -3756,7 +3756,7 @@ static void nested_vmx_update_pending_dbg(struct kvm_vcpu *vcpu)
 {
 	if (vmx_pending_dbg_trap(vcpu))
 		vmcs_writel(GUEST_PENDING_DBG_EXCEPTIONS,
-			    vcpu->arch.exception.payload);
+			    vcpu->arch.exception_payload.value);
 }
 
 static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
@@ -3813,7 +3813,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 	 * while delivering the pending exception.
 	 */
 
-	if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) {
+	if (vcpu->arch.pending_exception.valid && !vmx_pending_dbg_trap(vcpu)) {
 		if (vmx->nested.nested_run_pending)
 			return -EBUSY;
 		if (!nested_vmx_check_exception(vcpu, &exit_qual))
@@ -3830,7 +3830,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (vcpu->arch.exception.pending) {
+	if (vcpu->arch.pending_exception.valid) {
 		if (vmx->nested.nested_run_pending)
 			return -EBUSY;
 		if (!nested_vmx_check_exception(vcpu, &exit_qual))
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c8a4a548e96b..a9b241d2b271 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1648,8 +1648,8 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 	 * vmx_check_nested_events().
 	 */
 	if (nested_cpu_has_mtf(vmcs12) &&
-	    (!vcpu->arch.exception.pending ||
-	     vcpu->arch.exception.nr == DB_VECTOR))
+	    (!vcpu->arch.pending_exception.valid ||
+	     vcpu->arch.pending_exception.nr == DB_VECTOR))
 		vmx->nested.mtf_pending = true;
 	else
 		vmx->nested.mtf_pending = false;
@@ -1677,9 +1677,9 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
 static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned nr = vcpu->arch.exception.nr;
-	bool has_error_code = vcpu->arch.exception.has_error_code;
-	u32 error_code = vcpu->arch.exception.error_code;
+	unsigned int nr = vcpu->arch.injected_exception.nr;
+	bool has_error_code = vcpu->arch.injected_exception.has_error_code;
+	u32 error_code = vcpu->arch.injected_exception.error_code;
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
 	kvm_deliver_exception_payload(vcpu);
@@ -5397,7 +5397,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
 			return 0;
 
 		if (vmx->emulation_required && !vmx->rmode.vm86_active &&
-		    vcpu->arch.exception.pending) {
+		    vcpu->arch.pending_exception.valid) {
 			vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 			vcpu->run->internal.suberror =
 						KVM_INTERNAL_ERROR_EMULATION;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a9d95f90a048..493d87b0c2d5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -481,9 +481,9 @@ static int exception_type(int vector)
 
 void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
 {
-	unsigned nr = vcpu->arch.exception.nr;
-	bool has_payload = vcpu->arch.exception.has_payload;
-	unsigned long payload = vcpu->arch.exception.payload;
+	unsigned int nr = vcpu->arch.pending_exception.nr;
+	bool has_payload = vcpu->arch.exception_payload.valid;
+	unsigned long payload = vcpu->arch.exception_payload.value;
 
 	if (!has_payload)
 		return;
@@ -529,8 +529,8 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
 		break;
 	}
 
-	vcpu->arch.exception.has_payload = false;
-	vcpu->arch.exception.payload = 0;
+	vcpu->arch.exception_payload.valid = false;
+	vcpu->arch.exception_payload.value = 0;
 }
 EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload);
 
@@ -543,7 +543,7 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
-	if (!vcpu->arch.exception.pending && !vcpu->arch.exception.injected) {
+	if (!vcpu->arch.pending_exception.valid && !vcpu->arch.injected_exception.valid) {
 	queue:
 		if (reinject) {
 			/*
@@ -554,8 +554,7 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 			 * and the guest shouldn't proceed far enough to
 			 * need reinjection.
 			 */
-			WARN_ON_ONCE(vcpu->arch.exception.pending);
-			vcpu->arch.exception.injected = true;
+			WARN_ON_ONCE(vcpu->arch.pending_exception.valid);
 			if (WARN_ON_ONCE(has_payload)) {
 				/*
 				 * A reinjected event has already
@@ -564,22 +563,29 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 				has_payload = false;
 				payload = 0;
 			}
+
+			vcpu->arch.injected_exception.valid = true;
+			vcpu->arch.injected_exception.has_error_code = has_error;
+			vcpu->arch.injected_exception.nr = nr;
+			vcpu->arch.injected_exception.error_code = error_code;
+
 		} else {
-			vcpu->arch.exception.pending = true;
-			vcpu->arch.exception.injected = false;
+			vcpu->arch.pending_exception.valid = true;
+			vcpu->arch.injected_exception.valid = false;
+			vcpu->arch.pending_exception.has_error_code = has_error;
+			vcpu->arch.pending_exception.nr = nr;
+			vcpu->arch.pending_exception.error_code = error_code;
 		}
-		vcpu->arch.exception.has_error_code = has_error;
-		vcpu->arch.exception.nr = nr;
-		vcpu->arch.exception.error_code = error_code;
-		vcpu->arch.exception.has_payload = has_payload;
-		vcpu->arch.exception.payload = payload;
+
+		vcpu->arch.exception_payload.valid = has_payload;
+		vcpu->arch.exception_payload.value = payload;
 		if (!is_guest_mode(vcpu))
 			kvm_deliver_exception_payload(vcpu);
 		return;
 	}
 
 	/* to check exception */
-	prev_nr = vcpu->arch.exception.nr;
+	prev_nr = vcpu->arch.injected_exception.nr;
 	if (prev_nr == DF_VECTOR) {
 		/* triple fault -> shutdown */
 		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
@@ -594,13 +600,14 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 		 * exception.pending = true so that the double fault
 		 * can trigger a nested vmexit.
 		 */
-		vcpu->arch.exception.pending = true;
-		vcpu->arch.exception.injected = false;
-		vcpu->arch.exception.has_error_code = true;
-		vcpu->arch.exception.nr = DF_VECTOR;
-		vcpu->arch.exception.error_code = 0;
-		vcpu->arch.exception.has_payload = false;
-		vcpu->arch.exception.payload = 0;
+		vcpu->arch.pending_exception.valid = true;
+		vcpu->arch.injected_exception.valid = false;
+		vcpu->arch.pending_exception.has_error_code = true;
+		vcpu->arch.pending_exception.nr = DF_VECTOR;
+		vcpu->arch.pending_exception.error_code = 0;
+
+		vcpu->arch.exception_payload.valid = false;
+		vcpu->arch.exception_payload.value = 0;
 	} else
 		/* replace previous exception with a new one in a hope
 		   that instruction re-execution will regenerate lost
@@ -648,9 +655,9 @@ EXPORT_SYMBOL_GPL(kvm_complete_insn_gp);
 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 {
 	++vcpu->stat.pf_guest;
-	vcpu->arch.exception.nested_apf =
+	vcpu->arch.exception_payload.nested_apf =
 		is_guest_mode(vcpu) && fault->async_page_fault;
-	if (vcpu->arch.exception.nested_apf) {
+	if (vcpu->arch.exception_payload.nested_apf) {
 		vcpu->arch.apf.nested_apf_token = fault->address;
 		kvm_queue_exception_e(vcpu, PF_VECTOR, fault->error_code);
 	} else {
@@ -4269,6 +4276,7 @@ static int kvm_vcpu_ioctl_x86_set_mce(struct kvm_vcpu *vcpu,
 static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 					       struct kvm_vcpu_events *events)
 {
+	struct kvm_queued_exception *exc;
 	process_nmi(vcpu);
 
 	if (kvm_check_request(KVM_REQ_SMI, vcpu))
@@ -4286,21 +4294,27 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	 * KVM_GET_VCPU_EVENTS.
 	 */
 	if (!vcpu->kvm->arch.exception_payload_enabled &&
-	    vcpu->arch.exception.pending && vcpu->arch.exception.has_payload)
+	    vcpu->arch.pending_exception.valid && vcpu->arch.exception_payload.valid)
 		kvm_deliver_exception_payload(vcpu);
 
+	WARN_ON_ONCE(vcpu->arch.pending_exception.valid &&
+		     vcpu->arch.injected_exception.valid);
+
+	exc = vcpu->arch.pending_exception.valid ? &vcpu->arch.pending_exception :
+						   &vcpu->arch.injected_exception;
+
 	/*
 	 * The API doesn't provide the instruction length for software
 	 * exceptions, so don't report them. As long as the guest RIP
 	 * isn't advanced, we should expect to encounter the exception
 	 * again.
 	 */
-	if (kvm_exception_is_soft(vcpu->arch.exception.nr)) {
+	if (kvm_exception_is_soft(exc->nr)) {
 		events->exception.injected = 0;
 		events->exception.pending = 0;
 	} else {
-		events->exception.injected = vcpu->arch.exception.injected;
-		events->exception.pending = vcpu->arch.exception.pending;
+		events->exception.injected = vcpu->arch.injected_exception.valid;
+		events->exception.pending = vcpu->arch.pending_exception.valid;
 		/*
 		 * For ABI compatibility, deliberately conflate
 		 * pending and injected exceptions when
@@ -4308,13 +4322,14 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 		 */
 		if (!vcpu->kvm->arch.exception_payload_enabled)
 			events->exception.injected |=
-				vcpu->arch.exception.pending;
+				vcpu->arch.pending_exception.valid;
 	}
-	events->exception.nr = vcpu->arch.exception.nr;
-	events->exception.has_error_code = vcpu->arch.exception.has_error_code;
-	events->exception.error_code = vcpu->arch.exception.error_code;
-	events->exception_has_payload = vcpu->arch.exception.has_payload;
-	events->exception_payload = vcpu->arch.exception.payload;
+	events->exception.nr = exc->nr;
+	events->exception.has_error_code = exc->has_error_code;
+	events->exception.error_code = exc->error_code;
+
+	events->exception_has_payload = vcpu->arch.exception_payload.valid;
+	events->exception_payload = vcpu->arch.exception_payload.value;
 
 	events->interrupt.injected =
 		vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft;
@@ -4349,6 +4364,8 @@ static void kvm_smm_changed(struct kvm_vcpu *vcpu);
 static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 					      struct kvm_vcpu_events *events)
 {
+	struct kvm_queued_exception *exc;
+
 	if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
 			      | KVM_VCPUEVENT_VALID_SIPI_VECTOR
 			      | KVM_VCPUEVENT_VALID_SHADOW
@@ -4368,6 +4385,12 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 		events->exception_has_payload = 0;
 	}
 
+	exc = events->exception.pending ? &vcpu->arch.pending_exception :
+					  &vcpu->arch.injected_exception;
+
+	vcpu->arch.pending_exception.valid = false;
+	vcpu->arch.injected_exception.valid = false;
+
 	if ((events->exception.injected || events->exception.pending) &&
 	    (events->exception.nr > 31 || events->exception.nr == NMI_VECTOR))
 		return -EINVAL;
@@ -4379,13 +4402,13 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	process_nmi(vcpu);
-	vcpu->arch.exception.injected = events->exception.injected;
-	vcpu->arch.exception.pending = events->exception.pending;
-	vcpu->arch.exception.nr = events->exception.nr;
-	vcpu->arch.exception.has_error_code = events->exception.has_error_code;
-	vcpu->arch.exception.error_code = events->exception.error_code;
-	vcpu->arch.exception.has_payload = events->exception_has_payload;
-	vcpu->arch.exception.payload = events->exception_payload;
+
+	exc->nr = events->exception.nr;
+	exc->has_error_code = events->exception.has_error_code;
+	exc->error_code = events->exception.error_code;
+
+	vcpu->arch.exception_payload.valid = events->exception_has_payload;
+	vcpu->arch.exception_payload.value = events->exception_payload;
 
 	vcpu->arch.interrupt.injected = events->interrupt.injected;
 	vcpu->arch.interrupt.nr = events->interrupt.nr;
@@ -8378,8 +8401,8 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu)
 
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
-		vcpu->arch.exception.error_code = false;
+	if (vcpu->arch.injected_exception.error_code && !is_protmode(vcpu))
+		vcpu->arch.injected_exception.error_code = false;
 	static_call(kvm_x86_queue_exception)(vcpu);
 }
 
@@ -8390,7 +8413,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 
 	/* try to reinject previous events if any */
 
-	if (vcpu->arch.exception.injected) {
+	if (vcpu->arch.injected_exception.valid) {
 		kvm_inject_exception(vcpu);
 		can_inject = false;
 	}
@@ -8408,7 +8431,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 	 * serviced prior to recognizing any new events in order to
 	 * fully complete the previous instruction.
 	 */
-	else if (!vcpu->arch.exception.pending) {
+	else if (!vcpu->arch.pending_exception.valid) {
 		if (vcpu->arch.nmi_injected) {
 			static_call(kvm_x86_set_nmi)(vcpu);
 			can_inject = false;
@@ -8418,8 +8441,8 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 		}
 	}
 
-	WARN_ON_ONCE(vcpu->arch.exception.injected &&
-		     vcpu->arch.exception.pending);
+	WARN_ON_ONCE(vcpu->arch.pending_exception.valid &&
+		     vcpu->arch.injected_exception.valid);
 
 	/*
 	 * Call check_nested_events() even if we reinjected a previous event
@@ -8434,19 +8457,19 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 	}
 
 	/* try to inject new event if pending */
-	if (vcpu->arch.exception.pending) {
-		trace_kvm_inj_exception(vcpu->arch.exception.nr,
-					vcpu->arch.exception.has_error_code,
-					vcpu->arch.exception.error_code);
+	if (vcpu->arch.pending_exception.valid) {
+		trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
+					vcpu->arch.pending_exception.has_error_code,
+					vcpu->arch.pending_exception.error_code);
 
-		vcpu->arch.exception.pending = false;
-		vcpu->arch.exception.injected = true;
+		vcpu->arch.injected_exception = vcpu->arch.pending_exception;
+		vcpu->arch.pending_exception.valid = false;
 
-		if (exception_type(vcpu->arch.exception.nr) == EXCPT_FAULT)
+		if (exception_type(vcpu->arch.injected_exception.nr) == EXCPT_FAULT)
 			__kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
 					     X86_EFLAGS_RF);
 
-		if (vcpu->arch.exception.nr == DB_VECTOR) {
+		if (vcpu->arch.injected_exception.nr == DB_VECTOR) {
 			kvm_deliver_exception_payload(vcpu);
 			if (vcpu->arch.dr7 & DR7_GD) {
 				vcpu->arch.dr7 &= ~DR7_GD;
@@ -8515,7 +8538,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
 		*req_immediate_exit = true;
 
-	WARN_ON(vcpu->arch.exception.pending);
+	WARN_ON(vcpu->arch.pending_exception.valid);
 	return;
 
 busy:
@@ -9624,7 +9647,7 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	kvm_rip_write(vcpu, regs->rip);
 	kvm_set_rflags(vcpu, regs->rflags | X86_EFLAGS_FIXED);
 
-	vcpu->arch.exception.pending = false;
+	vcpu->arch.pending_exception.valid = false;
 
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
@@ -9910,7 +9933,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 
 	if (dbg->control & (KVM_GUESTDBG_INJECT_DB | KVM_GUESTDBG_INJECT_BP)) {
 		r = -EBUSY;
-		if (vcpu->arch.exception.pending)
+		if (vcpu->arch.pending_exception.valid)
 			goto out;
 		if (dbg->control & KVM_GUESTDBG_INJECT_DB)
 			kvm_queue_exception(vcpu, DB_VECTOR);
@@ -10991,7 +11014,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.pv.pv_unhalted)
 		return true;
 
-	if (vcpu->arch.exception.pending)
+	if (vcpu->arch.pending_exception.valid)
 		return true;
 
 	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
@@ -11231,7 +11254,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
 {
 	if (unlikely(!lapic_in_kernel(vcpu) ||
 		     kvm_event_needs_reinjection(vcpu) ||
-		     vcpu->arch.exception.pending))
+		     vcpu->arch.pending_exception.valid))
 		return false;
 
 	if (kvm_hlt_in_guest(vcpu->kvm) && !kvm_can_deliver_async_pf(vcpu))
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index daccf20fbcd5..21d62387f5e6 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -60,8 +60,8 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu);
 
 static inline void kvm_clear_exception_queue(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.exception.pending = false;
-	vcpu->arch.exception.injected = false;
+	vcpu->arch.pending_exception.valid = false;
+	vcpu->arch.injected_exception.valid = false;
 }
 
 static inline void kvm_queue_interrupt(struct kvm_vcpu *vcpu, u8 vector,
@@ -79,7 +79,7 @@ static inline void kvm_clear_interrupt_queue(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_event_needs_reinjection(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.exception.injected || vcpu->arch.interrupt.injected ||
+	return vcpu->arch.injected_exception.valid || vcpu->arch.interrupt.injected ||
 		vcpu->arch.nmi_injected;
 }
 
-- 
2.26.2



* [PATCH 3/4] KVM: x86: correctly merge pending and injected exception
  2021-04-01 14:38 [PATCH 0/4] KVM: nSVM/nVMX: fix nested virtualization treatment of nested exceptions Maxim Levitsky
  2021-04-01 14:38 ` [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event Maxim Levitsky
  2021-04-01 14:38 ` [PATCH 2/4] KVM: x86: separate pending and injected exception Maxim Levitsky
@ 2021-04-01 14:38 ` Maxim Levitsky
  2021-04-01 19:56   ` Paolo Bonzini
  2021-04-01 14:38 ` [PATCH 4/4] KVM: x86: remove tweaking of inject_page_fault Maxim Levitsky
  3 siblings, 1 reply; 12+ messages in thread
From: Maxim Levitsky @ 2021-04-01 14:38 UTC (permalink / raw)
  To: kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
	Jim Mattson, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson, Maxim Levitsky

Allow a pending and an injected exception to co-exist
when both are raised.

Add a 'kvm_deliver_pending_exception' function which 'merges' the
pending and the injected exception, or delivers a VM exit carrying
both when L1 intercepts the pending exception.

The latter is done by vendor code, via the new nested callback
'deliver_exception_as_vmexit'.

kvm_deliver_pending_exception is called after each VM exit and
prior to VM entry, which ensures that during userspace VM exits
only an injected exception can be in a raised state.
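
For reference, a sketch of the merge rules, per Intel SDM Table 5-5
(the helper is hypothetical; exception_class() and the EXCPT_* classes
are the existing ones in x86.c):

	/*
	 * Sketch only: how a pending exception combines with an
	 * already-injected one.  Returns the vector that should end up
	 * pending, or -1 for a triple fault (shutdown).
	 */
	static int merged_exception_nr(unsigned int injected_nr,
				       unsigned int pending_nr)
	{
		int class1 = exception_class(injected_nr);
		int class2 = exception_class(pending_nr);

		if (injected_nr == DF_VECTOR)
			return -1;	/* #DF + anything -> triple fault */

		if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY) ||
		    (class1 == EXCPT_PF && class2 != EXCPT_BENIGN))
			return DF_VECTOR;	/* promote to a double fault */

		/* benign combination: the pending exception simply
		 * replaces the injected one */
		return pending_nr;
	}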

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |   9 ++
 arch/x86/kvm/svm/nested.c       |  27 ++--
 arch/x86/kvm/svm/svm.c          |   2 +-
 arch/x86/kvm/vmx/nested.c       |  58 ++++----
 arch/x86/kvm/vmx/vmx.c          |   2 +-
 arch/x86/kvm/x86.c              | 233 ++++++++++++++++++--------------
 6 files changed, 181 insertions(+), 150 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3b2fd276e8d5..a9b9cd030d9a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1346,6 +1346,15 @@ struct kvm_x86_ops {
 
 struct kvm_x86_nested_ops {
 	int (*check_events)(struct kvm_vcpu *vcpu);
+
+	/*
+	 * Deliver a pending exception as a VM exit if L1 intercepts it.
+	 * Returns -EBUSY if L1 does intercept the exception but it is
+	 * not possible to deliver it right now (for example when a
+	 * nested run is pending).
+	 */
+	int (*deliver_exception_as_vmexit)(struct kvm_vcpu *vcpu);
+
 	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
 	void (*triple_fault)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 7adad9b6dcad..ff745d59ffcf 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1061,21 +1061,6 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (vcpu->arch.pending_exception.valid) {
-		/*
-		 * Only a pending nested run can block a pending exception.
-		 * Otherwise an injected NMI/interrupt should either be
-		 * lost or delivered to the nested hypervisor in the EXITINTINFO
-		 * vmcb field, while delivering the pending exception.
-		 */
-		if (svm->nested.nested_run_pending)
-                        return -EBUSY;
-		if (!nested_exit_on_exception(svm))
-			return 0;
-		nested_svm_inject_exception_vmexit(svm);
-		return 0;
-	}
-
 	if (vcpu->arch.smi_pending && !svm_smi_blocked(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
@@ -1107,6 +1092,17 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+int svm_deliver_nested_exception_as_vmexit(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (svm->nested.nested_run_pending)
+		return -EBUSY;
+	if (nested_exit_on_exception(svm))
+		nested_svm_inject_exception_vmexit(svm);
+	return 0;
+}
+
 int nested_svm_exit_special(struct vcpu_svm *svm)
 {
 	u32 exit_code = svm->vmcb->control.exit_code;
@@ -1321,6 +1317,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 struct kvm_x86_nested_ops svm_nested_ops = {
 	.check_events = svm_check_nested_events,
 	.triple_fault = nested_svm_triple_fault,
+	.deliver_exception_as_vmexit = svm_deliver_nested_exception_as_vmexit,
 	.get_nested_state_pages = svm_get_nested_state_pages,
 	.get_state = svm_get_nested_state,
 	.set_state = svm_set_nested_state,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 90b541138c5a..b89e48574c39 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -363,7 +363,7 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu)
 	bool has_error_code = vcpu->arch.injected_exception.has_error_code;
 	u32 error_code = vcpu->arch.injected_exception.error_code;
 
-	kvm_deliver_exception_payload(vcpu);
+	WARN_ON_ONCE(vcpu->arch.pending_exception.valid);
 
 	if (nr == BP_VECTOR && !nrips) {
 		unsigned long rip, old_rip = kvm_rip_read(vcpu);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 5d54fecff9a7..1c09b132c55c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3768,7 +3768,6 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long exit_qual;
 	bool block_nested_events =
 	    vmx->nested.nested_run_pending || kvm_event_needs_reinjection(vcpu);
 	bool mtf_pending = vmx->nested.mtf_pending;
@@ -3804,41 +3803,15 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	/*
-	 * Process any exceptions that are not debug traps before MTF.
-	 *
-	 * Note that only a pending nested run can block a pending exception.
-	 * Otherwise an injected NMI/interrupt should either be
-	 * lost or delivered to the nested hypervisor in the IDT_VECTORING_INFO,
-	 * while delivering the pending exception.
-	 */
-
-	if (vcpu->arch.pending_exception.valid && !vmx_pending_dbg_trap(vcpu)) {
-		if (vmx->nested.nested_run_pending)
-			return -EBUSY;
-		if (!nested_vmx_check_exception(vcpu, &exit_qual))
-			goto no_vmexit;
-		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
-		return 0;
-	}
-
 	if (mtf_pending) {
 		if (block_nested_events)
 			return -EBUSY;
+
 		nested_vmx_update_pending_dbg(vcpu);
 		nested_vmx_vmexit(vcpu, EXIT_REASON_MONITOR_TRAP_FLAG, 0, 0);
 		return 0;
 	}
 
-	if (vcpu->arch.pending_exception.valid) {
-		if (vmx->nested.nested_run_pending)
-			return -EBUSY;
-		if (!nested_vmx_check_exception(vcpu, &exit_qual))
-			goto no_vmexit;
-		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
-		return 0;
-	}
-
 	if (nested_vmx_preemption_timer_pending(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
@@ -3884,6 +3857,34 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static int nested_vmx_deliver_exception_as_vmexit(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	unsigned long exit_qual;
+
+	if (vmx->nested.nested_run_pending)
+		return -EBUSY;
+
+	if (vmx->nested.mtf_pending && vmx_pending_dbg_trap(vcpu)) {
+		/*
+		 * A pending monitor trap takes precedence over a pending
+		 * debug exception, which is 'stashed' into the
+		 * 'GUEST_PENDING_DBG_EXCEPTIONS' field.
+		 */
+
+		nested_vmx_update_pending_dbg(vcpu);
+		vmx->nested.mtf_pending = false;
+		nested_vmx_vmexit(vcpu, EXIT_REASON_MONITOR_TRAP_FLAG, 0, 0);
+		return 0;
+	}
+	if (vcpu->arch.pending_exception.valid) {
+		if (nested_vmx_check_exception(vcpu, &exit_qual))
+			nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
+		return 0;
+	}
+	return 0;
+}
+
 static u32 vmx_get_preemption_timer_value(struct kvm_vcpu *vcpu)
 {
 	ktime_t remaining =
@@ -6603,6 +6604,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
 
 struct kvm_x86_nested_ops vmx_nested_ops = {
 	.check_events = vmx_check_nested_events,
+	.deliver_exception_as_vmexit = nested_vmx_deliver_exception_as_vmexit,
 	.hv_timer_pending = nested_vmx_preemption_timer_pending,
 	.triple_fault = nested_vmx_triple_fault,
 	.get_state = vmx_get_nested_state,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a9b241d2b271..fc6bc40d47b0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1682,7 +1682,7 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 	u32 error_code = vcpu->arch.injected_exception.error_code;
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
-	kvm_deliver_exception_payload(vcpu);
+	WARN_ON_ONCE(vcpu->arch.pending_exception.valid);
 
 	if (has_error_code) {
 		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, error_code);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 493d87b0c2d5..a363204f37be 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -535,86 +535,30 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload);
 
 static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
-		unsigned nr, bool has_error, u32 error_code,
-	        bool has_payload, unsigned long payload, bool reinject)
+				   unsigned int nr, bool has_error, u32 error_code,
+				   bool has_payload, unsigned long payload,
+				   bool reinject)
 {
-	u32 prev_nr;
-	int class1, class2;
-
+	struct kvm_queued_exception *exc;
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
-	if (!vcpu->arch.pending_exception.valid && !vcpu->arch.injected_exception.valid) {
-	queue:
-		if (reinject) {
-			/*
-			 * On vmentry, vcpu->arch.exception.pending is only
-			 * true if an event injection was blocked by
-			 * nested_run_pending.  In that case, however,
-			 * vcpu_enter_guest requests an immediate exit,
-			 * and the guest shouldn't proceed far enough to
-			 * need reinjection.
-			 */
-			WARN_ON_ONCE(vcpu->arch.pending_exception.valid);
-			if (WARN_ON_ONCE(has_payload)) {
-				/*
-				 * A reinjected event has already
-				 * delivered its payload.
-				 */
-				has_payload = false;
-				payload = 0;
-			}
-
-			vcpu->arch.injected_exception.valid = true;
-			vcpu->arch.injected_exception.has_error_code = has_error;
-			vcpu->arch.injected_exception.nr = nr;
-			vcpu->arch.injected_exception.error_code = error_code;
+	WARN_ON_ONCE(vcpu->arch.pending_exception.valid);
+	WARN_ON_ONCE(reinject && vcpu->arch.injected_exception.valid);
 
-		} else {
-			vcpu->arch.pending_exception.valid = true;
-			vcpu->arch.injected_exception.valid = false;
-			vcpu->arch.pending_exception.has_error_code = has_error;
-			vcpu->arch.pending_exception.nr = nr;
-			vcpu->arch.pending_exception.error_code = error_code;
-		}
-
-		vcpu->arch.exception_payload.valid = has_payload;
-		vcpu->arch.exception_payload.value = payload;
-		if (!is_guest_mode(vcpu))
-			kvm_deliver_exception_payload(vcpu);
-		return;
-	}
-
-	/* to check exception */
-	prev_nr = vcpu->arch.injected_exception.nr;
-	if (prev_nr == DF_VECTOR) {
-		/* triple fault -> shutdown */
-		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
-		return;
-	}
-	class1 = exception_class(prev_nr);
-	class2 = exception_class(nr);
-	if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY)
-		|| (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
-		/*
-		 * Generate double fault per SDM Table 5-5.  Set
-		 * exception.pending = true so that the double fault
-		 * can trigger a nested vmexit.
-		 */
-		vcpu->arch.pending_exception.valid = true;
-		vcpu->arch.injected_exception.valid = false;
-		vcpu->arch.pending_exception.has_error_code = true;
-		vcpu->arch.pending_exception.nr = DF_VECTOR;
-		vcpu->arch.pending_exception.error_code = 0;
+	exc = reinject ? &vcpu->arch.injected_exception :
+			 &vcpu->arch.pending_exception;
+	exc->valid = true;
+	exc->nr = nr;
+	exc->has_error_code = has_error;
+	exc->error_code = error_code;
 
-		vcpu->arch.exception_payload.valid = false;
-		vcpu->arch.exception_payload.value = 0;
-	} else
-		/* replace previous exception with a new one in a hope
-		   that instruction re-execution will regenerate lost
-		   exception */
-		goto queue;
+	/* A re-injected exception has its payload already delivered. */
+	WARN_ON_ONCE(reinject && has_payload);
+	vcpu->arch.exception_payload.valid = has_payload;
+	vcpu->arch.exception_payload.value = payload;
 }
 
+
 void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr)
 {
 	kvm_multiple_exception(vcpu, nr, false, 0, false, 0, false);
@@ -641,6 +585,95 @@ static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr,
 			       true, payload, false);
 }
 
+static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu)
+{
+	int class1, class2, ret;
+
+	/* try to deliver current pending exception as VM exit */
+	if (is_guest_mode(vcpu)) {
+		ret = kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
+		if (ret || !vcpu->arch.pending_exception.valid)
+			return ret;
+	}
+
+	/* No injected exception, so just deliver the payload and inject it */
+	if (!vcpu->arch.injected_exception.valid) {
+		trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
+					vcpu->arch.pending_exception.has_error_code,
+					vcpu->arch.pending_exception.error_code);
+queue:
+		/* Intel SDM 17.3.1.1 */
+		if (exception_type(vcpu->arch.pending_exception.nr) == EXCPT_FAULT)
+			__kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
+					     X86_EFLAGS_RF);
+
+		kvm_deliver_exception_payload(vcpu);
+
+		/* Intel SDM 17.2.4
+		 * The processor clears the GD flag upon entering the
+		 * debug exception handler, to allow the handler access
+		 * to the debug registers.
+		 */
+		if (vcpu->arch.pending_exception.nr == DB_VECTOR) {
+			if (vcpu->arch.dr7 & DR7_GD) {
+				vcpu->arch.dr7 &= ~DR7_GD;
+				kvm_update_dr7(vcpu);
+			}
+		}
+
+		if (vcpu->arch.pending_exception.error_code && !is_protmode(vcpu))
+			vcpu->arch.pending_exception.error_code = false;
+
+		vcpu->arch.injected_exception = vcpu->arch.pending_exception;
+		vcpu->arch.pending_exception.valid = false;
+		return 0;
+	}
+
+	/* Convert a pending exception and an injected #DF to a triple fault */
+	if (vcpu->arch.injected_exception.nr == DF_VECTOR) {
+		/* triple fault -> shutdown */
+		vcpu->arch.injected_exception.valid = false;
+		vcpu->arch.pending_exception.valid = false;
+		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
+		return 0;
+	}
+
+	class1 = exception_class(vcpu->arch.injected_exception.nr);
+	class2 = exception_class(vcpu->arch.pending_exception.nr);
+
+	if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY)
+		|| (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
+
+		/* Generate double fault per SDM Table 5-5. */
+		vcpu->arch.injected_exception.valid = false;
+		vcpu->arch.pending_exception.valid = true;
+		vcpu->arch.pending_exception.has_error_code = true;
+		vcpu->arch.pending_exception.nr = DF_VECTOR;
+		vcpu->arch.pending_exception.error_code = 0;
+		vcpu->arch.exception_payload.valid = false;
+	} else
+		/* Drop the injected exception and replace it with the pending one */
+		goto queue;
+
+	return 0;
+}
+
+static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
+{
+	int ret = 0;
+
+	if (!vcpu->arch.pending_exception.valid)
+		return ret;
+
+	ret = kvm_do_deliver_pending_exception(vcpu);
+
+	if (ret || !vcpu->arch.pending_exception.valid)
+		return ret;
+
+	WARN_ON_ONCE(vcpu->arch.pending_exception.nr != DF_VECTOR);
+	return kvm_do_deliver_pending_exception(vcpu);
+}
+
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err)
 {
 	if (err)
@@ -4297,6 +4330,12 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	    vcpu->arch.pending_exception.valid && vcpu->arch.exception_payload.valid)
 		kvm_deliver_exception_payload(vcpu);
 
+	/*
+	 * Currently we merge the pending and the injected exceptions after
+	 * each VM exit, which can fail only when a nested run is pending,
+	 * in which case only an injected exception (from us or L1) is possible.
+	 */
+
 	WARN_ON_ONCE(vcpu->arch.pending_exception.valid &&
 		     vcpu->arch.injected_exception.valid);
 
@@ -8401,8 +8440,6 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu)
 
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.injected_exception.error_code && !is_protmode(vcpu))
-		vcpu->arch.injected_exception.error_code = false;
 	static_call(kvm_x86_queue_exception)(vcpu);
 }
 
@@ -8411,8 +8448,13 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 	int r;
 	bool can_inject = true;
 
-	/* try to reinject previous events if any */
+	r = kvm_deliver_pending_exception(vcpu);
+	if (r < 0)
+		goto busy;
+
+	WARN_ON_ONCE(vcpu->arch.pending_exception.valid);
 
+	/* try to reinject previous events if any */
 	if (vcpu->arch.injected_exception.valid) {
 		kvm_inject_exception(vcpu);
 		can_inject = false;
@@ -8431,7 +8473,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 	 * serviced prior to recognizing any new events in order to
 	 * fully complete the previous instruction.
 	 */
-	else if (!vcpu->arch.pending_exception.valid) {
+	else {
 		if (vcpu->arch.nmi_injected) {
 			static_call(kvm_x86_set_nmi)(vcpu);
 			can_inject = false;
@@ -8441,9 +8483,6 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 		}
 	}
 
-	WARN_ON_ONCE(vcpu->arch.pending_exception.valid &&
-		     vcpu->arch.injected_exception.valid);
-
 	/*
 	 * Call check_nested_events() even if we reinjected a previous event
 	 * in order for caller to determine if it should require immediate-exit
@@ -8456,30 +8495,6 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 			goto busy;
 	}
 
-	/* try to inject new event if pending */
-	if (vcpu->arch.pending_exception.valid) {
-		trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
-					vcpu->arch.pending_exception.has_error_code,
-					vcpu->arch.pending_exception.error_code);
-
-		vcpu->arch.injected_exception = vcpu->arch.pending_exception;
-		vcpu->arch.pending_exception.valid = false;
-
-		if (exception_type(vcpu->arch.injected_exception.nr) == EXCPT_FAULT)
-			__kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
-					     X86_EFLAGS_RF);
-
-		if (vcpu->arch.injected_exception.nr == DB_VECTOR) {
-			kvm_deliver_exception_payload(vcpu);
-			if (vcpu->arch.dr7 & DR7_GD) {
-				vcpu->arch.dr7 &= ~DR7_GD;
-				kvm_update_dr7(vcpu);
-			}
-		}
-
-		kvm_inject_exception(vcpu);
-		can_inject = false;
-	}
 
 	/*
 	 * Finally, inject interrupt events.  If an event cannot be injected
@@ -9270,6 +9285,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		kvm_lapic_sync_from_vapic(vcpu);
 
 	r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath);
+
+	/*
+	 * Deliver the pending exception so that the state of having a pending
+	 * and an injected exception is not visible to userspace.
+	 */
+
+	kvm_deliver_pending_exception(vcpu);
+
 	return r;
 
 cancel_injection:
@@ -11014,7 +11037,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.pv.pv_unhalted)
 		return true;
 
-	if (vcpu->arch.pending_exception.valid)
+	if (vcpu->arch.pending_exception.valid || vcpu->arch.injected_exception.valid)
 		return true;
 
 	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
-- 
2.26.2



* [PATCH 4/4] KVM: x86: remove tweaking of inject_page_fault
  2021-04-01 14:38 [PATCH 0/4] KVM: nSVM/nVMX: fix nested virtualization treatment of nested exceptions Maxim Levitsky
                   ` (2 preceding siblings ...)
  2021-04-01 14:38 ` [PATCH 3/4] KVM: x86: correctly merge " Maxim Levitsky
@ 2021-04-01 14:38 ` Maxim Levitsky
  3 siblings, 0 replies; 12+ messages in thread
From: Maxim Levitsky @ 2021-04-01 14:38 UTC (permalink / raw)
  To: kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
	Jim Mattson, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson, Maxim Levitsky

This is no longer needed, since page faults can now be
injected as regular exceptions in all cases.
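
For reference, a sketch (assuming the previous patches in this series;
the async page fault special case is omitted) of the generic path that
makes the removed hooks unnecessary:

	/*
	 * Sketch only: kvm_inject_page_fault() just queues a regular
	 * pending #PF; kvm_deliver_pending_exception() later asks the
	 * vendor nested_ops->deliver_exception_as_vmexit() callback
	 * whether L1 intercepts it, synthesizing the VMexit if so.
	 */
	void kvm_inject_page_fault(struct kvm_vcpu *vcpu,
				   struct x86_exception *fault)
	{
		kvm_queue_exception_e_p(vcpu, PF_VECTOR, fault->error_code,
					fault->address);
	}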

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 20 --------------------
 arch/x86/kvm/vmx/nested.c | 23 -----------------------
 2 files changed, 43 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index ff745d59ffcf..25840399841e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -53,23 +53,6 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 	nested_svm_vmexit(svm);
 }
 
-static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault)
-{
-       struct vcpu_svm *svm = to_svm(vcpu);
-       WARN_ON(!is_guest_mode(vcpu));
-
-       if (vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) &&
-	   !svm->nested.nested_run_pending) {
-               svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR;
-               svm->vmcb->control.exit_code_hi = 0;
-               svm->vmcb->control.exit_info_1 = fault->error_code;
-               svm->vmcb->control.exit_info_2 = fault->address;
-               nested_svm_vmexit(svm);
-       } else {
-               kvm_inject_page_fault(vcpu, fault);
-       }
-}
-
 static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -575,9 +558,6 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 	if (ret)
 		return ret;
 
-	if (!npt_enabled)
-		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
-
 	svm_set_gif(svm, true);
 
 	return 0;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1c09b132c55c..8add4c27e718 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -418,26 +418,6 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
 	return 0;
 }
 
-
-static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
-		struct x86_exception *fault)
-{
-	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
-
-	WARN_ON(!is_guest_mode(vcpu));
-
-	if (nested_vmx_is_page_fault_vmexit(vmcs12, fault->error_code) &&
-		!to_vmx(vcpu)->nested.nested_run_pending) {
-		vmcs12->vm_exit_intr_error_code = fault->error_code;
-		nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
-				  PF_VECTOR | INTR_TYPE_HARD_EXCEPTION |
-				  INTR_INFO_DELIVER_CODE_MASK | INTR_INFO_VALID_MASK,
-				  fault->address);
-	} else {
-		kvm_inject_page_fault(vcpu, fault);
-	}
-}
-
 static int nested_vmx_check_io_bitmap_controls(struct kvm_vcpu *vcpu,
 					       struct vmcs12 *vmcs12)
 {
@@ -2588,9 +2568,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		vmcs_write64(GUEST_PDPTR3, vmcs12->guest_pdptr3);
 	}
 
-	if (!enable_ept)
-		vcpu->arch.walk_mmu->inject_page_fault = vmx_inject_page_fault_nested;
-
 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
 	    WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
 				     vmcs12->guest_ia32_perf_global_ctrl)))
-- 
2.26.2



* Re: [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event
  2021-04-01 14:38 ` [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event Maxim Levitsky
@ 2021-04-01 17:05   ` Paolo Bonzini
  2021-04-01 17:12     ` Maxim Levitsky
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2021-04-01 17:05 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Jim Mattson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson

On 01/04/21 16:38, Maxim Levitsky wrote:
> An injected interrupt/NMI should not block a pending exception;
> it should rather either be lost if the nested hypervisor doesn't
> intercept the pending exception (as on stock x86), or be delivered
> in the EXITINTINFO/IDT_VECTORING_INFO field, as part of the VMexit
> that corresponds to the pending exception.
> 
> The only reason for a pending exception to be blocked is a pending
> nested run (which can't really happen currently,
> but is still worth checking for).
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>

This patch would be an almost separate bugfix, right?  I am going to 
queue this, but a confirmation would be helpful.

Paolo

> ---
>   arch/x86/kvm/svm/nested.c |  8 +++++++-
>   arch/x86/kvm/vmx/nested.c | 10 ++++++++--
>   2 files changed, 15 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 8523f60adb92..34a37b2bd486 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1062,7 +1062,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
>   	}
>   
>   	if (vcpu->arch.exception.pending) {
> -		if (block_nested_events)
> +		/*
> +		 * Only a pending nested run can block a pending exception.
> +		 * Otherwise an injected NMI/interrupt should either be
> +		 * lost or delivered to the nested hypervisor in the EXITINTINFO
> +		 * vmcb field, while delivering the pending exception.
> +		 */
> +		if (svm->nested.nested_run_pending)
>                           return -EBUSY;
>   		if (!nested_exit_on_exception(svm))
>   			return 0;
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index fd334e4aa6db..c3ba842fc07f 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3806,9 +3806,15 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
>   
>   	/*
>   	 * Process any exceptions that are not debug traps before MTF.
> +	 *
> +	 * Note that only a pending nested run can block a pending exception.
> +	 * Otherwise an injected NMI/interrupt should either be
> +	 * lost or delivered to the nested hypervisor in the IDT_VECTORING_INFO,
> +	 * while delivering the pending exception.
>   	 */
> +
>   	if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) {
> -		if (block_nested_events)
> +		if (vmx->nested.nested_run_pending)
>   			return -EBUSY;
>   		if (!nested_vmx_check_exception(vcpu, &exit_qual))
>   			goto no_vmexit;
> @@ -3825,7 +3831,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
>   	}
>   
>   	if (vcpu->arch.exception.pending) {
> -		if (block_nested_events)
> +		if (vmx->nested.nested_run_pending)
>   			return -EBUSY;
>   		if (!nested_vmx_check_exception(vcpu, &exit_qual))
>   			goto no_vmexit;
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event
  2021-04-01 17:05   ` Paolo Bonzini
@ 2021-04-01 17:12     ` Maxim Levitsky
  0 siblings, 0 replies; 12+ messages in thread
From: Maxim Levitsky @ 2021-04-01 17:12 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Jim Mattson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson

On Thu, 2021-04-01 at 19:05 +0200, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > Injected interrupts/nmi should not block a pending exception,
> > but rather be either lost if nested hypervisor doesn't
> > intercept the pending exception (as in stock x86), or be delivered
> > in exitintinfo/IDT_VECTORING_INFO field, as a part of a VMexit
> > that corresponds to the pending exception.
> > 
> > The only reason for an exception to be blocked is when nested run
> > is pending (and that can't really happen currently
> > but still worth checking for).
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> 
> This patch would be an almost separate bugfix, right?  I am going to 
> queue this, but a confirmation would be helpful.

Yes, this patch doesn't depend on anything else.
Thanks!
Best regards,
	Maxim Levitsky

> 
> Paolo
> 
> > ---
> >   arch/x86/kvm/svm/nested.c |  8 +++++++-
> >   arch/x86/kvm/vmx/nested.c | 10 ++++++++--
> >   2 files changed, 15 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index 8523f60adb92..34a37b2bd486 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -1062,7 +1062,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
> >   	}
> >   
> >   	if (vcpu->arch.exception.pending) {
> > -		if (block_nested_events)
> > +		/*
> > +		 * Only a pending nested run can block a pending exception.
> > +		 * Otherwise an injected NMI/interrupt should either be
> > +		 * lost or delivered to the nested hypervisor in the EXITINTINFO
> > +		 * vmcb field, while delivering the pending exception.
> > +		 */
> > +		if (svm->nested.nested_run_pending)
> >                           return -EBUSY;
> >   		if (!nested_exit_on_exception(svm))
> >   			return 0;
> > diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> > index fd334e4aa6db..c3ba842fc07f 100644
> > --- a/arch/x86/kvm/vmx/nested.c
> > +++ b/arch/x86/kvm/vmx/nested.c
> > @@ -3806,9 +3806,15 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
> >   
> >   	/*
> >   	 * Process any exceptions that are not debug traps before MTF.
> > +	 *
> > +	 * Note that only a pending nested run can block a pending exception.
> > +	 * Otherwise an injected NMI/interrupt should either be
> > +	 * lost or delivered to the nested hypervisor in the IDT_VECTORING_INFO,
> > +	 * while delivering the pending exception.
> >   	 */
> > +
> >   	if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) {
> > -		if (block_nested_events)
> > +		if (vmx->nested.nested_run_pending)
> >   			return -EBUSY;
> >   		if (!nested_vmx_check_exception(vcpu, &exit_qual))
> >   			goto no_vmexit;
> > @@ -3825,7 +3831,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
> >   	}
> >   
> >   	if (vcpu->arch.exception.pending) {
> > -		if (block_nested_events)
> > +		if (vmx->nested.nested_run_pending)
> >   			return -EBUSY;
> >   		if (!nested_vmx_check_exception(vcpu, &exit_qual))
> >   			goto no_vmexit;
> > 



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/4] KVM: x86: correctly merge pending and injected exception
  2021-04-01 14:38 ` [PATCH 3/4] KVM: x86: correctly merge " Maxim Levitsky
@ 2021-04-01 19:56   ` Paolo Bonzini
  2021-04-01 22:56     ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2021-04-01 19:56 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Thomas Gleixner, Wanpeng Li, Borislav Petkov, Jim Mattson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Sean Christopherson

On 01/04/21 16:38, Maxim Levitsky wrote:
> +static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu)
> +{
> +	int class1, class2, ret;
> +
> +	/* try to deliver current pending exception as VM exit */
> +	if (is_guest_mode(vcpu)) {
> +		ret = kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> +		if (ret || !vcpu->arch.pending_exception.valid)
> +			return ret;
> +	}
> +
> +	/* No injected exception, so just deliver the payload and inject it */
> +	if (!vcpu->arch.injected_exception.valid) {
> +		trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> +					vcpu->arch.pending_exception.has_error_code,
> +					vcpu->arch.pending_exception.error_code);
> +queue:

If you move the queue label to the top of the function, you can "goto
queue" for #DF as well and you don't need to call
kvm_do_deliver_pending_exception again.  In fact you can merge this
function and kvm_deliver_pending_exception completely:


static int kvm_deliver_pending_exception_as_vmexit(struct kvm_vcpu *vcpu)
{
	WARN_ON(!vcpu->arch.pending_exception.valid);
	if (is_guest_mode(vcpu))
		return kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
	else
		return 0;
}

static int kvm_merge_injected_exception(struct kvm_vcpu *vcpu)
{
	int class1, class2, ret;

	/*
	 * First check if the pending exception takes precedence
	 * over the injected one, which will be reported in the
	 * vmexit info.
	 */
	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
	if (ret || !vcpu->arch.pending_exception.valid)
		return ret;

	if (vcpu->arch.injected_exception.nr == DF_VECTOR) {
		...
		return 0;
	}
	...
	if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY)
	    || (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
		...
	}
	vcpu->arch.injected_exception.valid = false;
	return 0;
}

static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
{
	int ret;

	if (!vcpu->arch.pending_exception.valid)
		return 0;

	if (vcpu->arch.injected_exception.valid)
		kvm_merge_injected_exception(vcpu);

	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
	if (ret || !vcpu->arch.pending_exception.valid)
		return ret;

	trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
				vcpu->arch.pending_exception.has_error_code,
				vcpu->arch.pending_exception.error_code);
	...
}

Note that if the pending exception is a page fault, its payload
must be delivered to CR2 before converting it to a double fault.
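
For instance, that step could look roughly like this (a sketch only:
kvm_promote_to_double_fault is an illustrative name, not a function in
the series, and the payload fields follow the layout from patch 2):

static void kvm_promote_to_double_fault(struct kvm_vcpu *vcpu)
{
	struct kvm_queued_exception *pend = &vcpu->arch.pending_exception;

	/* A #PF payload is the faulting address; flush it to CR2 first. */
	if (pend->nr == PF_VECTOR && vcpu->arch.exception_payload.valid) {
		vcpu->arch.cr2 = vcpu->arch.exception_payload.value;
		vcpu->arch.exception_payload.valid = false;
	}

	pend->nr = DF_VECTOR;
	pend->has_error_code = true;
	pend->error_code = 0;
}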

Going forward to vmx.c:

> 
>  	if (mtf_pending) {
>  		if (block_nested_events)
>  			return -EBUSY;
> +
>  		nested_vmx_update_pending_dbg(vcpu);

Should this instead be "WARN_ON(vmx_pending_dbg_trap(vcpu));", since
the pending-#DB-plus-MTF combination is handled in
vmx_deliver_exception_as_vmexit?...

> 
> +
> +	if (vmx->nested.mtf_pending && vmx_pending_dbg_trap(vcpu)) {
> +		/*
> +		 * A pending monitor trap takes precedence over pending
> +		 * debug exception which is 'stashed' into
> +		 * 'GUEST_PENDING_DBG_EXCEPTIONS'
> +		 */
> +
> +		nested_vmx_update_pending_dbg(vcpu);
> +		vmx->nested.mtf_pending = false;
> +		nested_vmx_vmexit(vcpu, EXIT_REASON_MONITOR_TRAP_FLAG, 0, 0);
> +		return 0;
> +	}

... though this is quite ugly, even more so if you add the case of an
INIT with a pending #DB.  The problem is that INIT and MTF have higher
priority than debug exceptions.

The good thing is that an INIT or MTF plus #DB cannot happen with
nested_run_pending == 1, so it will always be injected right away.

There is precedent with KVM_GET_* modifying the CPU state; in
particular, KVM_GET_MPSTATE can modify CS and RIP and even cause a
vmexit via kvm_apic_accept_events.  And in fact, because
kvm_apic_accept_events calls kvm_check_nested_events, calling it
from KVM_GET_VCPU_EVENTS would fix the problem: the injected exception
would go into the IDT-vectored exit field, while the pending exception
would go into GUEST_PENDING_DBG_EXCEPTIONS and disappear.

However, you cannot do kvm_apic_accept_events twice because there would
be a race with INIT: a #DB exception could be first stored by
KVM_GET_VCPU_EVENTS, then disappear when kvm_apic_accept_events
is called again by KVM_GET_MPSTATE.

Fortunately, the correct order for KVM_GET_* events is first
KVM_GET_VCPU_EVENTS and then KVM_GET_MPSTATE.  So perhaps
instead of calling kvm_deliver_pending_exception on vmexit,
KVM_GET_VCPU_EVENTS could call kvm_apic_accept_events (and thus
kvm_check_nested_events) instead of KVM_GET_MPSTATE.  In addition:
nested_ops.check_events would be split in two, with high-priority
events (triple fault, now in kvm_check_nested_events; INIT; MTF)
remaining in the first and interrupts in the second (tentatively
named check_interrupts).
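
For concreteness, the split could look roughly like this (a sketch
only; check_interrupts is the tentative name from above):

struct kvm_x86_nested_ops {
	/* High-priority events: triple fault, INIT, MTF. */
	int (*check_events)(struct kvm_vcpu *vcpu);
	/* Lower-priority events: pending interrupts and NMIs. */
	int (*check_interrupts)(struct kvm_vcpu *vcpu);
	/* ... remaining callbacks unchanged ... */
};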

I'll take a look at cleaning up the current mess we have around
kvm_apic_accept_events, where most of the calls do not have to
handle guest mode at all.

Thanks for this work and for showing that it's possible to fix the
underlying mess with exception vmexit.  It may be a bit more
complicated than this, but it's a great start.

Paolo


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/4] KVM: x86: correctly merge pending and injected exception
  2021-04-01 19:56   ` Paolo Bonzini
@ 2021-04-01 22:56     ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2021-04-01 22:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Maxim Levitsky, kvm, Thomas Gleixner, Wanpeng Li,
	Borislav Petkov, Jim Mattson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Thu, Apr 01, 2021, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > +static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu)
> > +{
> > +	int class1, class2, ret;
> > +
> > +	/* try to deliver current pending exception as VM exit */
> > +	if (is_guest_mode(vcpu)) {
> > +		ret = kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> > +		if (ret || !vcpu->arch.pending_exception.valid)
> > +			return ret;
> > +	}
> > +
> > +	/* No injected exception, so just deliver the payload and inject it */
> > +	if (!vcpu->arch.injected_exception.valid) {
> > +		trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> > +					vcpu->arch.pending_exception.has_error_code,
> > +					vcpu->arch.pending_exception.error_code);
> > +queue:
> 
> If you move the queue label to the top of the function, you can "goto
> queue" for #DF as well and you don't need to call
> kvm_do_deliver_pending_exception again.  In fact you can merge this
> function and kvm_deliver_pending_exception completely:
> 
> 
> static int kvm_deliver_pending_exception_as_vmexit(struct kvm_vcpu *vcpu)
> {
> 	WARN_ON(!vcpu->arch.pending_exception.valid);
> 	if (is_guest_mode(vcpu))
> 		return kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> 	else
> 		return 0;
> }
> 
> static int kvm_merge_injected_exception(struct kvm_vcpu *vcpu)
> {
> 	int class1, class2, ret;
> 
> 	/*
> 	 * First check if the pending exception takes precedence
> 	 * over the injected one, which will be reported in the
> 	 * vmexit info.
> 	 */
> 	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
> 	if (ret || !vcpu->arch.pending_exception.valid)
> 		return ret;
> 
> 	if (vcpu->arch.injected_exception.nr == DF_VECTOR) {
> 		...
> 		return 0;
> 	}
> 	...
> 	if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY)
> 	    || (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
> 		...
> 	}
> 	vcpu->arch.injected_exception.valid = false;
> 	return 0;
> }
> 
> static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
> {
> 	int ret;
> 
> 	if (!vcpu->arch.pending_exception.valid)
> 		return 0;
> 
> 	if (vcpu->arch.injected_exception.valid)
> 		kvm_merge_injected_exception(vcpu);
> 
> 	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
> 	if (ret || !vcpu->arch.pending_exception.valid)

I really don't like querying arch.pending_exception.valid to see if the exception
was morphed to a VM-Exit.  I also find kvm_deliver_pending_exception_as_vmexit()
to be misleading; to me, that reads as being a command, i.e. "deliver this
pending exception as a VM-Exit".

It'd also be nice to make the helpers closer to pure functions, i.e. pass the
exception as a param instead of pulling it from vcpu->arch.

Now that we have static_call, the number of calls into vendor code isn't a huge
issue.  Moving nested_run_pending to arch code would help, too.  What about
doing something like:

static bool kvm_l1_wants_exception_vmexit(struct kvm_vcpu *vcpu, u8 vector)
{
	return is_guest_mode(vcpu) && kvm_x86_l1_wants_exception(vcpu, vector);
}

	...

	if (!kvm_x86_exception_allowed(vcpu))
		return -EBUSY;

	if (kvm_l1_wants_exception_vmexit(vcpu, vcpu->arch...))
		return kvm_x86_deliver_exception_as_vmexit(...);

> 		return ret;
> 
> 	trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> 				vcpu->arch.pending_exception.has_error_code,
> 				vcpu->arch.pending_exception.error_code);
> 	...
> }
> 
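
Tying those together, the delivery path could then read along these
lines (still just a sketch; kvm_x86_exception_allowed,
kvm_l1_wants_exception_vmexit and kvm_x86_deliver_exception_as_vmexit
are the hypothetical helpers named above, and payload delivery is
elided):

static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
{
	struct kvm_queued_exception *ex = &vcpu->arch.pending_exception;

	if (!ex->valid)
		return 0;

	if (!kvm_x86_exception_allowed(vcpu))
		return -EBUSY;

	/* L1 intercepts the exception: morph it into a VM-Exit. */
	if (kvm_l1_wants_exception_vmexit(vcpu, ex->nr))
		return kvm_x86_deliver_exception_as_vmexit(vcpu, ex);

	trace_kvm_inj_exception(ex->nr, ex->has_error_code, ex->error_code);
	static_call(kvm_x86_queue_exception)(vcpu);
	ex->valid = false;
	return 0;
}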

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/4] KVM: x86: separate pending and injected exception
  2021-04-01 14:38 ` [PATCH 2/4] KVM: x86: separate pending and injected exception Maxim Levitsky
@ 2021-04-01 23:05   ` Sean Christopherson
  2021-04-02  7:14     ` Paolo Bonzini
  0 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2021-04-01 23:05 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Thomas Gleixner, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
	Jim Mattson, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> Use 'pending_exception' and 'injected_exception' fields
> to store the pending and the injected exceptions.
> 
> After this patch, still only one of them is active at a time, but
> after the next patch both can co-exist in some cases.

Please explain _why_.  

> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  25 ++++--
>  arch/x86/kvm/svm/nested.c       |  26 +++---
>  arch/x86/kvm/svm/svm.c          |   6 +-
>  arch/x86/kvm/vmx/nested.c       |  36 ++++----
>  arch/x86/kvm/vmx/vmx.c          |  12 +--
>  arch/x86/kvm/x86.c              | 145 ++++++++++++++++++--------------
>  arch/x86/kvm/x86.h              |   6 +-
>  7 files changed, 143 insertions(+), 113 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index a52f973bdff6..3b2fd276e8d5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -547,6 +547,14 @@ struct kvm_vcpu_xen {
>  	u64 runstate_times[4];
>  };
>  
> +struct kvm_queued_exception {
> +	bool valid;
> +	u8 nr;

If we're refactoring all this code anyways, maybe change "nr" to something a
bit more descriptive?  E.g. vector.

> +	bool has_error_code;
> +	u32 error_code;
> +};
> +
> +
>  struct kvm_vcpu_arch {
>  	/*
>  	 * rip and regs accesses must go through
> @@ -645,16 +653,15 @@ struct kvm_vcpu_arch {
>  
>  	u8 event_exit_inst_len;
>  
> -	struct kvm_queued_exception {
> -		bool pending;
> -		bool injected;
> -		bool has_error_code;
> -		u8 nr;
> -		u32 error_code;
> -		unsigned long payload;
> -		bool has_payload;
> +	struct kvm_queued_exception pending_exception;
> +
> +	struct kvm_exception_payload {
> +		bool valid;
> +		unsigned long value;
>  		u8 nested_apf;
> -	} exception;
> +	} exception_payload;

Hmm, even if it's dead code at this time, I think the exception payload should
be part of 'struct kvm_queued_exception'.  The payload is very much tied to a
single exception.

> +
> +	struct kvm_queued_exception injected_exception;

Any objection to keeping the current syntax, arch.exception.{pending,injected}?
Maybe it's fear of change, but I like the current style, I think because the
relevant info is condensed at the end, e.g. I can ignore "vcpu->arch.exception"
and look at "pending.vector" or whatever.  E.g.

	struct {
		struct kvm_queued_exception pending;
		struct kvm_queued_exception injected;
	} exception;
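
As a usage sketch under that layout (illustrative only):

	/* e.g. when deciding whether anything is left to deliver */
	if (vcpu->arch.exception.pending.valid &&
	    !vcpu->arch.exception.injected.valid)
		kvm_inject_pending(vcpu);	/* hypothetical helper */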
>  
>  	struct kvm_queued_interrupt {
>  		bool injected;

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/4] KVM: x86: separate pending and injected exception
  2021-04-01 23:05   ` Sean Christopherson
@ 2021-04-02  7:14     ` Paolo Bonzini
  2021-04-02 15:01       ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2021-04-02  7:14 UTC (permalink / raw)
  To: Sean Christopherson, Maxim Levitsky
  Cc: kvm, Thomas Gleixner, Wanpeng Li, Borislav Petkov, Jim Mattson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On 02/04/21 01:05, Sean Christopherson wrote:
>>
>> +struct kvm_queued_exception {
>> +	bool valid;
>> +	u8 nr;
> 
> If we're refactoring all this code anyways, maybe change "nr" to something a
> bit more descriptive?  E.g. vector.

"nr" is part of the userspace structure, so consistency is an advantage too.

>> +	struct kvm_exception_payload {
>> +		bool valid;
>> +		unsigned long value;
>>   		u8 nested_apf;
>> -	} exception;
>> +	} exception_payload;
> 
> Hmm, even if it's dead code at this time, I think the exception payload should
> be part of 'struct kvm_queued_exception'.  The payload is very much tied to a
> single exception.

Agreed, when handling injected exceptions you can WARN that there is no 
payload.
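
Something along these lines, as a sketch (field names follow the
series, with the payload folded in; the WARN placement is
illustrative):

struct kvm_queued_exception {
	bool valid;
	u8 nr;
	bool has_error_code;
	u32 error_code;
	/* Folded-in payload, so it stays tied to its exception. */
	bool has_payload;
	unsigned long payload;
};

	/* On the injection path: the payload must already be delivered. */
	WARN_ON_ONCE(vcpu->arch.injected_exception.has_payload);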

Paolo


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/4] KVM: x86: separate pending and injected exception
  2021-04-02  7:14     ` Paolo Bonzini
@ 2021-04-02 15:01       ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2021-04-02 15:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Maxim Levitsky, kvm, Thomas Gleixner, Wanpeng Li,
	Borislav Petkov, Jim Mattson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Vitaly Kuznetsov, H. Peter Anvin, Joerg Roedel, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Fri, Apr 02, 2021, Paolo Bonzini wrote:
> On 02/04/21 01:05, Sean Christopherson wrote:
> > > 
> > > +struct kvm_queued_exception {
> > > +	bool valid;
> > > +	u8 nr;
> > 
> > If we're refactoring all this code anyways, maybe change "nr" to something a
> > bit more descriptive?  E.g. vector.
> 
> "nr" is part of the userspace structure, so consistency is an advantage too.

Foiled at every turn.  Keeping "nr" probably does make sense.

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread

Thread overview: 12+ messages
2021-04-01 14:38 [PATCH 0/4] KVM: nSVM/nVMX: fix nested virtualization treatment of nested exceptions Maxim Levitsky
2021-04-01 14:38 ` [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event Maxim Levitsky
2021-04-01 17:05   ` Paolo Bonzini
2021-04-01 17:12     ` Maxim Levitsky
2021-04-01 14:38 ` [PATCH 2/4] KVM: x86: separate pending and injected exception Maxim Levitsky
2021-04-01 23:05   ` Sean Christopherson
2021-04-02  7:14     ` Paolo Bonzini
2021-04-02 15:01       ` Sean Christopherson
2021-04-01 14:38 ` [PATCH 3/4] KVM: x86: correctly merge " Maxim Levitsky
2021-04-01 19:56   ` Paolo Bonzini
2021-04-01 22:56     ` Sean Christopherson
2021-04-01 14:38 ` [PATCH 4/4] KVM: x86: remove tweaking of inject_page_fault Maxim Levitsky
