kvm.vger.kernel.org archive mirror
* [PATCH v2 1/2] KVM: VMX: use kvm_event_needs_reinjection
@ 2017-08-23 10:23 Wanpeng Li
  2017-08-23 10:23 ` [PATCH v2 2/2] KVM: X86: Fix loss of exception which has not yet been injected Wanpeng Li
  0 siblings, 1 reply; 4+ messages in thread
From: Wanpeng Li @ 2017-08-23 10:23 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Wanpeng Li

From: Wanpeng Li <wanpeng.li@hotmail.com>

Use kvm_event_needs_reinjection() encapsulation.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
---
 arch/x86/kvm/vmx.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c2e7c4b..a56e7f3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10931,9 +10931,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	if (vcpu->arch.exception.pending ||
-		vcpu->arch.nmi_injected ||
-		vcpu->arch.interrupt.pending)
+	if (kvm_event_needs_reinjection(vcpu))
 		return -EBUSY;
 
 	if (nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
-- 
2.7.4


* [PATCH v2 2/2] KVM: X86: Fix loss of exception which has not yet been injected
  2017-08-23 10:23 [PATCH v2 1/2] KVM: VMX: use kvm_event_needs_reinjection Wanpeng Li
@ 2017-08-23 10:23 ` Wanpeng Li
  2017-08-23 12:27   ` Paolo Bonzini
  0 siblings, 1 reply; 4+ messages in thread
From: Wanpeng Li @ 2017-08-23 10:23 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Wanpeng Li

From: Wanpeng Li <wanpeng.li@hotmail.com>

vmx_complete_interrupts() assumes that the exception is always injected,
so an exception that is still only queued would be dropped by
kvm_clear_exception_queue(). This patch separates exception.pending from
exception.injected: exception.injected means the exception has been
injected, or must be reinjected because a vmexit occurred during event
delivery in VMX non-root operation; exception.pending means the exception
is queued and is cleared only when the exception is actually injected into
the guest. Together, exception.pending and exception.injected guarantee
that an exception cannot be lost.
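
To illustrate the lifecycle described above, here is a minimal user-space
model. It is only a sketch: the struct and the helpers queue_exception(),
deliver_exception() and complete_interrupts() are invented stand-ins for
kvm_multiple_exception(), the injection path and kvm_clear_exception_queue(),
not the real kernel code.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model of struct kvm_queued_exception after this patch. */
struct queued_exception {
        bool pending;   /* queued, not delivered to the guest yet */
        bool injected;  /* injected, or must be reinjected after a vmexit */
        unsigned nr;
};

/* A new exception becomes pending. */
static void queue_exception(struct queued_exception *e, unsigned nr)
{
        e->pending = true;
        e->nr = nr;
}

/* Successful injection: pending -> injected. */
static void deliver_exception(struct queued_exception *e)
{
        if (e->pending) {
                e->pending = false;
                e->injected = true;
        }
}

/* On vmexit, only the injected state is stale and gets cleared. */
static void complete_interrupts(struct queued_exception *e)
{
        e->injected = false;
}

int main(void)
{
        struct queued_exception e = { 0 };

        /* Exception queued, then a vmexit happens before it is injected. */
        queue_exception(&e, 14);        /* e.g. #PF */
        complete_interrupts(&e);

        /* With a single "pending" flag this exception would be gone;
         * with separate flags it is still queued for the next entry. */
        assert(e.pending && !e.injected);

        deliver_exception(&e);
        assert(!e.pending && e.injected);

        printf("exception %u survived the vmexit and was injected\n", e.nr);
        return 0;
}

Built on its own, the asserts show the point of the patch: a vmexit that
happens before delivery clears only the injected flag, so a queued but not
yet injected exception survives to the next vmentry.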

Reported-by: Radim Krčmář <rkrcmar@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
---
Note: I will complete the qemu part for VCPU_EVENTS after this patch is in 
an acceptable state.

v1 -> v2:
 * fix svm compile error

 Documentation/virtual/kvm/api.txt     |  2 +-
 arch/x86/include/asm/kvm_host.h       |  2 +-
 arch/x86/include/uapi/asm/kvm.h       |  2 +-
 arch/x86/kvm/svm.c                    |  2 +-
 arch/x86/kvm/vmx.c                    |  4 ++--
 arch/x86/kvm/x86.c                    | 21 ++++++++++++++++-----
 arch/x86/kvm/x86.h                    |  4 ++--
 tools/arch/x86/include/uapi/asm/kvm.h |  2 +-
 8 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index e63a35f..6bfb178 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -830,9 +830,9 @@ states of the vcpu.
 struct kvm_vcpu_events {
 	struct {
 		__u8 injected;
+		__u8 pending;
 		__u8 nr;
 		__u8 has_error_code;
-		__u8 pad;
 		__u32 error_code;
 	} exception;
 	struct {
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e1e6f00..f3899a3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -546,8 +546,8 @@ struct kvm_vcpu_arch {
 
 	struct kvm_queued_exception {
 		bool pending;
+		bool injected;
 		bool has_error_code;
-		bool reinject;
 		u8 nr;
 		u32 error_code;
 		u8 nested_apf;
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index c2824d0..7c55916 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -296,9 +296,9 @@ struct kvm_reinject_control {
 struct kvm_vcpu_events {
 	struct {
 		__u8 injected;
+		__u8 pending;
 		__u8 nr;
 		__u8 has_error_code;
-		__u8 pad;
 		__u32 error_code;
 	} exception;
 	struct {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c4634fd..4b17c17 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -637,7 +637,7 @@ static int svm_queue_exception(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	unsigned nr = vcpu->arch.exception.nr;
 	bool has_error_code = vcpu->arch.exception.has_error_code;
-	bool reinject = vcpu->arch.exception.reinject;
+	bool reinject = vcpu->arch.exception.injected;
 	u32 error_code = vcpu->arch.exception.error_code;
 
 	/*
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index a56e7f3..f96829c 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2510,7 +2510,7 @@ static int vmx_queue_exception(struct kvm_vcpu *vcpu)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned nr = vcpu->arch.exception.nr;
 	bool has_error_code = vcpu->arch.exception.has_error_code;
-	bool reinject = vcpu->arch.exception.reinject;
+	bool reinject = vcpu->arch.exception.injected;
 	u32 error_code = vcpu->arch.exception.error_code;
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
@@ -10891,7 +10891,7 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
 	u32 idt_vectoring;
 	unsigned int nr;
 
-	if (vcpu->arch.exception.pending && vcpu->arch.exception.reinject) {
+	if (vcpu->arch.exception.injected) {
 		nr = vcpu->arch.exception.nr;
 		idt_vectoring = nr | VECTORING_INFO_VALID_MASK;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7414c26..5c680f1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -389,15 +389,18 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
-	if (!vcpu->arch.exception.pending) {
+	if (!vcpu->arch.exception.pending ||
+		!vcpu->arch.exception.injected) {
 	queue:
 		if (has_error && !is_protmode(vcpu))
 			has_error = false;
-		vcpu->arch.exception.pending = true;
+		if (reinject)
+			vcpu->arch.exception.injected = true;
+		else
+			vcpu->arch.exception.pending = true;
 		vcpu->arch.exception.has_error_code = has_error;
 		vcpu->arch.exception.nr = nr;
 		vcpu->arch.exception.error_code = error_code;
-		vcpu->arch.exception.reinject = reinject;
 		return;
 	}
 
@@ -3069,12 +3072,12 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 					       struct kvm_vcpu_events *events)
 {
 	process_nmi(vcpu);
+	events->exception.injected = vcpu->arch.exception.injected;
 	events->exception.injected =
 		vcpu->arch.exception.pending &&
 		!kvm_exception_is_soft(vcpu->arch.exception.nr);
 	events->exception.nr = vcpu->arch.exception.nr;
 	events->exception.has_error_code = vcpu->arch.exception.has_error_code;
-	events->exception.pad = 0;
 	events->exception.error_code = vcpu->arch.exception.error_code;
 
 	events->interrupt.injected =
@@ -3125,6 +3128,7 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	process_nmi(vcpu);
+	vcpu->arch.exception.injected = events->exception.injected;
 	vcpu->arch.exception.pending = events->exception.injected;
 	vcpu->arch.exception.nr = events->exception.nr;
 	vcpu->arch.exception.has_error_code = events->exception.has_error_code;
@@ -6341,7 +6345,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 	int r;
 
 	/* try to reinject previous events if any */
-	if (vcpu->arch.exception.pending) {
+	if (vcpu->arch.exception.pending ||
+		vcpu->arch.exception.injected) {
 		trace_kvm_inj_exception(vcpu->arch.exception.nr,
 					vcpu->arch.exception.has_error_code,
 					vcpu->arch.exception.error_code);
@@ -6357,6 +6362,11 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 		}
 
 		r = kvm_x86_ops->queue_exception(vcpu);
+		if (r == 0 && vcpu->arch.exception.pending &&
+			!vcpu->arch.exception.injected) {
+			vcpu->arch.exception.pending = false;
+			vcpu->arch.exception.injected = true;
+		}
 		return r;
 	}
 
@@ -7727,6 +7737,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->arch.nmi_injected = false;
 	kvm_clear_interrupt_queue(vcpu);
 	kvm_clear_exception_queue(vcpu);
+	vcpu->arch.exception.pending = false;
 
 	memset(vcpu->arch.db, 0, sizeof(vcpu->arch.db));
 	kvm_update_dr0123(vcpu);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 1134603..e6ec0de 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -11,7 +11,7 @@
 
 static inline void kvm_clear_exception_queue(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.exception.pending = false;
+	vcpu->arch.exception.injected = false;
 }
 
 static inline void kvm_queue_interrupt(struct kvm_vcpu *vcpu, u8 vector,
@@ -29,7 +29,7 @@ static inline void kvm_clear_interrupt_queue(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_event_needs_reinjection(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.exception.pending || vcpu->arch.interrupt.pending ||
+	return vcpu->arch.exception.injected || vcpu->arch.interrupt.pending ||
 		vcpu->arch.nmi_injected;
 }
 
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index c2824d0..7c55916 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -296,9 +296,9 @@ struct kvm_reinject_control {
 struct kvm_vcpu_events {
 	struct {
 		__u8 injected;
+		__u8 pending;
 		__u8 nr;
 		__u8 has_error_code;
-		__u8 pad;
 		__u32 error_code;
 	} exception;
 	struct {
-- 
2.7.4


* Re: [PATCH v2 2/2] KVM: X86: Fix loss of exception which has not yet been injected
  2017-08-23 10:23 ` [PATCH v2 2/2] KVM: X86: Fix loss of exception which has not yet been injected Wanpeng Li
@ 2017-08-23 12:27   ` Paolo Bonzini
  2017-08-24  4:30     ` Wanpeng Li
  0 siblings, 1 reply; 4+ messages in thread
From: Paolo Bonzini @ 2017-08-23 12:27 UTC (permalink / raw)
  To: Wanpeng Li, linux-kernel, kvm; +Cc: Radim Krčmář, Wanpeng Li

On 23/08/2017 12:23, Wanpeng Li wrote:
> @@ -6341,7 +6345,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
>  	int r;
>  
>  	/* try to reinject previous events if any */
> -	if (vcpu->arch.exception.pending) {
> +	if (vcpu->arch.exception.pending ||
> +		vcpu->arch.exception.injected) {
>  		trace_kvm_inj_exception(vcpu->arch.exception.nr,
>  					vcpu->arch.exception.has_error_code,
>  					vcpu->arch.exception.error_code);
> @@ -6357,6 +6362,11 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
>  		}
>  
>  		r = kvm_x86_ops->queue_exception(vcpu);
> +		if (r == 0 && vcpu->arch.exception.pending &&
> +			!vcpu->arch.exception.injected) {
> +			vcpu->arch.exception.pending = false;
> +			vcpu->arch.exception.injected = true;
> +		}
>  		return r;
>  	}
>  

There are some changes needed here:

- this "if" should check only .injected and call
kvm_x86_ops->queue_exception.  The "if" for .pending, which handles
rflags/dr7, assigns false to .pending and true to .injected, and also
calls kvm_x86_ops->queue_exception, should be after the call to
check_nested_events.

- in the call to inject_pending_event, the "else" should have a

	WARN_ON(vcpu->arch.exception.pending);

just for completeness.
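
Taken together, the two items above would give inject_pending_event()
roughly the shape below. This is only a hedged, stand-alone sketch: struct
vcpu and the two helpers are stand-ins for the real kvm_vcpu,
kvm_x86_ops->queue_exception and kvm_x86_ops->check_nested_events, the NMI
and interrupt legs are elided, and only the ordering is the point.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for kvm_vcpu and the relevant kvm_x86_ops callbacks. */
struct vcpu {
        struct { bool pending, injected; unsigned nr; } exception;
};

static int queue_exception(struct vcpu *v)
{
        printf("queue exception %u (%s)\n", v->exception.nr,
               v->exception.injected ? "reinject" : "fresh");
        return 0;
}

static int check_nested_events(struct vcpu *v)
{
        return 0;       /* would return -EBUSY if a vmexit must happen first */
}

static int inject_pending_event(struct vcpu *v)
{
        int r;

        /* 1. Reinjection first: an exception that was already injected when
         *    the last vmexit interrupted delivery goes straight back in
         *    (NMI and interrupt reinjection are elided here). */
        if (v->exception.injected)
                return queue_exception(v);

        /* 2. Give nested events a chance to turn a pending exception into a
         *    vmexit before anything new is injected. */
        r = check_nested_events(v);
        if (r)
                return r;

        /* 3. Only now does a pending exception become injected; the real
         *    code would also do its rflags/dr7 handling at this point. */
        if (v->exception.pending) {
                v->exception.pending = false;
                v->exception.injected = true;
                return queue_exception(v);
        }

        return 0;
}

int main(void)
{
        struct vcpu v = { .exception = { .pending = true, .nr = 6 } };

        return inject_pending_event(&v);
}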


Also, nested_vmx_check_exception has to be moved from
vmx_queue_exception to vmx_check_nested_events, so that
nested_run_pending is checked correctly.  Something like this:

        if (vcpu->arch.exception.pending &&
	    nested_vmx_check_exception(vcpu, &exit_qual)) {
		if (vmx->nested.nested_run_pending)
                        return -EBUSY;

		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
		return 0;
        }

Maybe you can have:

- patch 1 which I'll apply now

- patch 2 is the same as this one, with only the above changes to
inject_pending_event and the WARN_ON.

- patch 3 moves the nested_vmx_inject_exception_vmexit call from
nested_vmx_check_exception to vmx_queue_exception.

- patch 4 moves the code to vmx_check_nested_events and adds the
nested_run_pending check, fixing the original bug.

As to GET/SET_VCPU_EVENTS, for now I would not do any change.  Instead,
we can do

	/*
	 * FIXME: pass injected and pending separately.  This is only
	 * needed for nested virtualization, whose state cannot be
	 * migrated yet.  For now we combine them just in case.
	 */
	events->exception.injected =
		(vcpu->arch.exception.pending ||
		 vcpu->arch.exception.injected) &&
		!kvm_exception_is_soft(vcpu->arch.exception.nr);
	}

and set_vcpu_events can just set .injected = false.  Separating the two
will need a separate capability and KVM_ENABLE_CAP.
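
For reference, a user-space VMM reads this structure with the
KVM_GET_VCPU_EVENTS ioctl roughly as below. The sketch assumes a
<linux/kvm.h> that already carries the new "pending" field from this patch
(otherwise the member does not exist), and it skips capability checks and
cleanup.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
        struct kvm_vcpu_events events;
        int kvm, vm, vcpu;

        kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0)
                return 1;
        vm = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vm < 0)
                return 1;
        vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
        if (vcpu < 0)
                return 1;

        if (ioctl(vcpu, KVM_GET_VCPU_EVENTS, &events) < 0) {
                perror("KVM_GET_VCPU_EVENTS");
                return 1;
        }

        /* With this patch the exception state is two separate flags. */
        printf("exception: injected=%d pending=%d nr=%d\n",
               events.exception.injected, events.exception.pending,
               events.exception.nr);
        return 0;
}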

Paolo

> 
> diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
> index c2824d0..7c55916 100644
> --- a/tools/arch/x86/include/uapi/asm/kvm.h
> +++ b/tools/arch/x86/include/uapi/asm/kvm.h
> @@ -296,9 +296,9 @@ struct kvm_reinject_control {
>  struct kvm_vcpu_events {
>  	struct {
>  		__u8 injected;
> +		__u8 pending;
>  		__u8 nr;
>  		__u8 has_error_code;
> -		__u8 pad;
>  		__u32 error_code;
>  	} exception;
>  	struct {

I think you don't need this.


* Re: [PATCH v2 2/2] KVM: X86: Fix loss of exception which has not yet been injected
  2017-08-23 12:27   ` Paolo Bonzini
@ 2017-08-24  4:30     ` Wanpeng Li
  0 siblings, 0 replies; 4+ messages in thread
From: Wanpeng Li @ 2017-08-24  4:30 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Radim Krčmář, Wanpeng Li

2017-08-23 20:27 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> On 23/08/2017 12:23, Wanpeng Li wrote:
>> @@ -6341,7 +6345,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
>>       int r;
>>
>>       /* try to reinject previous events if any */
>> -     if (vcpu->arch.exception.pending) {
>> +     if (vcpu->arch.exception.pending ||
>> +             vcpu->arch.exception.injected) {
>>               trace_kvm_inj_exception(vcpu->arch.exception.nr,
>>                                       vcpu->arch.exception.has_error_code,
>>                                       vcpu->arch.exception.error_code);
>> @@ -6357,6 +6362,11 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
>>               }
>>
>>               r = kvm_x86_ops->queue_exception(vcpu);
>> +             if (r == 0 && vcpu->arch.exception.pending &&
>> +                     !vcpu->arch.exception.injected) {
>> +                     vcpu->arch.exception.pending = false;
>> +                     vcpu->arch.exception.injected = true;
>> +             }
>>               return r;
>>       }
>>
>
> There are some changes needed here:
>
> - this "if" should check only .injected and call
> kvm_x86_ops->queue_exception.  The "if" for .pending, which handles
> rflags/dr7, assigns false to .pending and true to .injected, and also
> calls kvm_x86_ops->queue_exception, should be after the call to
> check_nested_events.
>
> - in the call to inject_pending_event, the "else" should have a
>
>         WARN_ON(vcpu->arch.exception.pending);
>
> just for completeness.
>
>
> Also, nested_vmx_check_exception has to be moved from
> vmx_queue_exception to vmx_check_nested_events, so that
> nested_run_pending is checked correctly.  Something like this:
>
>         if (vcpu->arch.exception.pending &&
>             nested_vmx_check_exception(vcpu, &exit_qual)) {
>                 if (vmx->nested.nested_run_pending)
>                         return -EBUSY;
>
>                 nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
>                 return 0;
>         }
>
> Maybe you can have:
>
> - patch 1 which I'll apply now
>
> - patch 2 is the same as this one, with only the above changes to
> inject_pending_event and the WARN_ON.
>
> - patch 3 moves the nested_vmx_inject_exception_vmexit call from
> nested_vmx_check_exception to vmx_queue_exception.
>
> - patch 4 moves the code to vmx_check_nested_events and adds the
> nested_run_pending check, fixing the original bug.
>
> As to GET/SET_VCPU_EVENTS, for now I would not do any change.  Instead,
> we can do
>
>         /*
>          * FIXME: pass injected and pending separately.  This is only
>          * needed for nested virtualization, whose state cannot be
>          * migrated yet.  For now we combine them just in case.
>          */
>         events->exception.injected =
>                 (vcpu->arch.exception.pending ||
>                  vcpu->arch.exception.injected) &&
>                 !kvm_exception_is_soft(vcpu->arch.exception.nr);
>         }
>

I have addressed all the comments in v3, thanks for the review. :)

Regards,
Wanpeng Li

> and set_vcpu_events can just set .injected = false.  Separating the two
> will need a separate capability and KVM_ENABLE_CAP.
>
> Paolo
>
>>
>> diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
>> index c2824d0..7c55916 100644
>> --- a/tools/arch/x86/include/uapi/asm/kvm.h
>> +++ b/tools/arch/x86/include/uapi/asm/kvm.h
>> @@ -296,9 +296,9 @@ struct kvm_reinject_control {
>>  struct kvm_vcpu_events {
>>       struct {
>>               __u8 injected;
>> +             __u8 pending;
>>               __u8 nr;
>>               __u8 has_error_code;
>> -             __u8 pad;
>>               __u32 error_code;
>>       } exception;
>>       struct {
>
> I think you don't need this.

Agreed.

