linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC] KVM: async_pf: fix async_pf exception injection
@ 2017-06-08  9:30 Wanpeng Li
  2017-06-08 11:52 ` Paolo Bonzini
  0 siblings, 1 reply; 6+ messages in thread
From: Wanpeng Li @ 2017-06-08  9:30 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Wanpeng Li

 INFO: task gnome-terminal-:1734 blocked for more than 120 seconds.
       Not tainted 4.12.0-rc4+ #8
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 gnome-terminal- D    0  1734   1015 0x00000000
 Call Trace:
  __schedule+0x3cd/0xb30
  schedule+0x40/0x90
  kvm_async_pf_task_wait+0x1cc/0x270
  ? __vfs_read+0x37/0x150
  ? prepare_to_swait+0x22/0x70
  do_async_page_fault+0x77/0xb0
  ? do_async_page_fault+0x77/0xb0
  async_page_fault+0x28/0x30

This is triggered by running win7 and win2016 guests simultaneously on an L1 KVM
and then stressing memory on L1; I can observe this hang on L1 once at least
~70% of the swap area on L0 is occupied.

This happens because an async page fault that should have been injected into L1
is injected into L2 instead: the L2 guest starts receiving page faults with a
bogus %cr2 (actually the apf token from the host), and the L1 guest starts
accumulating tasks stuck in D state in kvm_async_pf_task_wait().

I tried to fix it according to Radim's proposal: "force a nested VM exit from
nested_vmx_check_exception() if the injected #PF is async_pf and handle the #PF
VM exit in L1". https://www.spinics.net/lists/kvm/msg142498.html

However, I found that "nr == PF_VECTOR && vmx->apf_reason != 0" is never true
in nested_vmx_check_exception(). SVM depends on a similar check in
nested_svm_intercept(), which leaves me confused about how it can work. In
addition, vmx/svm->apf_reason could only be filled in if the kernel itself ran
as an L1 guest, since apf_reason.reason is only meaningful in a PV guest; so
vmx/svm->apf_reason should always be 0 on a bare-metal L0. I changed the
condition to "nr == PF_VECTOR && error_code == 0" to intercept async_pf;
however, the bug below is splatted:

 BUG: unable to handle kernel paging request at ffffe305770a87e0
 IP: kfree+0x6f/0x300
 PGD 0 
 P4D 0 
 
 Oops: 0000 [#1] PREEMPT SMP
 CPU: 3 PID: 2187 Comm: transhuge-stres Tainted: G           OE   4.12.0-rc4+ #9
 task: ffff8a9214b58000 task.stack: ffffb46bc34e4000
 RIP: 0010:kfree+0x6f/0x300
 RSP: 0000:ffffb46bc34e7b28 EFLAGS: 00010086
 RAX: ffffe305770a87c0 RBX: ffffb46bc2a1fe70 RCX: 0000000000000001
 RDX: 0000757180000000 RSI: 00000000ffffffff RDI: 0000000000000096
 RBP: ffffb46bc34e7b50 R08: 0000000000000000 R09: 0000000000000001
 R10: ffffb46bc34e7ac8 R11: 68b9962a00000000 R12: 000000a7770a87c0
 R13: ffffffff90059b75 R14: ffffffff913466c0 R15: ffffe25e06f18000
 FS:  00007f1904ae7700(0000) GS:ffff8a921a800000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: ffffe305770a87e0 CR3: 000000040eb5c000 CR4: 00000000001426e0
 Call Trace:
  kvm_async_pf_task_wait+0xd5/0x280
  ? __this_cpu_preempt_check+0x13/0x20
  do_async_page_fault+0x77/0xb0
  ? do_async_page_fault+0x77/0xb0
  async_page_fault+0x28/0x30

In addition, if svm->apf_reason doesn't make sense on L0, then it probably does
not work in nested_svm_exit_special() either.

The patch below is incomplete, and your input to improve it would be greatly
appreciated.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
---
 arch/x86/kvm/vmx.c | 41 ++++++++++++++++++++++++++++++++---------
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ca5d2b9..21a1b44 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -616,6 +616,7 @@ struct vcpu_vmx {
 	bool emulation_required;
 
 	u32 exit_reason;
+	u32 apf_reason;
 
 	/* Posted interrupt descriptor */
 	struct pi_desc pi_desc;
@@ -2418,11 +2419,12 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
  * KVM wants to inject page-faults which it got to the guest. This function
  * checks whether in a nested guest, we need to inject them to L1 or L2.
  */
-static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned nr)
+static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 
-	if (!(vmcs12->exception_bitmap & (1u << nr)))
+	if (!((vmcs12->exception_bitmap & (1u << nr)) ||
+		(nr == PF_VECTOR && error_code == 0)))
 		return 0;
 
 	nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
@@ -2439,7 +2441,7 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
 	if (!reinject && is_guest_mode(vcpu) &&
-	    nested_vmx_check_exception(vcpu, nr))
+	    nested_vmx_check_exception(vcpu, nr, error_code))
 		return;
 
 	if (has_error_code) {
@@ -5646,14 +5648,31 @@ static int handle_exception(struct kvm_vcpu *vcpu)
 	}
 
 	if (is_page_fault(intr_info)) {
-		/* EPT won't cause page fault directly */
-		BUG_ON(enable_ept);
 		cr2 = vmcs_readl(EXIT_QUALIFICATION);
-		trace_kvm_page_fault(cr2, error_code);
+		switch (vmx->apf_reason) {
+		default:
+			/* EPT won't cause page fault directly */
+			BUG_ON(enable_ept);
+			trace_kvm_page_fault(cr2, error_code);
 
-		if (kvm_event_needs_reinjection(vcpu))
-			kvm_mmu_unprotect_page_virt(vcpu, cr2);
-		return kvm_mmu_page_fault(vcpu, cr2, error_code, NULL, 0);
+			if (kvm_event_needs_reinjection(vcpu))
+				kvm_mmu_unprotect_page_virt(vcpu, cr2);
+			return kvm_mmu_page_fault(vcpu, cr2, error_code, NULL, 0);
+			break;
+		case KVM_PV_REASON_PAGE_NOT_PRESENT:
+			vmx->apf_reason = 0;
+			local_irq_disable();
+			kvm_async_pf_task_wait(cr2);
+			local_irq_enable();
+			break;
+		case KVM_PV_REASON_PAGE_READY:
+			vmx->apf_reason = 0;
+			local_irq_disable();
+			kvm_async_pf_task_wake(cr2);
+			local_irq_enable();
+			break;
+		}
+		return 0;
 	}
 
 	ex_no = intr_info & INTR_INFO_VECTOR_MASK;
@@ -8600,6 +8619,10 @@ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
 	vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
 	exit_intr_info = vmx->exit_intr_info;
 
+	/* if exit due to PF check for async PF */
+	if (is_page_fault(exit_intr_info))
+		vmx->apf_reason = kvm_read_and_reset_pf_reason();
+
 	/* Handle machine checks before interrupts are enabled */
 	if (is_machine_check(exit_intr_info))
 		kvm_machine_check();
-- 
2.7.4


Thread overview: 6+ messages
2017-06-08  9:30 [PATCH RFC] KVM: async_pf: fix async_pf exception injection Wanpeng Li
2017-06-08 11:52 ` Paolo Bonzini
2017-06-08 12:32   ` Wanpeng Li
2017-06-08 12:48     ` Paolo Bonzini
2017-06-09  5:30   ` Wanpeng Li
2017-06-09 11:32     ` Paolo Bonzini
