From: Borislav Petkov <bp@alien8.de>
To: Jan Kiszka <jan.kiszka@web.de>
Cc: "Gleb Natapov" <gleb@kernel.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
lkml <linux-kernel@vger.kernel.org>,
"Peter Zijlstra" <peterz@infradead.org>,
"Steven Rostedt" <rostedt@goodmis.org>, x86-ml <x86@kernel.org>,
kvm@vger.kernel.org, "Jörg Rödel" <joro@8bytes.org>
Subject: Re: __schedule #DF splat
Date: Sun, 29 Jun 2014 15:14:43 +0200
Message-ID: <20140629131443.GA5199@pd.tnic>
In-Reply-To: <53B0050B.90104@web.de>
On Sun, Jun 29, 2014 at 02:22:35PM +0200, Jan Kiszka wrote:
> OK, looks like I won ;):
I gladly let you win. :-P
> The issue was apparently introduced with "KVM: x86: get CPL from
> SS.DPL" (ae9fedc793). Maybe we are not properly saving or restoring
> this state on SVM since then.
I wonder if this change in the CPL saving has anything to do with
the fact that we're doing a CR3 write right before we fail the
pagetable walk and end up walking a user page table. It could be
unrelated though, as in the previous dump I had a get_user right
before the #DF. Hmmm.
I'd better go and revert that one and check whether it fixes things.
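For reference, the gist of ae9fedc793 is to read the CPL out of the
cached SS descriptor instead of tracking it separately. Roughly this,
paraphrased from memory, not the exact upstream code:

/*
 * CPL == SS.DPL in protected mode, so derive it from the segment
 * cache instead of keeping a separate cpl field in sync.
 */
static int get_cpl_from_ss(struct kvm_vcpu *vcpu)
{
	struct kvm_segment ss;

	kvm_get_segment(vcpu, &ss, VCPU_SREG_SS);
	return ss.dpl;
}

The catch on SVM, AFAIU, is that the VMCB keeps the CPL in a separate
save.cpl field and doesn't guarantee that the SS attributes' DPL stays
in sync with it, so a get/set_segment round trip can end up reporting
the wrong CPL.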
> Need a break, will look into details later.
Ok, some more info from my side; see the relevant snippet below. We're
basically not finding the pte at level 3 during the page walk for
7fff0b0f8908.
However, why we're even page-walking this userspace address at that
point, I have no idea.
And the CR3 write right before this happens is there too, so I'm
pretty sure by now that the two are related...
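To decode the walker error in the trace below: the level 4 entry is
7b2b6067, i.e. present (bit 0 set), but the level 3 entry reads back
as 0, i.e. not present, so the walk aborts and a #PF with error code 2
(write) gets queued. A minimal sketch of such a 4-level walk, just to
illustrate (read_guest_phys() is a made-up helper here; this is not
the actual FNAME(walk_addr_generic) code):

#define PTE_PRESENT	(1ULL << 0)

/*
 * Minimal 4-level walk: index each level with 9 bits of the GVA,
 * starting at bit 39, and bail out on the first non-present entry.
 */
static bool walk_gva(u64 cr3, u64 gva)
{
	u64 table = cr3 & ~0xfffULL;
	int level;

	for (level = 4; level >= 1; level--) {
		int idx = (gva >> (12 + 9 * (level - 1))) & 0x1ff;
		u64 pte = read_guest_phys(table + idx * 8);

		if (!(pte & PTE_PRESENT))
			return false;		/* walker error -> #PF */
		table = pte & ~0xfffULL;	/* ignoring large pages */
	}
	return true;
}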
qemu-system-x86-5007 [007] ...1 346.126204: vcpu_match_mmio: gva 0xffffffffff5fd0b0 gpa 0xfee000b0 Write GVA
qemu-system-x86-5007 [007] ...1 346.126204: kvm_mmio: mmio write len 4 gpa 0xfee000b0 val 0x0
qemu-system-x86-5007 [007] ...1 346.126205: kvm_apic: apic_write APIC_EOI = 0x0
qemu-system-x86-5007 [007] ...1 346.126205: kvm_eoi: apicid 0 vector 253
qemu-system-x86-5007 [007] d..2 346.126206: kvm_entry: vcpu 0
qemu-system-x86-5007 [007] d..2 346.126211: kvm_exit: reason write_cr3 rip 0xffffffff816113a0 info 8000000000000000 0
qemu-system-x86-5007 [007] ...2 346.126214: kvm_mmu_get_page: sp gen 25 gfn 7b2b1 4 pae q0 wux !nxe root 0 sync existing
qemu-system-x86-5007 [007] d..2 346.126215: kvm_entry: vcpu 0
qemu-system-x86-5007 [007] d..2 346.126216: kvm_exit: reason PF excp rip 0xffffffff816113df info 2 7fff0b0f8908
qemu-system-x86-5007 [007] ...1 346.126217: kvm_page_fault: address 7fff0b0f8908 error_code 2
qemu-system-x86-5007 [007] ...1 346.126218: kvm_mmu_pagetable_walk: addr 7fff0b0f8908 pferr 2 W
qemu-system-x86-5007 [007] ...1 346.126219: kvm_mmu_paging_element: pte 7b2b6067 level 4
qemu-system-x86-5007 [007] ...1 346.126220: kvm_mmu_paging_element: pte 0 level 3
qemu-system-x86-5007 [007] ...1 346.126220: kvm_mmu_walker_error: pferr 2 W
qemu-system-x86-5007 [007] ...1 346.126221: kvm_multiple_exception: nr: 14, prev: 255, has_error: 1, error_code: 0x2, reinj: 0
qemu-system-x86-5007 [007] ...1 346.126221: kvm_inj_exception: #PF (0x2)
qemu-system-x86-5007 [007] d..2 346.126222: kvm_entry: vcpu 0
qemu-system-x86-5007 [007] d..2 346.126223: kvm_exit: reason PF excp rip 0xffffffff816113df info 2 7fff0b0f8908
qemu-system-x86-5007 [007] ...1 346.126224: kvm_multiple_exception: nr: 14, prev: 14, has_error: 1, error_code: 0x2, reinj: 1
qemu-system-x86-5007 [007] ...1 346.126225: kvm_page_fault: address 7fff0b0f8908 error_code 2
qemu-system-x86-5007 [007] ...1 346.126225: kvm_mmu_pagetable_walk: addr 7fff0b0f8908 pferr 0
qemu-system-x86-5007 [007] ...1 346.126226: kvm_mmu_paging_element: pte 7b2b6067 level 4
qemu-system-x86-5007 [007] ...1 346.126227: kvm_mmu_paging_element: pte 0 level 3
qemu-system-x86-5007 [007] ...1 346.126227: kvm_mmu_walker_error: pferr 0
qemu-system-x86-5007 [007] ...1 346.126228: kvm_mmu_pagetable_walk: addr 7fff0b0f8908 pferr 2 W
qemu-system-x86-5007 [007] ...1 346.126229: kvm_mmu_paging_element: pte 7b2b6067 level 4
qemu-system-x86-5007 [007] ...1 346.126230: kvm_mmu_paging_element: pte 0 level 3
qemu-system-x86-5007 [007] ...1 346.126230: kvm_mmu_walker_error: pferr 2 W
qemu-system-x86-5007 [007] ...1 346.126231: kvm_multiple_exception: nr: 14, prev: 14, has_error: 1, error_code: 0x2, reinj: 0
qemu-system-x86-5007 [007] ...1 346.126231: kvm_inj_exception: #DF (0x0)
qemu-system-x86-5007 [007] d..2 346.126232: kvm_entry: vcpu 0
qemu-system-x86-5007 [007] d..2 346.126371: kvm_exit: reason io rip 0xffffffff8131e623 info 3d40220 ffffffff8131e625
qemu-system-x86-5007 [007] ...1 346.126372: kvm_pio: pio_write at 0x3d4 size 2 count 1 val 0x130e
qemu-system-x86-5007 [007] ...1 346.126374: kvm_userspace_exit: reason KVM_EXIT_IO (2)
qemu-system-x86-5007 [007] d..2 346.126383: kvm_entry: vcpu 0
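The #DF at the end is then just the architected promotion rule at
work, as the kvm_multiple_exception lines above show: the second #PF
(nr: 14) hits while the first one is still being delivered (prev: 14,
reinj: 1), and #PF during #PF delivery yields a double fault.
Paraphrased (not the exact kvm_multiple_exception() code):

/*
 * SDM exception class rules, paraphrased: two contributory
 * exceptions, or a #PF followed by a contributory exception or
 * another #PF, are promoted to #DF.  Contributory exceptions are
 * #DE(0), #TS(10), #NP(11), #SS(12) and #GP(13).
 */
static bool promotes_to_df(int prev_nr, int nr)
{
	bool prev_contrib = prev_nr == 0 || (prev_nr >= 10 && prev_nr <= 13);
	bool contrib = nr == 0 || (nr >= 10 && nr <= 13);

	if (prev_nr == 14)	/* a #PF was being delivered */
		return nr == 14 || contrib;

	return prev_contrib && contrib;
}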
--
Regards/Gruss,
Boris.
Sent from a fat crate under my desk. Formatting is fine.