From: Jan Kiszka <jan.kiszka@web.de>
To: Gleb Natapov <gleb@kernel.org>, Borislav Petkov <bp@alien8.de>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>, lkml <linux-kernel@vger.kernel.org>,
 "Peter Zijlstra" <peterz@infradead.org>, "Steven Rostedt" <rostedt@goodmis.org>,
 x86-ml <x86@kernel.org>, kvm@vger.kernel.org, "Jörg Rödel" <joro@8bytes.org>
Subject: Re: __schedule #DF splat
Date: Sun, 29 Jun 2014 16:51:12 +0200
Message-ID: <53B027E0.7040003@web.de>
In-Reply-To: <53B02395.8030505@web.de>

On 2014-06-29 16:32, Jan Kiszka wrote:
> On 2014-06-29 16:27, Gleb Natapov wrote:
>> On Sun, Jun 29, 2014 at 04:01:04PM +0200, Borislav Petkov wrote:
>>> On Sun, Jun 29, 2014 at 04:42:47PM +0300, Gleb Natapov wrote:
>>>> Please do so and let us know.
>>>
>>> Yep, just did. Reverting ae9fedc793 fixes the issue.
>>>
>>>> reinj:1 means that the previous injection failed due to another #PF that
>>>> happened during the event injection itself. This may happen if the GDT or the
>>>> first instruction of a fault handler is not mapped by shadow pages, but here
>>>> it says that the new page fault is at the same address as the previous
>>>> one, as if the GDT or the #PF handler were mapped there. Strange. Especially
>>>> since the #DF is injected successfully, so the GDT should be fine. Maybe a
>>>> wrong cpl makes svm crazy?
>>>
>>> Well, I'm not going to even pretend to know kvm well enough to say *when* we're
>>> saving VMCB state, but if we're saving the wrong CPL and then doing the
>>> pagetable walk, I can very well imagine the walker getting confused. One
>>> possible issue could be the U/S bit (bit 2) in the PTE, which allows
>>> access to supervisor pages only when CPL < 3. I.e., CPL has an effect on
>>> the pagetable walk, and a wrong CPL level could break it.
>>>
>>> All conjecture though...
>>>
>> Looks plausible, still strange that the second #PF is at the same address as
>> the first one, though. Anyway, now we have the commit to blame.
>
> I suspect there is a gap between cause and effect. I'm tracing CPL
> changes currently, and my first impression is that QEMU triggers an
> unwanted switch from CPL 3 to 0 on vmport access:
>
> qemu-system-x86-11883 [001] 7493.378630: kvm_entry: vcpu 0
> qemu-system-x86-11883 [001] 7493.378631: bprint: svm_vcpu_run: entry cpl 0
> qemu-system-x86-11883 [001] 7493.378636: bprint: svm_vcpu_run: exit cpl 3
> qemu-system-x86-11883 [001] 7493.378637: kvm_exit: reason io rip 0x400854 info 56580241 400855
> qemu-system-x86-11883 [001] 7493.378640: kvm_emulate_insn: 0:400854:ed (prot64)
> qemu-system-x86-11883 [001] 7493.378642: kvm_userspace_exit: reason KVM_EXIT_IO (2)
> qemu-system-x86-11883 [001] 7493.378655: bprint: kvm_arch_vcpu_ioctl_get_sregs: ss.dpl 0
> qemu-system-x86-11883 [001] 7493.378684: bprint: kvm_arch_vcpu_ioctl_set_sregs: ss.dpl 0
> qemu-system-x86-11883 [001] 7493.378685: bprint: svm_set_segment: cpl = 0
> qemu-system-x86-11883 [001] 7493.378711: kvm_pio: pio_read at 0x5658 size 4 count 1 val 0x3442554a
>
> Yeah... do we have to manually sync save.cpl into ss.dpl on get_sregs
> on AMD?
>

Applying this logic:

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ec8366c..b5e994a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1462,6 +1462,7 @@ static void svm_get_segment(struct kvm_vcpu *vcpu,
 		 */
 		if (var->unusable)
 			var->db = 0;
+		var->dpl = to_svm(vcpu)->vmcb->save.cpl;
 		break;
 	}
 }

...and my VM runs smoothly so far. Does it make sense in all scenarios?
Jan
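
As a side note on the U/S-bit conjecture quoted above, here is a minimal, self-contained sketch of why the permission check in a page-table walk depends on CPL, and hence why a stale CPL can flip its outcome. This is plain user-space C for illustration only, not KVM's actual shadow-paging walker; the PTE_* constants and function names are made up for this example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT (1ULL << 0)
#define PTE_USER    (1ULL << 2)  /* U/S bit: 1 = user-accessible, 0 = supervisor-only */

/* The CPL-dependent part of a walk's permission check for a data read. */
static bool walk_allows_read(uint64_t pte, unsigned int cpl)
{
	if (!(pte & PTE_PRESENT))
		return false;
	/* Supervisor-only page: only CPL 0..2 may access it. */
	if (!(pte & PTE_USER) && cpl == 3)
		return false;
	return true;
}

int main(void)
{
	/* Present page with the U/S bit clear, i.e. supervisor-only. */
	uint64_t supervisor_pte = PTE_PRESENT;

	/* The guest really runs at CPL 3, but a stale CPL of 0 was written back: */
	printf("walk with real cpl 3:  %d\n", walk_allows_read(supervisor_pte, 3)); /* 0: denied */
	printf("walk with stale cpl 0: %d\n", walk_allows_read(supervisor_pte, 0)); /* 1: allowed */
	return 0;
}

The two calls disagree, so a walker fed the wrong CPL reaches a different conclusion than the real CPU would, which is the kind of confusion the conjecture above describes.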