* [Bug 203979] New: kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae
@ 2019-06-24 19:09 bugzilla-daemon
  2019-06-30 23:03 ` [Bug 203979] " bugzilla-daemon
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: bugzilla-daemon @ 2019-06-24 19:09 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=203979

            Bug ID: 203979
           Summary: kvm_spurious_fault in L1 when running a nested kvm
                    instance on AMD i686-pae
           Product: Virtualization
           Version: unspecified
    Kernel Version: 5.2-rc6
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: kvm
          Assignee: virtualization_kvm@kernel-bugs.osdl.org
          Reporter: jpalecek@web.de
        Regression: No

Hello,

when I try to run FreeDOS nested in a Linux KVM guest on kernel 5.2-rc6, I get
this message in the L1 guest:

debian login: [   13.291265] FS-Cache: Loaded
[   13.295627] 9p: Installing v9fs 9p2000 file system support
[   13.296946] FS-Cache: Netfs '9p' registered for caching
[   19.200271] ------------[ cut here ]------------
[   19.201265] kernel BUG at arch/x86/kvm/x86.c:358!
[   19.202069] invalid opcode: 0000 [#1] SMP NOPTI
[   19.202850] CPU: 0 PID: 270 Comm: qemu-system-i38 Not tainted
5.2.0-rc6-bughunt+ #1
[   19.203568] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
1.12.0-1 04/01/2014
[   19.203568] EIP: kvm_spurious_fault+0x8/0x10 [kvm]
[   19.203568] Code: 08 ff 75 18 ff 75 14 ff 75 10 ff 75 0c 57 56 e8 5e 09 3d
fa 83 c4 30 8d 65 f8 5e 5f 5d c3 8d 74 26 00 0f 1f 44 00 00 55 89 e5 <0f> 0b 8d
b6 00 00 00 00 0f 1f 44 00 00 55 89 e5 57 56 53 83 ec 04
[   19.203568] EAX: 0004c000 EBX: 00000000 ECX: 00000000 EDX: 00000663
[   19.203568] ESI: 00000000 EDI: 00000000 EBP: c0037da4 ESP: c0037da4
[   19.203568] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00210246
[   19.203568] CR0: 80050033 CR2: 00cd813b CR3: 04e7f820 CR4: 000006f0
[   19.203568] Call Trace:
[   19.203568]  svm_vcpu_run+0x1a5/0x650 [kvm_amd]
[   19.203568]  ? kvm_ioapic_scan_entry+0x62/0xe0 [kvm]
[   19.203568]  ? kvm_arch_vcpu_ioctl_run+0x596/0x1a80 [kvm]
[   19.203568]  ? _cond_resched+0x17/0x30
[   19.203568]  ? kvm_vcpu_ioctl+0x214/0x590 [kvm]
[   19.203568]  ? kvm_vcpu_ioctl+0x214/0x590 [kvm]
[   19.203568]  ? __bpf_trace_kvm_async_pf_nopresent_ready+0x20/0x20 [kvm]
[   19.203568]  ? do_vfs_ioctl+0x9a/0x6c0
[   19.203568]  ? tomoyo_path_chmod+0x20/0x20
[   19.203568]  ? tomoyo_file_ioctl+0x19/0x20
[   19.203568]  ? security_file_ioctl+0x30/0x50
[   19.203568]  ? ksys_ioctl+0x56/0x80
[   19.203568]  ? sys_ioctl+0x16/0x20
[   19.203568]  ? do_fast_syscall_32+0x87/0x1df
[   19.203568]  ? entry_SYSENTER_32+0x6b/0xbe
[   19.203568] Modules linked in: 9p fscache ppdev edac_mce_amd kvm_amd
9pnet_virtio kvm bochs_drm 9pnet irqbypass ttm snd_pcm snd_timer snd
drm_kms_helper soundcore joydev pcspkr evdev serio_raw drm sg parport_pc
parport button ip_tables x_tables autofs4 ext4 crc32c_generic crc16 mbcache
jbd2 sr_mod cdrom sd_mod ata_generic ata_piix libata psmouse virtio_pci
virtio_ring virtio e1000 scsi_mod i2c_piix4 floppy
[   19.203568] ---[ end trace c0c327b925400cd6 ]---
[   19.203568] EIP: kvm_spurious_fault+0x8/0x10 [kvm]
[   19.203568] Code: 08 ff 75 18 ff 75 14 ff 75 10 ff 75 0c 57 56 e8 5e 09 3d
fa 83 c4 30 8d 65 f8 5e 5f 5d c3 8d 74 26 00 0f 1f 44 00 00 55 89 e5 <0f> 0b 8d
b6 00 00 00 00 0f 1f 44 00 00 55 89 e5 57 56 53 83 ec 04
[   19.203568] EAX: 0004c000 EBX: 00000000 ECX: 00000000 EDX: 00000663
[   19.203568] ESI: 00000000 EDI: 00000000 EBP: c0037da4 ESP: c3a71dfc
[   19.203568] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00210246
[   19.203568] CR0: 80050033 CR2: 00cd813b CR3: 04e7f820 CR4: 000006f0

svm_vcpu_run+0x1a5 is the vmsave instruction. The bug seems to depend only on
the L0 host kernel: version 5.1 works, but 5.2 fails. The Linux version of the
L1 guest doesn't seem to matter. (I'm running a PAE system but haven't actually
tested whether PAE has anything to do with it.)

Any ideas about what could have caused it?

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


* [Bug 203979] kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae
  2019-06-24 19:09 [Bug 203979] New: kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae bugzilla-daemon
@ 2019-06-30 23:03 ` bugzilla-daemon
  2019-06-30 23:17 ` bugzilla-daemon
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: bugzilla-daemon @ 2019-06-30 23:03 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=203979

Jiri Palecek (jpalecek@web.de) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |karahmed@amazon.de

--- Comment #1 from Jiri Palecek (jpalecek@web.de) ---
So, I have bisected this and found the culprit to be:

commit 8c5fbf1a723107814c20c3f4d6343ab9d694a705 (refs/bisect/bad)
Author: KarimAllah Ahmed <karahmed@amazon.de>
Date:   Thu Jan 31 21:24:40 2019 +0100

    KVM/nSVM: Use the new mapping API for mapping guest memory

    Use the new mapping API for mapping guest memory to avoid depending on
    "struct page".

    Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


* [Bug 203979] kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae
  2019-06-24 19:09 [Bug 203979] New: kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae bugzilla-daemon
  2019-06-30 23:03 ` [Bug 203979] " bugzilla-daemon
@ 2019-06-30 23:17 ` bugzilla-daemon
  2019-07-03  9:45 ` bugzilla-daemon
  2019-08-09 15:43 ` bugzilla-daemon
  3 siblings, 0 replies; 5+ messages in thread
From: bugzilla-daemon @ 2019-06-30 23:17 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=203979

--- Comment #2 from Jiri Palecek (jpalecek@web.de) ---
Created attachment 283501
  --> https://bugzilla.kernel.org/attachment.cgi?id=283501&action=edit
Patch that allows me to run nested kvm instances again

Looking closely at the offending commit, which doesn't seem to change any
functionality but only the API, I found that the gfns used inside
__kvm_map_gfn were in fact far off. Digging further, I found that these gfns
were the result of gfn_to_gpa in some (but not all) places that use the new
API, where what was clearly meant was gpa_to_gfn. So I prepared this patch and
verified that it lets me run nested Linux instances in KVM successfully on my
AMD system.


* [Bug 203979] kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae
  2019-06-24 19:09 [Bug 203979] New: kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae bugzilla-daemon
  2019-06-30 23:03 ` [Bug 203979] " bugzilla-daemon
  2019-06-30 23:17 ` bugzilla-daemon
@ 2019-07-03  9:45 ` bugzilla-daemon
  2019-08-09 15:43 ` bugzilla-daemon
  3 siblings, 0 replies; 5+ messages in thread
From: bugzilla-daemon @ 2019-07-03  9:45 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=203979

Jiri Palecek (jpalecek@web.de) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Hardware|All                         |i386
         Regression|No                          |Yes


* [Bug 203979] kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae
  2019-06-24 19:09 [Bug 203979] New: kvm_spurious_fault in L1 when running a nested kvm instance on AMD i686-pae bugzilla-daemon
                   ` (2 preceding siblings ...)
  2019-07-03  9:45 ` bugzilla-daemon
@ 2019-08-09 15:43 ` bugzilla-daemon
  3 siblings, 0 replies; 5+ messages in thread
From: bugzilla-daemon @ 2019-08-09 15:43 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=203979

Jiri Palecek (jpalecek@web.de) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|---                         |CODE_FIX

--- Comment #3 from Jiri Palecek (jpalecek@web.de) ---
Already fixed upstream by commit 8f38302c0be2d
