All of lore.kernel.org
* kernel BUG at drivers/block/nvme-core.c:732!
@ 2015-12-21  9:45 John Morrison
  2015-12-21 21:12 ` Keith Busch
  2016-01-27  0:37 ` Keith Busch
  0 siblings, 2 replies; 5+ messages in thread
From: John Morrison @ 2015-12-21  9:45 UTC (permalink / raw)


Hi,

We have a couple of servers with two P3700s in each.
Neither is doing any heavy IO, and both have crashed with this error:

Any ideas what's going wrong?

Thanks
John

[383368.216038] kernel BUG at drivers/block/nvme-core.c:732!
[383368.478005] invalid opcode: 0000 [#1] SMP 
[383368.680772] Modules linked in: ext4 mbcache jbd2 ebtable_broute ebtable_nat ebtable_filter ebt_ip ebtables vhost_net vhost macvtap macvlan tun nls_utf8 isofs loop ip6table_filter ip6_tables iptable_filter bridge stp llc bonding vfat fat x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul crc32c_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd kvm_intel sr_mod cdrom sb_edac ipmi_si ioatdma lpc_ich pcspkr edac_core mfd_core sg i2c_i801 hpwdt dca wmi ipmi_msghandler pcc_cpufreq acpi_power_meter acpi_cpufreq dm_mod nfsd auth_rpcgss nfs_acl lockd grace sunrpc binfmt_misc ip_tables mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm bnx2x sd_mod usb_storage tg3 mdio ptp nvme i2c_core hpsa pps_core [last unloaded: ebtables]
[383372.108323] CPU: 2 PID: 8535 Comm: qemu-system-x86 Not tainted 4.3.0 #1
[383372.434602] Hardware name: HP ProLiant DL360 Gen9, BIOS P89 11/10/2015
[383372.757063] task: ffff88289b3dd600 ti: ffff88164c940000 task.ti: ffff88164c940000
[383373.122056] RIP: 0010:[<ffffffffa0053b19>]  [<ffffffffa0053b19>] nvme_queue_rq+0xa19/0xa20 [nvme]
[383373.558800] RSP: 0018:ffff88164c943ba8  EFLAGS: 00010286
[383373.821345] RAX: 0000000000000000 RBX: ffff8827c0e664e0 RCX: 0000000000006800
[383374.172143] RDX: 0000001c4db4da00 RSI: ffff881c4db4da00 RDI: 0000000000000246
[383374.522798] RBP: ffff88164c943c90 R08: ffff8827c2ba7040 R09: 000000006ea2a000
[383374.873771] R10: 00000000ffffe800 R11: 0000000000001000 R12: ffff8827bee8ef00
[383375.224719] R13: 0000000000000001 R14: ffff8827c2ba7000 R15: ffff8828b16f1d40
[383375.575317] FS:  00007fd3d75fe700(0000) GS:ffff8827df880000(0000) knlGS:0000000000000000
[383375.972435] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[383376.255644] CR2: 00007f7838664000 CR3: 00000018cd96a000 CR4: 00000000001426e0
[383376.606297] Stack:
[383376.707729]  00008800c5902600 ffff8827c2ba7160 ffff8827c5a03b80 ffff88164c943be8
[383377.071120]  ffff8827c2ba7040 00000000fffff800 000000006ea29000 ffff882700001000
[383377.434496]  0000000000001000 ffffffff00000200 ffff8827c2ba7040 ffff881c4db4da00
[383377.797841] Call Trace:
[383377.920324]  [<ffffffff8139efb6>] __blk_mq_run_hw_queue+0x1d6/0x380
[383378.228861]  [<ffffffff8139edc5>] blk_mq_run_hw_queue+0x95/0xb0
[383378.520447]  [<ffffffff813a0353>] blk_mq_insert_requests+0xc3/0x110
[383378.829014]  [<ffffffff813a0f91>] blk_mq_flush_plug_list+0x131/0x160
[383379.141619]  [<ffffffff81396856>] blk_flush_plug_list+0xb6/0x200
[383379.437374]  [<ffffffff81396d1c>] blk_finish_plug+0x2c/0x40
[383379.707880]  [<ffffffff8126ca6c>] do_io_submit+0x2ec/0x520
[383379.978319]  [<ffffffff8126ccb0>] SyS_io_submit+0x10/0x20
[383380.244652]  [<ffffffff816fc0ae>] entry_SYSCALL_64_fastpath+0x12/0x71
[383380.561546] Code: 18 41 c7 46 08 ff ff ff ff 44 29 e8 44 01 d8 89 85 1c ff ff ff e9 35 fe ff ff e8 e3 1b 05 e1 4c 8b 2d 7c d1 a1 e1 e9 19 ff ff ff <0f> 0b 0f 0b 0f 1f 00 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 49 
[383381.483308] RIP  [<ffffffffa0053b19>] nvme_queue_rq+0xa19/0xa20 [nvme]
[383381.804620]  RSP <ffff88164c943ba8>
[383381.980548] ---[ end trace f0dc9fdbddef44ce ]---
[383382.209932] Kernel panic - not syncing: Fatal exception
[383382.467930] Kernel Offset: disabled
[383382.645644] ---[ end Kernel panic - not syncing: Fatal exception

^ permalink raw reply	[flat|nested] 5+ messages in thread

* kernel BUG at drivers/block/nvme-core.c:732!
  2015-12-21  9:45 kernel BUG at drivers/block/nvme-core.c:732! John Morrison
@ 2015-12-21 21:12 ` Keith Busch
  2016-01-27  0:37 ` Keith Busch
  1 sibling, 0 replies; 5+ messages in thread
From: Keith Busch @ 2015-12-21 21:12 UTC (permalink / raw)


On Mon, Dec 21, 2015 at 09:45:02AM +0000, John Morrison wrote:
> Hi,
> 
> We have a couple of servers with two P3700s in each.
> Neither is doing any heavy IO, and both have crashed with this error:
>
> Any ideas what's going wrong?

The BUG means the driver received an invalid scatter list.

Your stack trace below shows qemu using io_submit, so I tested the same
syscall with unaligned io vectors. The kernel splits these up as expected,
or fails with EINVAL if not at least block aligned.
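
Roughly what that test looks like, as a minimal sketch (not the exact
program I ran; the device path, buffer size and offsets are just
placeholders):

  /* io_submit with a deliberately unaligned second io vector; link with -laio */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <libaio.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/uio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      io_context_t ctx = 0;
      struct iocb cb, *cbs[1] = { &cb };
      struct io_event ev;
      struct iovec iov[2];
      char *buf;
      int fd, ret;

      fd = open(argc > 1 ? argv[1] : "/dev/nvme0n1", O_RDONLY | O_DIRECT);
      if (fd < 0 || io_setup(1, &ctx))
          return 1;
      if (posix_memalign((void **)&buf, 4096, 3 * 4096))
          return 1;

      iov[0].iov_base = buf;                /* page aligned */
      iov[0].iov_len  = 4096;
      iov[1].iov_base = buf + 4096 + 512;   /* sector aligned, not page aligned */
      iov[1].iov_len  = 4096;

      io_prep_preadv(&cb, fd, iov, 2, 0);
      ret = io_submit(ctx, 1, cbs);
      if (ret != 1) {
          fprintf(stderr, "io_submit: %d\n", ret);  /* -EINVAL if not block aligned */
          return 1;
      }
      if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
          printf("res = %ld\n", (long)ev.res);

      io_destroy(ctx);
      close(fd);
      free(buf);
      return 0;
  }

Vectors that are not even sector aligned are rejected with EINVAL by the
O_DIRECT path; sector-aligned but non-page-aligned vectors like the second
one above go through and get split by the block layer as expected.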

Can you provide a simple user space example that recreates this?

Is your system using an IOMMU?

> [383368.216038] kernel BUG at drivers/block/nvme-core.c:732!
> [383368.478005] invalid opcode: 0000 [#1] SMP 
> [383368.680772] Modules linked in: ext4 mbcache jbd2 ebtable_broute ebtable_nat ebtable_filter ebt_ip ebtables vhost_net vhost macvtap macvlan tun nls_utf8 isofs loop ip6table_filter ip6_tables iptable_filter bridge stp llc bonding vfat fat x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul crc32c_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd kvm_intel sr_mod cdrom sb_edac ipmi_si ioatdma lpc_ich pcspkr edac_core mfd_core sg i2c_i801 hpwdt dca wmi ipmi_msghandler pcc_cpufreq acpi_power_meter acpi_cpufreq dm_mod nfsd auth_rpcgss nfs_acl lockd grace sunrpc binfmt_misc ip_tables mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm bnx2x sd_mod usb_storage tg3 mdio ptp nvme i2c_core hpsa pps_core [last unloaded: ebtables]
> [383372.108323] CPU: 2 PID: 8535 Comm: qemu-system-x86 Not tainted 4.3.0 #1
> [383372.434602] Hardware name: HP ProLiant DL360 Gen9, BIOS P89 11/10/2015
> [383372.757063] task: ffff88289b3dd600 ti: ffff88164c940000 task.ti: ffff88164c940000
> [383373.122056] RIP: 0010:[<ffffffffa0053b19>]  [<ffffffffa0053b19>] nvme_queue_rq+0xa19/0xa20 [nvme]
> [383373.558800] RSP: 0018:ffff88164c943ba8  EFLAGS: 00010286
> [383373.821345] RAX: 0000000000000000 RBX: ffff8827c0e664e0 RCX: 0000000000006800
> [383374.172143] RDX: 0000001c4db4da00 RSI: ffff881c4db4da00 RDI: 0000000000000246
> [383374.522798] RBP: ffff88164c943c90 R08: ffff8827c2ba7040 R09: 000000006ea2a000
> [383374.873771] R10: 00000000ffffe800 R11: 0000000000001000 R12: ffff8827bee8ef00
> [383375.224719] R13: 0000000000000001 R14: ffff8827c2ba7000 R15: ffff8828b16f1d40
> [383375.575317] FS:  00007fd3d75fe700(0000) GS:ffff8827df880000(0000) knlGS:0000000000000000
> [383375.972435] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [383376.255644] CR2: 00007f7838664000 CR3: 00000018cd96a000 CR4: 00000000001426e0
> [383376.606297] Stack:
> [383376.707729]  00008800c5902600 ffff8827c2ba7160 ffff8827c5a03b80 ffff88164c943be8
> [383377.071120]  ffff8827c2ba7040 00000000fffff800 000000006ea29000 ffff882700001000
> [383377.434496]  0000000000001000 ffffffff00000200 ffff8827c2ba7040 ffff881c4db4da00
> [383377.797841] Call Trace:
> [383377.920324]  [<ffffffff8139efb6>] __blk_mq_run_hw_queue+0x1d6/0x380
> [383378.228861]  [<ffffffff8139edc5>] blk_mq_run_hw_queue+0x95/0xb0
> [383378.520447]  [<ffffffff813a0353>] blk_mq_insert_requests+0xc3/0x110
> [383378.829014]  [<ffffffff813a0f91>] blk_mq_flush_plug_list+0x131/0x160
> [383379.141619]  [<ffffffff81396856>] blk_flush_plug_list+0xb6/0x200
> [383379.437374]  [<ffffffff81396d1c>] blk_finish_plug+0x2c/0x40
> [383379.707880]  [<ffffffff8126ca6c>] do_io_submit+0x2ec/0x520
> [383379.978319]  [<ffffffff8126ccb0>] SyS_io_submit+0x10/0x20
> [383380.244652]  [<ffffffff816fc0ae>] entry_SYSCALL_64_fastpath+0x12/0x71
> [383380.561546] Code: 18 41 c7 46 08 ff ff ff ff 44 29 e8 44 01 d8 89 85 1c ff ff ff e9 35 fe ff ff e8 e3 1b 05 e1 4c 8b 2d 7c d1 a1 e1 e9 19 ff ff ff <0f> 0b 0f 0b 0f 1f 00 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 49 
> [383381.483308] RIP  [<ffffffffa0053b19>] nvme_queue_rq+0xa19/0xa20 [nvme]
> [383381.804620]  RSP <ffff88164c943ba8>
> [383381.980548] ---[ end trace f0dc9fdbddef44ce ]---
> [383382.209932] Kernel panic - not syncing: Fatal exception
> [383382.467930] Kernel Offset: disabled
> [383382.645644] ---[ end Kernel panic - not syncing: Fatal exception

^ permalink raw reply	[flat|nested] 5+ messages in thread

* kernel BUG at drivers/block/nvme-core.c:732!
  2015-12-21  9:45 kernel BUG at drivers/block/nvme-core.c:732! John Morrison
  2015-12-21 21:12 ` Keith Busch
@ 2016-01-27  0:37 ` Keith Busch
  1 sibling, 0 replies; 5+ messages in thread
From: Keith Busch @ 2016-01-27  0:37 UTC (permalink / raw)


On Mon, Dec 21, 2015 at 09:45:02AM +0000, John Morrison wrote:
> Hi,
> 
> We have a couple of servers with two P3700s in each.
> Neither is doing any heavy IO, and both have crashed with this error:
> 
> Any ideas what's going wrong?

Just for the list's benefit ... This issue was fixed with stable commit
578270bfb:

  https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit?id=578270bfbd2803dc7b0b03fbc2ac119efbc73195

^ permalink raw reply	[flat|nested] 5+ messages in thread

* kernel BUG at drivers/block/nvme-core.c:732!
  2015-12-09 22:14 Seufert, Tim
@ 2015-12-09 22:43 ` Keith Busch
  0 siblings, 0 replies; 5+ messages in thread
From: Keith Busch @ 2015-12-09 22:43 UTC (permalink / raw)


On Wed, Dec 09, 2015 at 02:14:37PM -0800, Seufert, Tim wrote:
> Computer: i7-6700k CPU, Supermicro X11SS-Q motherboard, and a Samsung 950 Pro NVME SSD
> Linux version: CentOS 6.7 with ElRepo kernel-ml 4.3.0
> 
> What led up to the event: This is a very new system and I had just put it together and copied over a KVM guest (the guest OS is also CentOS 6.7).  At the time the "kernel BUG" occurred, the guest was midway through updating itself, so yum/rpm was generating plenty of I/O. Since its disk image file was located on the host's 950 Pro, this was generating NVME traffic.  The BUG resulted in the guest hanging forever (couldn't open new terminals, make SSH connections, or do anything else that required disk I/O), but oddly enough the host did not hang even though its root FS was on the same NVME SSD partition containing the guest image.  I had to reboot the host to recover.
> 
> I have since replaced the host OS installation with a fresh install of CentOS 7, but am still running kernel-ml 4.3.0.  So far I have not seen a repetition of this BUG.

The BUG_ON below means the driver detected the SGL list it was provided
is not PRP'able. In the past, this has meant that the virtual address
page offset does not match the DMA address offset.

I've not seen this repeat on x86 architectures before. If you can find
a test case that reproduces this, we should be able to figure out what
is making this happen.
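
To illustrate the constraint with a simplified sketch (this is not the
driver code, just the rule it enforces): the first PRP pointer may start
at any offset within a page, but every later pointer must be exactly page
aligned, so a scatter list that leaves a misaligned start after the first
segment cannot be expressed as a PRP list:

  /* simplified illustration of the PRP alignment rule; not the nvme driver code */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SIZE 4096u

  struct seg {                 /* one DMA segment as the driver would see it */
      uint64_t dma_addr;
      uint32_t len;
  };

  static bool prp_able(const struct seg *segs, int nsegs)
  {
      int i;

      /* every entry after the first must start on a page boundary ... */
      for (i = 1; i < nsegs; i++)
          if (segs[i].dma_addr & (PAGE_SIZE - 1))
              return false;
      /* ... so every entry except the last must also end on one */
      for (i = 0; i < nsegs - 1; i++)
          if ((segs[i].dma_addr + segs[i].len) & (PAGE_SIZE - 1))
              return false;
      return true;
  }

  int main(void)
  {
      struct seg ok[]  = { { 0x1200, 0x0e00 }, { 0x2000, 0x1000 } };
      struct seg bad[] = { { 0x1000, 0x1000 }, { 0x2200, 0x1000 } };

      printf("ok:  %s\n", prp_able(ok, 2)  ? "PRP'able" : "would trip the BUG_ON");
      printf("bad: %s\n", prp_able(bad, 2) ? "PRP'able" : "would trip the BUG_ON");
      return 0;
  }

A virtual page offset that doesn't match the DMA address offset produces
exactly the "bad" shape above.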

> A side question: Is the advice to not enable discard in Intel's NVME driver reference guide (https://downloadmirror.intel.com/23929/eng/Intel_Linux_NVMe_Driver_Reference_Guide_330602-002.pdf) still considered valid? It claims "You want to allow the SSD manage blocks and its activity between the NVM (non-volatile memory) and host with more advanced and consistent approaches in the SSD Controller" but it's not clear to me how the SSD controller can have a more advanced and consistent approach if it isn't ever notified when blocks are okay to throw away.

Not sure what the guide is talking about. I'll check with the author.
 
> Dec  1 19:08:52 verra kernel: ------------[ cut here ]------------
> Dec  1 19:08:52 verra kernel: kernel BUG at drivers/block/nvme-core.c:732!

^ permalink raw reply	[flat|nested] 5+ messages in thread

* kernel BUG at drivers/block/nvme-core.c:732!
@ 2015-12-09 22:14 Seufert, Tim
  2015-12-09 22:43 ` Keith Busch
  0 siblings, 1 reply; 5+ messages in thread
From: Seufert, Tim @ 2015-12-09 22:14 UTC (permalink / raw)


Computer: i7-6700k CPU, Supermicro X11SS-Q motherboard, and a Samsung 950 Pro NVME SSD
Linux version: CentOS 6.7 with ElRepo kernel-ml 4.3.0

What led up to the event: This is a very new system and I had just put it together and copied over a KVM guest (the guest OS is also CentOS 6.7).  At the time the "kernel BUG" occurred, the guest was midway through updating itself, so yum/rpm was generating plenty of I/O. Since its disk image file was located on the host's 950 Pro, this was generating NVME traffic.  The BUG resulted in the guest hanging forever (couldn't open new terminals, make SSH connections, or do anything else that required disk I/O), but oddly enough the host did not hang even though its root FS was on the same NVME SSD partition containing the guest image.  I had to reboot the host to recover.

I have since replaced the host OS installation with a fresh install of CentOS 7, but am still running kernel-ml 4.3.0.  So far I have not seen a repetition of this BUG.


A side question: Is the advice to not enable discard in Intel's NVME driver reference guide (https://downloadmirror.intel.com/23929/eng/Intel_Linux_NVMe_Driver_Reference_Guide_330602-002.pdf) still considered valid? It claims "You want to allow the SSD manage blocks and its activity between the NVM (non-volatile memory) and host with more advanced and consistent approaches in the SSD Controller" but it's not clear to me how the SSD controller can have a more advanced and consistent approach if it isn't ever notified when blocks are okay to throw away.
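
To be concrete about the notification I mean: the discard path, whether
mount -o discard, periodic fstrim, or a direct BLKDISCARD on a range, as
in this rough sketch (device path and range values are made up, and the
ioctl destroys data in that range):

  /* rough sketch: tell the device a byte range is no longer in use */
  #include <fcntl.h>
  #include <linux/fs.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      uint64_t range[2] = { 1ULL << 30, 1ULL << 20 };   /* offset, length in bytes */
      int fd = open("/dev/nvme0n1p3", O_WRONLY);        /* placeholder device */

      if (fd < 0)
          return 1;
      if (ioctl(fd, BLKDISCARD, &range))
          perror("BLKDISCARD");
      close(fd);
      return 0;
  }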


Dec  1 19:08:52 verra kernel: ------------[ cut here ]------------
Dec  1 19:08:52 verra kernel: kernel BUG at drivers/block/nvme-core.c:732!
Dec  1 19:08:52 verra kernel: invalid opcode: 0000 [#1] SMP 
Dec  1 19:08:52 verra kernel: Modules linked in: ebtable_nat ebtables ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat xt_CHECKSUM iptable_mangle autofs4 cpufreq_powersave ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables vhost_net macvtap macvlan vhost tun joydev input_leds hci_uart btintel btqca btbcm bluetooth rfkill i2c_hid 8250_fintek intel_lpss_acpi intel_lpss serio_raw igb dca coretemp hwmon x86_pkg_temp_thermal intel_powerclamp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel microcode pcspkr acpi_pad snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic e1000e ptp pps_core i2c_i801 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore shpchp fjes mei_me mei xhci_pci xhci_hcd ext4 jbd2 mbcache nvme drbg aesni_intel ablk_helper cryptd lrw gf128mul glue_helper aes_x86_64 ahci l
Dec  1 19:08:52 verra kernel: ibahci i915 video dm_mirror dm_region_hash dm_log dm_mod
Dec  1 19:08:52 verra kernel: CPU: 1 PID: 4019 Comm: qemu-kvm Not tainted 4.3.0-1.el6.elrepo.x86_64 #1
Dec  1 19:08:52 verra kernel: Hardware name: Supermicro X11SSQ/X11SSQ, BIOS 1.0 09/17/2015
Dec  1 19:08:52 verra kernel: task: ffff880847ce0580 ti: ffff88084a698000 task.ti: ffff88084a698000
Dec  1 19:08:52 verra kernel: RIP: 0010:[<ffffffffa01d4499>]  [<ffffffffa01d4499>] nvme_setup_prps.clone.0+0x249/0x250 [nvme]
Dec  1 19:08:52 verra kernel: RSP: 0018:ffff88084a69ba38  EFLAGS: 00010286
Dec  1 19:08:52 verra kernel: RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000001000
Dec  1 19:08:52 verra kernel: RDX: ffff8806818d3800 RSI: 0000000000000200 RDI: 0000000000000292
Dec  1 19:08:52 verra kernel: RBP: ffff88084a69bac8 R08: ffff880844cdf040 R09: ffff880846aabc00
Dec  1 19:08:52 verra kernel: R10: 000000000068f136 R11: 00000000000ffedf R12: 00000000ffffe400
Dec  1 19:08:52 verra kernel: R13: 000000000001d400 R14: 00000000ffec1000 R15: ffff8806818d3800
Dec  1 19:08:52 verra kernel: FS:  00007f09946609a0(0000) GS:ffff880872c40000(0000) knlGS:0000000000000000
Dec  1 19:08:52 verra kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec  1 19:08:52 verra kernel: CR2: 00007fccf48730a0 CR3: 0000000848f1b000 CR4: 00000000003426e0
Dec  1 19:08:52 verra kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec  1 19:08:52 verra kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Dec  1 19:08:52 verra kernel: Stack:
Dec  1 19:08:52 verra kernel: ffff88084ee98dc0 ffff880800001000 ffff880844cdf040 ffff880844cdf440
Dec  1 19:08:52 verra kernel: ffff8808481496c0 0001f40044cdf040 0000000000001000 ffff880844cdf000
Dec  1 19:08:52 verra kernel: 000010004a69bac8 00000200fffff000 ffff88084a69bac8 00000000fd09f800
Dec  1 19:08:52 verra kernel: Call Trace:
Dec  1 19:08:52 verra kernel: [<ffffffffa01d4d1c>] nvme_queue_rq+0x53c/0x690 [nvme]
Dec  1 19:08:52 verra kernel: [<ffffffff8116b5ab>] ? generic_file_direct_write+0x15b/0x160
Dec  1 19:08:52 verra kernel: [<ffffffff8116b8e5>] ? __generic_file_write_iter+0xd5/0x1f0
Dec  1 19:08:52 verra kernel: [<ffffffff811f3a20>] ? __pollwait+0xf0/0xf0
Dec  1 19:08:52 verra kernel: [<ffffffff812d9877>] __blk_mq_run_hw_queue+0x1e7/0x370
Dec  1 19:08:52 verra kernel: [<ffffffff812d9abf>] blk_mq_run_hw_queue+0x6f/0x80
Dec  1 19:08:52 verra kernel: [<ffffffff812da336>] blk_mq_insert_requests+0xd6/0x120
Dec  1 19:08:52 verra kernel: [<ffffffff812da4a5>] blk_mq_flush_plug_list+0x125/0x140
Dec  1 19:08:52 verra kernel: [<ffffffff812d0b46>] blk_flush_plug_list+0xc6/0x1f0
Dec  1 19:08:52 verra kernel: [<ffffffff812d0ca4>] blk_finish_plug+0x34/0x50
Dec  1 19:08:52 verra kernel: [<ffffffff8122a83e>] do_io_submit+0x13e/0x1d0
Dec  1 19:08:52 verra kernel: [<ffffffff8122a8e0>] SyS_io_submit+0x10/0x20
Dec  1 19:08:52 verra kernel: [<ffffffff817017ae>] entry_SYSCALL_64_fastpath+0x12/0x71
Dec  1 19:08:52 verra kernel: Code: ff ff 44 29 6d 9c e9 3a ff ff ff 48 8b 55 a8 4c 89 72 18 c7 42 08 ff ff ff ff 8b 75 9c 29 de 89 f3 01 cb 89 5d 9c e9 1a ff ff ff <0f> 0b eb fe 0f 1f 00 55 48 89 e5 41 57 41 89 cf 41 56 41 89 d6 
Dec  1 19:08:52 verra kernel: RIP  [<ffffffffa01d4499>] nvme_setup_prps.clone.0+0x249/0x250 [nvme]
Dec  1 19:08:52 verra kernel: RSP <ffff88084a69ba38>
Dec  1 19:08:52 verra kernel: ---[ end trace 4e220fd2439ddb08 ]---

^ permalink raw reply	[flat|nested] 5+ messages in thread


Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-21  9:45 kernel BUG at drivers/block/nvme-core.c:732! John Morrison
2015-12-21 21:12 ` Keith Busch
2016-01-27  0:37 ` Keith Busch
  -- strict thread matches above, loose matches on Subject: below --
2015-12-09 22:14 Seufert, Tim
2015-12-09 22:43 ` Keith Busch
