* [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox
@ 2022-08-10 12:13 bugzilla-daemon
  2022-08-10 13:44 ` [Bug 216349] " bugzilla-daemon
                   ` (14 more replies)
  0 siblings, 15 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-10 12:13 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

            Bug ID: 216349
           Summary: Kernel panic in an Ubuntu VM running on Proxmox
           Product: Virtualization
           Version: unspecified
    Kernel Version: 5.15.39-3
          Hardware: Intel
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: kvm
          Assignee: virtualization_kvm@kernel-bugs.osdl.org
          Reporter: jdpark.au@gmail.com
        Regression: No

Hi,

I've experienced quite a few kernel panics over the past week or so on a VM
running on a Proxmox server. I've also seen the same issue on a pfSense VM
running on the same Proxmox server, but haven't been able to capture a log
from that VM.

Host details:
Proxmox 7.2-7
Kernel: 5.15.39-3
CPU: Intel N5105

VM Details:
Ubuntu 22.04
Kernel: 5.15.0-46-generic

The VM freezes and becomes completely unresponsive, and I have to do a hard
reset on the VM to recover. Nothing is logged to dmesg or syslog, so I set up
netconsole to log to a remote server and added
GRUB_CMDLINE_LINUX_DEFAULT="debug ignore_loglevel" to the GRUB configuration.
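
For reference, the netconsole setup looks roughly like this (the IP
addresses, port numbers, interface name and MAC address below are
placeholders rather than my real values):

# on the VM: forward kernel messages over UDP to the log server
modprobe netconsole netconsole=6665@192.168.1.10/ens18,6666@192.168.1.20/aa:bb:cc:dd:ee:ff

# on the log server: capture everything that arrives on that UDP port
nc -u -l -k 6666 | tee netconsole.log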

Below is the output of the panic. Please let me know if there's any more
information I can provide.

---

[12361.508193] BUG: kernel NULL pointer dereference, address: 0000000000000000
[12361.509399] #PF: supervisor write access in kernel mode
[12361.510524] #PF: error_code(0x0002) - not-present page
[12361.511847] PGD 0 P4D 0 
[12361.513120] Oops: 0002 [#1] SMP PTI
[12361.514392] CPU: 0 PID: 3268 Comm: python3 Not tainted 5.15.0-46-generic
#49-Ubuntu
[12361.515796] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[12361.518606] RIP: 0010:asm_exc_general_protection+0x4/0x30
[12361.520233] Code: c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7 44 24 78
ff ff ff ff e8 ea 7f f9 ff e9 05 0b 00 00 0f 1f 44 00 00 0f 1f 00 e8 <c8> 09 00
00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7
[12361.523251] RSP: 0018:ffffa7498342f010 EFLAGS: 00010046
[12361.524599] RAX: 0000000000000000 RBX: 0000000000000015 RCX:
0000000000000001
[12361.525806] RDX: ffff8fed49a6ed00 RSI: ffff8fed4b178000 RDI:
ffff8fec418a9400
[12361.527014] RBP: ffffa7498342f8b0 R08: 0000000000000015 R09:
ffff8fed4b1780a8
[12361.527868] R10: 0000000000000000 R11: 0000000000000000 R12:
ffff8fed57e4f180
[12361.528754] R13: 0000000000004000 R14: 0000000000000015 R15:
0000000000000001
[12361.529623] FS:  00007f291afb8b30(0000) GS:ffff8fed7bc00000(0000)
knlGS:0000000000000000
[12361.530318] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12361.530941] CR2: 0000000000000000 CR3: 0000000102ad8000 CR4:
00000000000006f0
[12361.531602] Call Trace:
[12361.532257]  <TASK>
[12361.532953]  ? asm_exc_int3+0x40/0x40
[12361.533565]  ? asm_exc_general_protection+0x4/0x30
[12361.534192]  ? asm_exc_int3+0x40/0x40
[12361.534823]  ? asm_exc_general_protection+0x4/0x30
[12361.535450]  ? asm_exc_int3+0x40/0x40
[12361.536063]  ? asm_exc_general_protection+0x4/0x30
[12361.536675]  ? asm_exc_int3+0x40/0x40
[12361.537262]  ? asm_exc_general_protection+0x4/0x30
[12361.537845]  ? asm_exc_int3+0x40/0x40
[12361.538425]  ? asm_exc_general_protection+0x4/0x30
[12361.539015]  ? asm_exc_int3+0x40/0x40
[12361.539630]  ? asm_exc_general_protection+0x4/0x30
[12361.540212]  ? asm_exc_int3+0x40/0x40
[12361.540825]  ? asm_exc_general_protection+0x4/0x30
[12361.541561]  ? asm_exc_int3+0x40/0x40
[12361.542191]  ? asm_exc_general_protection+0x4/0x30
[12361.542761]  ? asm_exc_int3+0x40/0x40
[12361.543325]  ? asm_exc_general_protection+0x4/0x30
[12361.543909]  ? asm_exc_int3+0x40/0x40
[12361.544481]  ? asm_exc_general_protection+0x4/0x30
[12361.545062]  ? asm_exc_int3+0x40/0x40
[12361.545677]  ? asm_exc_general_protection+0x4/0x30
[12361.546270]  ? asm_exc_int3+0x40/0x40
[12361.546861]  ? asm_exc_general_protection+0x4/0x30
[12361.547466]  ? asm_exc_int3+0x40/0x40
[12361.548071]  ? asm_exc_general_protection+0x4/0x30
[12361.548669]  ? asm_exc_int3+0x40/0x40
[12361.549258]  ? asm_exc_general_protection+0x4/0x30
[12361.549844]  ? asm_exc_int3+0x40/0x40
[12361.550425]  ? asm_exc_general_protection+0x4/0x30
[12361.551007]  ? asm_exc_int3+0x40/0x40
[12361.551594]  ? asm_exc_general_protection+0x4/0x30
[12361.552138]  ? asm_exc_int3+0x40/0x40
[12361.552671]  ? asm_exc_general_protection+0x4/0x30
[12361.553201]  ? asm_exc_int3+0x40/0x40
[12361.553737]  ? asm_exc_general_protection+0x4/0x30
[12361.554226]  ? asm_exc_int3+0x40/0x40
[12361.554706]  ? asm_exc_general_protection+0x4/0x30
[12361.555175]  ? asm_exc_int3+0x40/0x40
[12361.555646]  ? asm_exc_general_protection+0x4/0x30
[12361.556093]  ? asm_exc_int3+0x40/0x40
[12361.556549]  ? asm_exc_general_protection+0x4/0x30
[12361.556992]  ? asm_exc_int3+0x40/0x40
[12361.557420]  ? asm_sysvec_spurious_apic_interrupt+0x20/0x20
[12361.557849]  ? schedule_hrtimeout_range_clock+0xa0/0x120
[12361.558272]  ? __fget_files+0x51/0xc0
[12361.558707]  ? __hrtimer_init+0x110/0x110
[12361.559140]  __fget_light+0x32/0x90
[12361.559560]  __fdget+0x13/0x20
[12361.559989]  do_select+0x302/0x850
[12361.560405]  ? __pollwait+0xe0/0xe0
[12361.560820]  ? __pollwait+0xe0/0xe0
[12361.561261]  ? __pollwait+0xe0/0xe0
[12361.561648]  ? __pollwait+0xe0/0xe0
[12361.562028]  ? cpumask_next_and+0x24/0x30
[12361.562443]  ? update_sg_lb_stats+0x78/0x580
[12361.562857]  ? kfree_skbmem+0x81/0xa0
[12361.563266]  ? update_group_capacity+0x2c/0x2d0
[12361.563725]  ? update_sd_lb_stats.constprop.0+0xe0/0x250
[12361.564130]  ? __check_object_size.part.0+0x3a/0x150
[12361.564518]  ? __check_object_size+0x1d/0x30
[12361.564904]  ? core_sys_select+0x246/0x420
[12361.565288]  core_sys_select+0x1dd/0x420
[12361.565684]  ? ktime_get_ts64+0x55/0x100
[12361.566086]  ? _copy_to_user+0x20/0x30
[12361.566495]  ? poll_select_finish+0x121/0x220
[12361.566899]  ? kvm_clock_get_cycles+0x11/0x20
[12361.567313]  kern_select+0xdd/0x180
[12361.567744]  __x64_sys_select+0x21/0x30
[12361.568148]  do_syscall_64+0x5c/0xc0
[12361.568546]  ? __do_softirq+0xd9/0x2e7
[12361.568947]  ? exit_to_user_mode_prepare+0x37/0xb0
[12361.569349]  ? irqentry_exit_to_user_mode+0x9/0x20
[12361.569753]  ? irqentry_exit+0x1d/0x30
[12361.570154]  ? sysvec_apic_timer_interrupt+0x4e/0x90
[12361.570558]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
[12361.570970] RIP: 0033:0x7f292739f4a3
[12361.571394] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce
4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 c7
d1 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
[12361.572283] RSP: 002b:00007f291afaaf68 EFLAGS: 00000246 ORIG_RAX:
0000000000000017
[12361.572752] RAX: ffffffffffffffda RBX: 00007f291afb8b30 RCX:
00007f292739f4a3
[12361.573227] RDX: 00007f291afab090 RSI: 00007f291afab010 RDI:
0000000000000017
[12361.573706] RBP: 00007f291afab010 R08: 00007f291afaafb0 R09:
0000000000000000
[12361.574182] R10: 00007f291afab110 R11: 0000000000000246 R12:
0000000000000017
[12361.574656] R13: 00007f291afab090 R14: 00007f291afab190 R15:
00007f291afaf1a0
[12361.575144]  </TASK>
[12361.575640] Modules linked in: xt_nat xt_tcpudp veth xt_conntrack
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo nft_counter xt_addrtype
nft_compat nf_tables nfnetlink br_netfilter bridge stp llc overlay sch_fq_codel
joydev input_leds cp210x serio_raw usbserial cdc_acm qemu_fw_cfg mac_hid
dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua efi_pstore pstore_blk mtd
ramoops netconsole reed_solomon ipmi_devintf ipmi_msghandler msr pstore_zone
ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456
async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq
libcrc32c raid1 raid0 multipath linear hid_generic bochs drm_vram_helper
drm_ttm_helper ttm psmouse drm_kms_helper usbhid syscopyarea sysfillrect
virtio_net sysimgblt fb_sys_fops net_failover failover cec hid rc_core
virtio_scsi drm i2c_piix4 pata_acpi floppy
[12361.580240] CR2: 0000000000000000
[12361.580896] ---[ end trace 2596706ab1b3b337 ]---
[12361.581518] RIP: 0010:asm_exc_general_protection+0x4/0x30
[12361.582178] Code: c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7 44 24 78
ff ff ff ff e8 ea 7f f9 ff e9 05 0b 00 00 0f 1f 44 00 00 0f 1f 00 e8 <c8> 09 00
00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7
[12361.583552] RSP: 0018:ffffa7498342f010 EFLAGS: 00010046
[12361.584323] RAX: 0000000000000000 RBX: 0000000000000015 RCX:
0000000000000001
[12361.585078] RDX: ffff8fed49a6ed00 RSI: ffff8fed4b178000 RDI:
ffff8fec418a9400
[12361.585828] RBP: ffffa7498342f8b0 R08: 0000000000000015 R09:
ffff8fed4b1780a8
[12361.586563] R10: 0000000000000000 R11: 0000000000000000 R12:
ffff8fed57e4f180
[12361.587283] R13: 0000000000004000 R14: 0000000000000015 R15:
0000000000000001
[12361.588012] FS:  00007f291afb8b30(0000) GS:ffff8fed7bc00000(0000)
knlGS:0000000000000000
[12361.588742] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12361.589472] CR2: 0000000000000000 CR3: 0000000102ad8000 CR4:
00000000000006f0
[12394.744918] BUG: kernel NULL pointer dereference, address: 0000000000000045
[12394.745723] #PF: supervisor instruction fetch in kernel mode
[12394.746513] #PF: error_code(0x0010) - not-present page
[12394.747292] PGD 0 P4D 0 
[12394.748083] Oops: 0010 [#2] SMP PTI
[12394.748858] CPU: 0 PID: 3950 Comm: mosquitto Tainted: G      D          
5.15.0-46-generic #49-Ubuntu
[12394.749639] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[12394.751251] RIP: 0010:0x45
[12394.752088] Code: Unable to access opcode bytes at RIP 0x1b.
[12394.752907] RSP: 0018:ffffa74980003648 EFLAGS: 00010046
[12394.753731] RAX: 0000000000000045 RBX: ffff8fed57f082c8 RCX:
00000000000000c3
[12394.754576] RDX: 0000000000000010 RSI: 0000000000000001 RDI:
ffffa7498342fa00
[12394.755413] RBP: ffffa74980003690 R08: 00000000000000c3 R09:
ffffa749800036a8
[12394.756244] R10: 00000000b140ae3e R11: ffffa74980003730 R12:
0000000000000000
[12394.757091] R13: 0000000000000000 R14: 0000000000000010 R15:
00000000000000c3
[12394.757972] FS:  00007f250ea9ab48(0000) GS:ffff8fed7bc00000(0000)
knlGS:0000000000000000
[12394.758803] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12394.759627] CR2: 0000000000000045 CR3: 0000000026064000 CR4:
00000000000006f0
[12394.760488] Call Trace:
[12394.761303]  <IRQ>
[12394.762148]  ? __wake_up_common+0x7d/0x140
[12394.762979]  __wake_up_common_lock+0x7c/0xc0
[12394.763834]  __wake_up_sync_key+0x20/0x30
[12394.764666]  sock_def_readable+0x3b/0x80
[12394.765471]  tcp_data_ready+0x31/0xe0
[12394.766280]  tcp_data_queue+0x315/0x610
[12394.767028]  tcp_rcv_established+0x25f/0x6d0
[12394.767799]  tcp_v4_do_rcv+0x155/0x260
[12394.768568]  tcp_v4_rcv+0xd9d/0xed0
[12394.769302]  ip_protocol_deliver_rcu+0x3d/0x240
[12394.770033]  ip_local_deliver_finish+0x48/0x60
[12394.770726]  ip_local_deliver+0xfb/0x110
[12394.771387]  ? ip_protocol_deliver_rcu+0x240/0x240
[12394.772059]  ip_rcv_finish+0xbe/0xd0
[12394.772746]  ip_sabotage_in+0x5f/0x70 [br_netfilter]
[12394.773425]  nf_hook_slow+0x44/0xc0
[12394.774105]  ip_rcv+0x8a/0x190
[12394.774731]  ? ip_sublist_rcv+0x200/0x200
[12394.775349]  __netif_receive_skb_one_core+0x8a/0xa0
[12394.775959]  __netif_receive_skb+0x15/0x60
[12394.776551]  netif_receive_skb+0x43/0x140
[12394.777140]  ? fdb_find_rcu+0xb1/0x130 [bridge]
[12394.777769]  br_pass_frame_up+0x151/0x190 [bridge]
[12394.778382]  br_handle_frame_finish+0x1a5/0x520 [bridge]
[12394.778981]  ? __nf_ct_refresh_acct+0x55/0x60 [nf_conntrack]
[12394.779589]  ? nf_conntrack_tcp_packet+0x61f/0xf60 [nf_conntrack]
[12394.780171]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.780758]  br_nf_hook_thresh+0xe1/0x120 [br_netfilter]
[12394.781337]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.781937]  br_nf_pre_routing_finish+0x16e/0x430 [br_netfilter]
[12394.782517]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.783122]  ? nf_nat_ipv4_pre_routing+0x4a/0xc0 [nf_nat]
[12394.783755]  br_nf_pre_routing+0x245/0x550 [br_netfilter]
[12394.784323]  ? tcp_write_xmit+0x690/0xb10
[12394.784872]  ? br_nf_forward_arp+0x320/0x320 [br_netfilter]
[12394.785424]  br_handle_frame+0x211/0x3c0 [bridge]
[12394.785995]  ? fib_multipath_hash+0x4a0/0x6a0
[12394.786535]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.787075]  ? br_handle_frame_finish+0x520/0x520 [bridge]
[12394.787615]  __netif_receive_skb_core.constprop.0+0x23a/0xef0
[12394.788148]  ? ip_rcv+0x16f/0x190
[12394.788718]  __netif_receive_skb_one_core+0x3f/0xa0
[12394.789306]  __netif_receive_skb+0x15/0x60
[12394.789831]  process_backlog+0x9e/0x170
[12394.790353]  __napi_poll+0x33/0x190
[12394.790860]  net_rx_action+0x126/0x280
[12394.791351]  __do_softirq+0xd9/0x2e7
[12394.791846]  do_softirq+0x7d/0xb0
[12394.792350]  </IRQ>
[12394.792855]  <TASK>
[12394.793338]  __local_bh_enable_ip+0x54/0x60
[12394.793830]  ip_finish_output2+0x1a2/0x580
[12394.794331]  __ip_finish_output+0xb7/0x180
[12394.794823]  ip_finish_output+0x2e/0xc0
[12394.795316]  ip_output+0x78/0x100
[12394.795803]  ? __ip_finish_output+0x180/0x180
[12394.796322]  ip_local_out+0x5e/0x70
[12394.796816]  __ip_queue_xmit+0x180/0x440
[12394.797311]  ? page_counter_cancel+0x2e/0x80
[12394.797820]  ip_queue_xmit+0x15/0x20
[12394.798322]  __tcp_transmit_skb+0x8dd/0xa00
[12394.798813]  tcp_write_xmit+0x3ab/0xb10
[12394.799303]  ? __check_object_size.part.0+0x4a/0x150
[12394.799808]  __tcp_push_pending_frames+0x37/0x100
[12394.800308]  tcp_push+0xd6/0x100
[12394.800806]  tcp_sendmsg_locked+0x883/0xc80
[12394.801303]  tcp_sendmsg+0x2d/0x50
[12394.801793]  inet_sendmsg+0x43/0x80
[12394.802302]  sock_sendmsg+0x62/0x70
[12394.802787]  sock_write_iter+0x93/0xf0
[12394.803277]  new_sync_write+0x193/0x1b0
[12394.803770]  vfs_write+0x1d5/0x270
[12394.804276]  ksys_write+0xb5/0xf0
[12394.804737]  ? syscall_trace_enter.constprop.0+0xa7/0x1c0
[12394.805205]  __x64_sys_write+0x19/0x20
[12394.805665]  do_syscall_64+0x5c/0xc0
[12394.806129]  ? syscall_exit_to_user_mode+0x27/0x50
[12394.806592]  ? do_syscall_64+0x69/0xc0
[12394.807059]  ? do_syscall_64+0x69/0xc0
[12394.807549]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
[12394.808008] RIP: 0033:0x7f250ea593ad
[12394.808499] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce
4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 8a
d2 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
[12394.809442] RSP: 002b:00007ffea08ec188 EFLAGS: 00000246 ORIG_RAX:
0000000000000001
[12394.809945] RAX: ffffffffffffffda RBX: 00007f250ea9ab48 RCX:
00007f250ea593ad
[12394.810440] RDX: 00000000000000a2 RSI: 00007f250e79c810 RDI:
0000000000000009
[12394.810933] RBP: 00007f250e7d7e80 R08: 0000000000000000 R09:
0000000000000000
[12394.811451] R10: 0000000000000000 R11: 0000000000000246 R12:
0000000000000001
[12394.811938] R13: 000000000000009f R14: 0000000000000000 R15:
00007f250e7d7e80
[12394.812449]  </TASK>
[12394.812930] Modules linked in: xt_nat xt_tcpudp veth xt_conntrack
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo nft_counter xt_addrtype
nft_compat nf_tables nfnetlink br_netfilter bridge stp llc overlay sch_fq_codel
joydev input_leds cp210x serio_raw usbserial cdc_acm qemu_fw_cfg mac_hid
dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua efi_pstore pstore_blk mtd
ramoops netconsole reed_solomon ipmi_devintf ipmi_msghandler msr pstore_zone
ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456
async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq
libcrc32c raid1 raid0 multipath linear hid_generic bochs drm_vram_helper
drm_ttm_helper ttm psmouse drm_kms_helper usbhid syscopyarea sysfillrect
virtio_net sysimgblt fb_sys_fops net_failover failover cec hid rc_core
virtio_scsi drm i2c_piix4 pata_acpi floppy
[12394.817596] CR2: 0000000000000045
[12394.818324] ---[ end trace 2596706ab1b3b338 ]---
[12394.819007] RIP: 0010:asm_exc_general_protection+0x4/0x30
[12394.819695] Code: c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7 44 24 78
ff ff ff ff e8 ea 7f f9 ff e9 05 0b 00 00 0f 1f 44 00 00 0f 1f 00 e8 <c8> 09 00
00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7
[12394.821094] RSP: 0018:ffffa7498342f010 EFLAGS: 00010046
[12394.821847] RAX: 0000000000000000 RBX: 0000000000000015 RCX:
0000000000000001
[12394.822622] RDX: ffff8fed49a6ed00 RSI: ffff8fed4b178000 RDI:
ffff8fec418a9400
[12394.823371] RBP: ffffa7498342f8b0 R08: 0000000000000015 R09:
ffff8fed4b1780a8
[12394.824113] R10: 0000000000000000 R11: 0000000000000000 R12:
ffff8fed57e4f180
[12394.824874] R13: 0000000000004000 R14: 0000000000000015 R15:
0000000000000001
[12394.825623] FS:  00007f250ea9ab48(0000) GS:ffff8fed7bc00000(0000)
knlGS:0000000000000000
[12394.826391] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12394.827160] CR2: 0000000000000045 CR3: 0000000026064000 CR4:
00000000000006f0
[12394.827934] Kernel panic - not syncing: Fatal exception in interrupt
[12394.828901] Kernel Offset: 0x8200000 from 0xffffffff81000000 (relocation
range: 0xffffffff80000000-0xffffffffbfffffff)
[12394.829699] ---[ end Kernel panic - not syncing: Fatal exception in
interrupt ]---

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
@ 2022-08-10 13:44 ` bugzilla-daemon
  2022-08-10 14:29 ` bugzilla-daemon
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-10 13:44 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #1 from John Park (jdpark.au@gmail.com) ---
I just experienced another kernel panic on the same VM as above. Log below:

---

[ 7720.804438] BUG: #DF stack guard page was hit at 00000000d9071369 (stack is
000000002e08a9df..0000000059db9875)
[ 7720.804460] stack guard page: 0000 [#1] SMP PTI
[ 7720.804464] CPU: 0 PID: 809 Comm: dockerd Not tainted 5.15.0-46-generic
#49-Ubuntu
[ 7720.804473] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[ 7720.804475] RIP: 0010:error_entry+0xc/0x130
[ 7720.804498] Code: ff 85 db 0f 85 19 fd ff ff 0f 01 f8 e9 11 fd ff ff 66 66
2e 0f 1f 84 00 00 00 00 00 66 90 fc 56 48 8b 74 24 08 48 89 7c 24 08 <52> 51 50
41 50 41 51 41 52 41 53 53 55 41 54 41 55 41 56 41 57 56
[ 7720.804500] RSP: 0000:fffffe0000009000 EFLAGS: 00010087
[ 7720.804503] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804504] RDX: 0000000000000000 RSI: ffffffff87000b48 RDI:
fffffe0000009078
[ 7720.804505] RBP: fffffe0000009068 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804506] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe0000009078
[ 7720.804507] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804508] FS:  00007fd11cff9640(0000) GS:ffff8988bbc00000(0000)
knlGS:0000000000000000
[ 7720.804509] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 7720.804511] CR2: fffffe0000008ff8 CR3: 00000001292be000 CR4:
00000000000006f0
[ 7720.804514] Call Trace:
[ 7720.804519]  <#DF>
[ 7720.804534]  ? exc_page_fault+0x1c/0x170
[ 7720.804538]  asm_exc_page_fault+0x26/0x30
[ 7720.804541] RIP: 0010:exc_page_fault+0x1c/0x170
[ 7720.804543] Code: 07 01 eb c4 e8 b5 01 00 00 cc cc cc cc cc 55 48 89 e5 41
57 41 56 49 89 f6 41 55 41 54 49 89 fc 0f 20 d0 0f 1f 40 00 49 89 c5 <65> 48 8b
04 25 c0 fb 01 00 48 8b 80 98 08 00 00 0f 18 48 78 66 90
[ 7720.804545] RSP: 0000:fffffe0000009128 EFLAGS: 00010087
[ 7720.804546] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804547] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
fffffe0000009158
[ 7720.804548] RBP: fffffe0000009148 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804548] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe0000009158
[ 7720.804549] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804550]  ? native_iret+0x7/0x7
[ 7720.804562]  asm_exc_page_fault+0x26/0x30
[ 7720.804564] RIP: 0010:exc_page_fault+0x1c/0x170
[ 7720.804566] Code: 07 01 eb c4 e8 b5 01 00 00 cc cc cc cc cc 55 48 89 e5 41
57 41 56 49 89 f6 41 55 41 54 49 89 fc 0f 20 d0 0f 1f 40 00 49 89 c5 <65> 48 8b
04 25 c0 fb 01 00 48 8b 80 98 08 00 00 0f 18 48 78 66 90
[ 7720.804567] RSP: 0000:fffffe0000009208 EFLAGS: 00010087
[ 7720.804568] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804569] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
fffffe0000009238
[ 7720.804570] RBP: fffffe0000009228 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804570] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe0000009238
[ 7720.804571] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804592]  ? native_iret+0x7/0x7
[ 7720.804594]  asm_exc_page_fault+0x26/0x30
[ 7720.804597] RIP: 0010:exc_page_fault+0x1c/0x170
[ 7720.804598] Code: 07 01 eb c4 e8 b5 01 00 00 cc cc cc cc cc 55 48 89 e5 41
57 41 56 49 89 f6 41 55 41 54 49 89 fc 0f 20 d0 0f 1f 40 00 49 89 c5 <65> 48 8b
04 25 c0 fb 01 00 48 8b 80 98 08 00 00 0f 18 48 78 66 90
[ 7720.804608] RSP: 0000:fffffe00000092e8 EFLAGS: 00010087
[ 7720.804610] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804610] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
fffffe0000009318
[ 7720.804611] RBP: fffffe0000009308 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804612] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe0000009318
[ 7720.804612] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804614]  ? native_iret+0x7/0x7
[ 7720.804616]  asm_exc_page_fault+0x26/0x30
[ 7720.804618] RIP: 0010:exc_page_fault+0x1c/0x170
[ 7720.804620] Code: 07 01 eb c4 e8 b5 01 00 00 cc cc cc cc cc 55 48 89 e5 41
57 41 56 49 89 f6 41 55 41 54 49 89 fc 0f 20 d0 0f 1f 40 00 49 89 c5 <65> 48 8b
04 25 c0 fb 01 00 48 8b 80 98 08 00 00 0f 18 48 78 66 90
[ 7720.804629] RSP: 0000:fffffe00000093c8 EFLAGS: 00010087
[ 7720.804630] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804631] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
fffffe00000093f8
[ 7720.804632] RBP: fffffe00000093e8 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804632] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe00000093f8
[ 7720.804633] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804634]  ? native_iret+0x7/0x7
[ 7720.804637]  asm_exc_page_fault+0x26/0x30
[ 7720.804639] RIP: 0010:exc_page_fault+0x1c/0x170
[ 7720.804640] Code: 07 01 eb c4 e8 b5 01 00 00 cc cc cc cc cc 55 48 89 e5 41
57 41 56 49 89 f6 41 55 41 54 49 89 fc 0f 20 d0 0f 1f 40 00 49 89 c5 <65> 48 8b
04 25 c0 fb 01 00 48 8b 80 98 08 00 00 0f 18 48 78 66 90
[ 7720.804641] RSP: 0000:fffffe00000094a8 EFLAGS: 00010087
[ 7720.804642] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804643] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
fffffe00000094d8
[ 7720.804643] RBP: fffffe00000094c8 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804644] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe00000094d8
[ 7720.804645] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804645]  ? native_iret+0x7/0x7
[ 7720.804647]  asm_exc_page_fault+0x26/0x30
[ 7720.804649] RIP: 0010:exc_page_fault+0x1c/0x170
[ 7720.804650] Code: 07 01 eb c4 e8 b5 01 00 00 cc cc cc cc cc 55 48 89 e5 41
57 41 56 49 89 f6 41 55 41 54 49 89 fc 0f 20 d0 0f 1f 40 00 49 89 c5 <65> 48 8b
04 25 c0 fb 01 00 48 8b 80 98 08 00 00 0f 18 48 78 66 90
[ 7720.804651] RSP: 0000:fffffe0000009588 EFLAGS: 00010087
[ 7720.804652] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804653] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
fffffe00000095b8
[ 7720.804653] RBP: fffffe00000095a8 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804654] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe00000095b8
[ 7720.804655] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804656]  ? native_iret+0x7/0x7
[ 7720.804657]  asm_exc_page_fault+0x26/0x30
[ 7720.804659] RIP: 0010:irqentry_enter+0xf/0x50
[ 7720.804661] Code: 66 66 2e 0f 1f 84 00 00 00 00 00 c3 cc cc cc cc 66 66 2e
0f 1f 84 00 00 00 00 00 55 48 89 e5 f6 87 88 00 00 00 03 75 17 31 c0 <65> 48 8b
14 25 c0 fb 01 00 f6 42 2c 02 75 13 5d c3 cc cc cc cc e8
[ 7720.804661] RSP: 0000:fffffe0000009668 EFLAGS: 00010046
[ 7720.804662] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.804663] RDX: 0000000000000000 RSI: ffffffff87000aea RDI:
fffffe0000009698
[ 7720.804677] RBP: fffffe0000009668 R08: 0000000000000000 R09:
0000000000000000
[ 7720.804678] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe0000009698
[ 7720.804679] R13: 0000000000000000 R14: 0000000000000000 R15:
0000000000000000
[ 7720.804680]  ? native_iret+0x7/0x7
[ 7720.804681]  ? asm_exc_invalid_op+0xa/0x20
[ 7720.804684]  exc_invalid_op+0x25/0x70
[ 7720.804686]  asm_exc_invalid_op+0x1a/0x20
[ 7720.804688] RIP: 0010:asm_exc_invalid_op+0x0/0x20
[ 7720.804690] Code: 00 00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48
c7 44 24 78 ff ff ff ff e8 ea 7f f9 ff e9 a5 0a 00 00 0f 1f 44 00 00 <0f> 1f 00
6a ff e8 66 09 00 00 48 89 c4 48 8d 6c 24 01 48 89 e7 e8
[ 7720.804691] RSP: 0000:fffffe0000009748 EFLAGS: 00010002
[ 7720.804692] RAX: 000000c0009b6600 RBX: 000000c0008ba750 RCX:
0000000000000028
[ 7720.804693] RDX: 0000000000000090 RSI: 0000000000203000 RDI:
00007fd1244e3138
[ 7720.804694] RBP: 00007fd11cff8af8 R08: 0000000000000003 R09:
00007fd1260cdd3b
[ 7720.804695] R10: 00000000000fbeb0 R11: 00007fd126287fff R12:
000000c0008ba750
[ 7720.804695] R13: 000000c0009b6600 R14: 000000c0009cf860 R15:
0000000000000000
[ 7720.804700] WARNING: stack recursion on stack type 5
[ 7720.804703]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804705]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804707]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804709]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804711]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804722]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804724]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804726]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804728]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804730]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804732]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804734]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804736]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804737]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804739]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804742]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804744]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804746]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804747]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804749]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804751]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804753]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804755]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804757]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804759]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804761]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804763]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804765]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804767]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804769]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804779]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804782]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804783]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804785]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804787]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804789]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804791]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804793]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804796]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804797]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804799]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804801]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804803]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804805]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804807]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804809]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804811]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804813]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804815]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804817]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804818]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804820]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804822]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804824]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804826]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804828]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804830]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804832]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804834]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804836]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804838]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804840]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804841]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804843]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804845]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804847]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804849]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804851]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804853]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804855]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804857]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804859]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804861]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804877]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804878]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804880]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804882]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804895]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804898]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804900]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804902]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804904]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804906]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804908]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804910]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804912]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.804915]  ? asm_exc_stack_segment+0x10/0x30
[ 7720.804917]  ? vsnprintf+0x359/0x550
[ 7720.804935]  ? vsnprintf+0x359/0x550
[ 7720.804936]  ? sprintf+0x56/0x80
[ 7720.804938]  ? __sprint_symbol.constprop.0+0xee/0x110
[ 7720.804964]  ? symbol_string+0xa2/0x140
[ 7720.804966]  ? symbol_string+0xa2/0x140
[ 7720.804968]  ? vsnprintf+0x397/0x550
[ 7720.804969]  ? vscnprintf+0xd/0x40
[ 7720.804970]  ? printk_sprint+0x79/0xa0
[ 7720.804978]  ? pointer+0x230/0x4f0
[ 7720.804980]  ? vsnprintf+0x397/0x550
[ 7720.804982]  ? vscnprintf+0xd/0x40
[ 7720.804983]  ? printk_sprint+0x5e/0xa0
[ 7720.804985]  ? vprintk_store+0x2fe/0x5b0
[ 7720.804987]  ? defer_console_output+0x3b/0x50
[ 7720.804989]  ? vprintk+0x4a/0x90
[ 7720.804991]  ? is_bpf_text_address+0x17/0x30
[ 7720.805002]  ? kernel_text_address+0xf7/0x100
[ 7720.805011]  ? unwind_next_frame.part.0+0x86/0x200
[ 7720.805020]  ? __kernel_text_address+0x12/0x50
[ 7720.805022]  ? show_trace_log_lvl+0x1cb/0x2df
[ 7720.805033]  ? show_trace_log_lvl+0x1cb/0x2df
[ 7720.805035]  ? asm_exc_alignment_check+0x30/0x30
[ 7720.805038]  ? show_regs.part.0+0x23/0x29
[ 7720.805039]  ? __die_body.cold+0x8/0xd
[ 7720.805056]  ? __die+0x2b/0x37
[ 7720.805057]  ? die+0x30/0x60
[ 7720.805067]  ? handle_stack_overflow+0x4e/0x60
[ 7720.805069]  ? exc_double_fault+0x155/0x190
[ 7720.805071]  ? asm_exc_double_fault+0x1e/0x30
[ 7720.805073]  ? native_iret+0x7/0x7
[ 7720.805074]  ? asm_exc_page_fault+0x8/0x30
[ 7720.805077]  ? error_entry+0xc/0x130
[ 7720.805078]  </#DF>
[ 7720.805083] Modules linked in: tcp_diag udp_diag inet_diag veth xt_nat
xt_tcpudp xt_conntrack nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo nft_counter
xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge stp llc overlay
sch_fq_codel cp210x input_leds usbserial cdc_acm joydev serio_raw mac_hid
qemu_fw_cfg dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua mtd pstore_blk
ramoops netconsole pstore_zone reed_solomon ipmi_devintf ipmi_msghandler msr
efi_pstore ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress
raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor
raid6_pq libcrc32c raid1 raid0 multipath linear bochs drm_vram_helper
drm_ttm_helper ttm drm_kms_helper hid_generic syscopyarea sysfillrect sysimgblt
fb_sys_fops cec usbhid rc_core virtio_net net_failover hid drm psmouse
virtio_scsi failover i2c_piix4 pata_acpi floppy
[ 7720.901966] ---[ end trace b7f1a532a0e81c78 ]---
[ 7720.901991] RIP: 0010:error_entry+0xc/0x130
[ 7720.901998] Code: ff 85 db 0f 85 19 fd ff ff 0f 01 f8 e9 11 fd ff ff 66 66
2e 0f 1f 84 00 00 00 00 00 66 90 fc 56 48 8b 74 24 08 48 89 7c 24 08 <52> 51 50
41 50 41 51 41 52 41 53 53 55 41 54 41 55 41 56 41 57 56
[ 7720.901999] RSP: 0000:fffffe0000009000 EFLAGS: 00010087
[ 7720.902001] RAX: 000000000001fbc0 RBX: 0000000000000000 RCX:
ffffffff87001187
[ 7720.902002] RDX: 0000000000000000 RSI: ffffffff87000b48 RDI:
fffffe0000009078
[ 7720.902003] RBP: fffffe0000009068 R08: 0000000000000000 R09:
0000000000000000
[ 7720.902004] R10: 0000000000000000 R11: 0000000000000000 R12:
fffffe0000009078
[ 7720.902004] R13: 000000000001fbc0 R14: 0000000000000000 R15:
0000000000000000
[ 7720.902005] FS:  00007fd11cff9640(0000) GS:ffff8988bbc00000(0000)
knlGS:0000000000000000
[ 7720.902007] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 7720.902008] CR2: fffffe0000008ff8 CR3: 00000001292be000 CR4:
00000000000006f0
[ 7720.902013] Kernel panic - not syncing: Fatal exception in interrupt
[ 7720.902108] Kernel Offset: 0x5200000 from 0xffffffff81000000 (relocation
range: 0xffffffff80000000-0xffffffffbfffffff)


* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
  2022-08-10 13:44 ` [Bug 216349] " bugzilla-daemon
@ 2022-08-10 14:29 ` bugzilla-daemon
  2022-08-10 15:43 ` bugzilla-daemon
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-10 14:29 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

Dr. David Alan Gilbert (dgilbert@redhat.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |dgilbert@redhat.com

--- Comment #2 from Dr. David Alan Gilbert (dgilbert@redhat.com) ---
Are you doing live migration? And if so, between which host CPUs?


* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
  2022-08-10 13:44 ` [Bug 216349] " bugzilla-daemon
  2022-08-10 14:29 ` bugzilla-daemon
@ 2022-08-10 15:43 ` bugzilla-daemon
  2022-08-11  5:50 ` bugzilla-daemon
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-10 15:43 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #3 from John Park (jdpark.au@gmail.com) ---
(In reply to Dr. David Alan Gilbert from comment #2)
> Are you doing live migration? And if so, between which host CPUs?

Thanks for your response. No, I'm not performing any migration. The VM is
static and only runs Docker. The host CPU is an Intel N5105. It isn't doing
anything too intensive, with the VM CPU sitting at around 10% and the host
CPU at around 5% (powersave governor set).

The CPU governor was initially set to 'performance', which I've since changed
to 'powersave' using the intel_pstate driver, but this hasn't improved the
situation.
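
For reference, this is roughly how I inspected and changed the governor on
the host via sysfs:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver    # -> intel_pstate
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor  # was 'performance'
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor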

There are quite a few people on the Proxmox forums (links below) running the
same CPU who have encountered the same issue. Could there be a problem
between the kernel and this particular CPU?

https://forum.proxmox.com/threads/vm-freezes-irregularly.111494/
https://forum.proxmox.com/threads/vms-freezing-randomly.113037/
https://forum.proxmox.com/threads/ubuntu-20-04-und-22-04-vms-bleiben-zuf%C3%A4llig-h%C3%A4ngen-bsd-vms-nicht.113358/
https://forum.proxmox.com/threads/ubuntu-20-04-04-machine-freezes.112507/
https://forum.proxmox.com/threads/proxmox-vms-crashing-freezing-on-intel-n5105-cpu.113177/
https://forum.proxmox.com/threads/pfsense-vm-keeps-freezing-crashing.112439/


* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (2 preceding siblings ...)
  2022-08-10 15:43 ` bugzilla-daemon
@ 2022-08-11  5:50 ` bugzilla-daemon
  2022-08-11  5:52 ` bugzilla-daemon
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-11  5:50 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #4 from John Park (jdpark.au@gmail.com) ---
I'm not sure how verbose I should be in this bug report, but it seems worth
mentioning that, in an attempt to troubleshoot, I changed the machine type in
Proxmox from i440fx to q35. This made no difference: I got another kernel
panic today using q35. Unfortunately, I forgot to update the interface in my
netconsole config (presumably the NIC name changed under the new machine
type), so I didn't capture a log of that panic. I've taken a screenshot of
the console and uploaded it anyway.


* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (3 preceding siblings ...)
  2022-08-11  5:50 ` bugzilla-daemon
@ 2022-08-11  5:52 ` bugzilla-daemon
  2022-08-11  8:18 ` bugzilla-daemon
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-11  5:52 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #5 from John Park (jdpark.au@gmail.com) ---
Created attachment 301550
  --> https://bugzilla.kernel.org/attachment.cgi?id=301550&action=edit
Console output from kernel panic after using q35 machine type


* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (4 preceding siblings ...)
  2022-08-11  5:52 ` bugzilla-daemon
@ 2022-08-11  8:18 ` bugzilla-daemon
  2022-08-11  8:29 ` bugzilla-daemon
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-11  8:18 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #6 from Dr. David Alan Gilbert (dgilbert@redhat.com) ---
Hi John,
  Thanks - hmm, OK: if it's not migration, it's unlikely to be the issue I'm
working on. I doubt changing to q35 will make much odds.


* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (5 preceding siblings ...)
  2022-08-11  8:18 ` bugzilla-daemon
@ 2022-08-11  8:29 ` bugzilla-daemon
  2022-08-13  2:55 ` bugzilla-daemon
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-11  8:29 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #7 from John Park (jdpark.au@gmail.com) ---
(In reply to Dr. David Alan Gilbert from comment #6)
> Hi John,
>   Thanks - hmm, OK: if it's not migration, it's unlikely to be the issue I'm
> working on. I doubt changing to q35 will make much odds.

OK, thanks anyway, David.

Is there anything else I can do to help diagnose this issue? Should I keep
uploading logs of kernel panics from the VM? I apologise if these are stupid
questions, but I haven't filed a kernel bug report before and I want to
provide as much information as possible.

There are quite a few people affected by this issue, as demonstrated in the
forum threads linked above.


* [Bug 216349] Kernel panic in an Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in an Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (6 preceding siblings ...)
  2022-08-11  8:29 ` bugzilla-daemon
@ 2022-08-13  2:55 ` bugzilla-daemon
  2022-08-13  2:59 ` bugzilla-daemon
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-13  2:55 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #8 from John Park (jdpark.au@gmail.com) ---
Another kernel panic and VM freeze occurred today. Log below:

---

[32846.996729] invalid opcode: 0000 [#1] SMP PTI
[32846.997520] CPU: 0 PID: 2951 Comm: cron Not tainted 5.15.0-46-generic
#49-Ubuntu
[32846.998310] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[32847.000030] RIP: 0010:asm_exc_page_fault+0x1/0x30
[32847.001067] Code: 28 ff 74 24 28 ff 74 24 28 ff 74 24 28 e8 27 09 00 00 48
89 c4 48 8d 6c 24 01 48 89 e7 e8 b7 86 f9 ff e9 42 0a 00 00 66 90 0f <1f> 00 e8
08 09 00 00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24
[32847.003280] RSP: 0018:ffffb688031c7b08 EFLAGS: 00010086
[32847.004521] RAX: ffffffffc07f5360 RBX: 0000000000000000 RCX:
0000000000000002
[32847.005772] RDX: 0000000000000081 RSI: ffff9cb54955cb60 RDI:
ffffffff91681300
[32847.007063] RBP: ffffb688031c7ba8 R08: 0000000000000000 R09:
0000000000000000
[32847.008418] R10: ffff9cb54eab3000 R11: 0000000000000000 R12:
ffff9cb54955cb60
[32847.009782] R13: 0000000000000081 R14: ffffffff91681300 R15:
d0d0d0d0d0d0d0d0
[32847.011199] FS:  00007fe632c78840(0000) GS:ffff9cb57bc00000(0000)
knlGS:0000000000000000
[32847.012623] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[32847.013711] CR2: ffffffffc63418c3 CR3: 0000000001c14000 CR4:
00000000000006f0
[32847.014847] Call Trace:
[32847.015811]  <TASK>
[32847.016616]  ? asm_sysvec_spurious_apic_interrupt+0x20/0x20
[32847.017488]  ? ovl_verify_inode+0xd0/0xd0 [overlay]
[32847.018341]  ? ovl_verify_inode+0xd0/0xd0 [overlay]
[32847.018939]  ? inode_permission+0xef/0x1a0
[32847.019561]  link_path_walk.part.0.constprop.0+0xc9/0x370
[32847.020165]  ? path_init+0x2c0/0x3f0
[32847.020779]  path_lookupat+0x3e/0x1c0
[32847.021416]  ? generic_fillattr+0x4e/0xe0
[32847.021941]  filename_lookup+0xcf/0x1d0
[32847.022468]  ? __check_object_size+0x1d/0x30
[32847.023003]  ? strncpy_from_user+0x44/0x150
[32847.023583]  ? getname_flags.part.0+0x4c/0x1b0
[32847.024200]  user_path_at_empty+0x3f/0x60
[32847.024880]  vfs_statx+0x7a/0x130
[32847.025450]  __do_sys_newstat+0x3e/0x80
[32847.026194]  ? __secure_computing+0xa9/0x120
[32847.026825]  ? syscall_trace_enter.constprop.0+0xa7/0x1c0
[32847.027410]  __x64_sys_newstat+0x16/0x20
[32847.028025]  do_syscall_64+0x5c/0xc0
[32847.028621]  ? syscall_exit_to_user_mode+0x27/0x50
[32847.029197]  ? __x64_sys_newstat+0x16/0x20
[32847.029748]  ? do_syscall_64+0x69/0xc0
[32847.030298]  ? do_syscall_64+0x69/0xc0
[32847.030853]  ? syscall_exit_to_user_mode+0x27/0x50
[32847.031415]  ? __x64_sys_newstat+0x16/0x20
[32847.032002]  ? do_syscall_64+0x69/0xc0
[32847.032591]  ? do_syscall_64+0x69/0xc0
[32847.033193]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
[32847.033786] RIP: 0033:0x7fe632e643a6
[32847.034390] Code: 00 00 75 05 48 83 c4 18 c3 e8 66 f3 01 00 66 0f 1f 44 00
00 41 89 f8 48 89 f7 48 89 d6 41 83 f8 01 77 29 b8 04 00 00 00 0f 05 <48> 3d 00
f0 ff ff 77 02 c3 90 48 8b 15 b9 fa 0c 00 f7 d8 64 89 02
[32847.035617] RSP: 002b:00007ffdce8c1f98 EFLAGS: 00000246 ORIG_RAX:
0000000000000004
[32847.036257] RAX: ffffffffffffffda RBX: 00005558fae76690 RCX:
00007fe632e643a6
[32847.036879] RDX: 00007ffdce8c2190 RSI: 00007ffdce8c2190 RDI:
00007ffdce8c2320
[32847.037492] RBP: 00007ffdce8c4380 R08: 0000000000000001 R09:
0000000000000012
[32847.038105] R10: 00005558fae764e8 R11: 0000000000000246 R12:
00007ffdce8c1fe0
[32847.038699] R13: 00007ffdce8c2320 R14: 00007ffdce8c2070 R15:
00005558fa816186
[32847.039280]  </TASK>
[32847.039875] Modules linked in: tls xt_nat xt_tcpudp veth xt_conntrack
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo nft_counter xt_addrtype
nft_compat nf_tables nfnetlink br_netfilter bridge stp llc overlay sch_fq_codel
joydev input_leds serio_raw qemu_fw_cfg mac_hid ramoops dm_multipath
scsi_dh_rdac scsi_dh_emc scsi_dh_alua reed_solomon pstore_blk mtd netconsole
pstore_zone ipmi_devintf ipmi_msghandler efi_pstore msr ip_tables x_tables
autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov
async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0
multipath linear bochs drm_vram_helper drm_ttm_helper ttm drm_kms_helper
syscopyarea hid_generic sysfillrect sysimgblt xhci_pci fb_sys_fops cec rc_core
psmouse xhci_pci_renesas virtio_net usbhid net_failover hid failover drm
virtio_scsi i2c_piix4 pata_acpi floppy
[32847.045209] ---[ end trace 3eb46e5a4c095231 ]---
[32847.045918] RIP: 0010:asm_exc_page_fault+0x1/0x30
[32847.046626] Code: 28 ff 74 24 28 ff 74 24 28 ff 74 24 28 e8 27 09 00 00 48
89 c4 48 8d 6c 24 01 48 89 e7 e8 b7 86 f9 ff e9 42 0a 00 00 66 90 0f <1f> 00 e8
08 09 00 00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24
[32847.048094] RSP: 0018:ffffb688031c7b08 EFLAGS: 00010086
[32847.048831] RAX: ffffffffc07f5360 RBX: 0000000000000000 RCX:
0000000000000002
[32847.049554] RDX: 0000000000000081 RSI: ffff9cb54955cb60 RDI:
ffffffff91681300
[32847.050314] RBP: ffffb688031c7ba8 R08: 0000000000000000 R09:
0000000000000000
[32847.051091] R10: ffff9cb54eab3000 R11: 0000000000000000 R12:
ffff9cb54955cb60
[32847.051868] R13: 0000000000000081 R14: ffffffff91681300 R15:
d0d0d0d0d0d0d0d0
[32847.052613] FS:  00007fe632c78840(0000) GS:ffff9cb57bc00000(0000)
knlGS:0000000000000000
[32847.053373] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[32847.054163] CR2: ffffffffc63418c3 CR3: 0000000001c14000 CR4:
00000000000006f0
[32854.484979] systemd-journald[352]: Sent WATCHDOG=1 notification.
[32907.065589] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[32907.068463] rcu:     0-...!: (0 ticks this GP) idle=3b4/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[32907.071403]  (detected by 1, t=15002 jiffies, g=339469, q=958)
[32907.074366] Sending NMI from CPU 1 to CPUs 0:
[32907.077480] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[32907.078309] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339469 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[32907.083751] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[32907.084959] rcu: rcu_sched kthread starved for 15002 jiffies! g339469 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[32907.086186] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[32907.087160] rcu: RCU grace-period kthread stack dump:
[32907.088098] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[32907.089034] Call Trace:
[32907.089950]  <TASK>
[32907.090876]  __schedule+0x23d/0x590
[32907.091804]  schedule+0x4e/0xc0
[32907.092731]  schedule_timeout+0x87/0x140
[32907.093631]  ? __bpf_trace_tick_stop+0x20/0x20
[32907.094525]  rcu_gp_fqs_loop+0xe5/0x330
[32907.095402]  rcu_gp_kthread+0xa7/0x130
[32907.096269]  ? rcu_gp_init+0x5f0/0x5f0
[32907.097173]  kthread+0x12a/0x150
[32907.098040]  ? set_kthread_struct+0x50/0x50
[32907.098891]  ret_from_fork+0x22/0x30
[32907.099759]  </TASK>
[32907.100599] rcu: Stack dump where RCU GP kthread last ran:
[32907.101423] Sending NMI from CPU 1 to CPUs 0:
[32907.102298] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33087.076178] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33087.076988] rcu:     0-...!: (0 ticks this GP) idle=770/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[33087.077752]  (detected by 1, t=60007 jiffies, g=339469, q=4980)
[33087.079970] Sending NMI from CPU 1 to CPUs 0:
[33087.082000] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33087.082923] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339469 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33087.086526] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33087.087490] rcu: rcu_sched kthread starved for 45005 jiffies! g339469 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33087.088245] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33087.089024] rcu: RCU grace-period kthread stack dump:
[33087.089799] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33087.090701] Call Trace:
[33087.091595]  <TASK>
[33087.092463]  __schedule+0x23d/0x590
[33087.093351]  schedule+0x4e/0xc0
[33087.094150]  schedule_timeout+0x87/0x140
[33087.094925]  ? __bpf_trace_tick_stop+0x20/0x20
[33087.095704]  rcu_gp_fqs_loop+0xe5/0x330
[33087.096590]  rcu_gp_kthread+0xa7/0x130
[33087.097453]  ? rcu_gp_init+0x5f0/0x5f0
[33087.098198]  kthread+0x12a/0x150
[33087.098964]  ? set_kthread_struct+0x50/0x50
[33087.099761]  ret_from_fork+0x22/0x30
[33087.100502]  </TASK>
[33087.101186] rcu: Stack dump where RCU GP kthread last ran:
[33087.101855] Sending NMI from CPU 1 to CPUs 0:
[33087.102544] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33147.081021] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33147.081747] rcu:     0-...!: (0 ticks this GP) idle=884/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[33147.082426]  (detected by 1, t=15002 jiffies, g=339473, q=5992)
[33147.083144] Sending NMI from CPU 1 to CPUs 0:
[33147.083899] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33147.084830] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339473 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33147.086851] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33147.087588] rcu: rcu_sched kthread starved for 15002 jiffies! g339473 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33147.088368] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33147.089176] rcu: RCU grace-period kthread stack dump:
[33147.089947] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33147.090709] Call Trace:
[33147.091463]  <TASK>
[33147.092241]  __schedule+0x23d/0x590
[33147.092969]  schedule+0x4e/0xc0
[33147.093692]  schedule_timeout+0x87/0x140
[33147.094410]  ? __bpf_trace_tick_stop+0x20/0x20
[33147.095141]  rcu_gp_fqs_loop+0xe5/0x330
[33147.095881]  rcu_gp_kthread+0xa7/0x130
[33147.096649]  ? rcu_gp_init+0x5f0/0x5f0
[33147.097360]  kthread+0x12a/0x150
[33147.098055]  ? set_kthread_struct+0x50/0x50
[33147.098780]  ret_from_fork+0x22/0x30
[33147.099463]  </TASK>
[33147.100156] rcu: Stack dump where RCU GP kthread last ran:
[33147.100804] Sending NMI from CPU 1 to CPUs 0:
[33147.101514] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33327.091939] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33327.094235] rcu:     0-...!: (0 ticks this GP) idle=bac/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[33327.096512]  (detected by 1, t=60007 jiffies, g=339473, q=8311)
[33327.098758] Sending NMI from CPU 1 to CPUs 0:
[33327.101146] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33327.102058] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339473 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33327.108229] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33327.109665] rcu: rcu_sched kthread starved for 45005 jiffies! g339473 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33327.111089] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33327.112187] rcu: RCU grace-period kthread stack dump:
[33327.113164] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33327.114173] Call Trace:
[33327.115034]  <TASK>
[33327.115788]  __schedule+0x23d/0x590
[33327.116548]  schedule+0x4e/0xc0
[33327.117302]  schedule_timeout+0x87/0x140
[33327.118062]  ? __bpf_trace_tick_stop+0x20/0x20
[33327.118819]  rcu_gp_fqs_loop+0xe5/0x330
[33327.119557]  rcu_gp_kthread+0xa7/0x130
[33327.120284]  ? rcu_gp_init+0x5f0/0x5f0
[33327.120994]  kthread+0x12a/0x150
[33327.121689]  ? set_kthread_struct+0x50/0x50
[33327.122426]  ret_from_fork+0x22/0x30
[33327.123117]  </TASK>
[33327.123772] rcu: Stack dump where RCU GP kthread last ran:
[33327.124418] Sending NMI from CPU 1 to CPUs 0:
[33327.125112] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33387.096689] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33387.099051] rcu:     0-...!: (0 ticks this GP) idle=cc8/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[33387.101398]  (detected by 1, t=15002 jiffies, g=339477, q=8413)
[33387.103719] Sending NMI from CPU 1 to CPUs 0:
[33387.105931] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33387.106831] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339477 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33387.110589] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33387.111627] rcu: rcu_sched kthread starved for 15002 jiffies! g339477 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33387.112383] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33387.113148] rcu: RCU grace-period kthread stack dump:
[33387.113919] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33387.114682] Call Trace:
[33387.115475]  <TASK>
[33387.116259]  __schedule+0x23d/0x590
[33387.117046]  schedule+0x4e/0xc0
[33387.117828]  schedule_timeout+0x87/0x140
[33387.118605]  ? __bpf_trace_tick_stop+0x20/0x20
[33387.119382]  rcu_gp_fqs_loop+0xe5/0x330
[33387.120166]  rcu_gp_kthread+0xa7/0x130
[33387.120920]  ? rcu_gp_init+0x5f0/0x5f0
[33387.121659]  kthread+0x12a/0x150
[33387.122350]  ? set_kthread_struct+0x50/0x50
[33387.123068]  ret_from_fork+0x22/0x30
[33387.123799]  </TASK>
[33387.124483] rcu: Stack dump where RCU GP kthread last ran:
[33387.125155] Sending NMI from CPU 1 to CPUs 0:
[33387.125859] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33567.107272] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33567.108086] rcu:     0-...!: (0 ticks this GP) idle=ff8/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[33567.108782]  (detected by 1, t=60007 jiffies, g=339477, q=8523)
[33567.109471] Sending NMI from CPU 1 to CPUs 0:
[33567.110180] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33567.110205] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339477 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33567.112387] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33567.113184] rcu: rcu_sched kthread starved for 45005 jiffies! g339477 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33567.113997] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33567.114835] rcu: RCU grace-period kthread stack dump:
[33567.115616] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33567.116456] Call Trace:
[33567.117284]  <TASK>
[33567.118105]  __schedule+0x23d/0x590
[33567.118928]  schedule+0x4e/0xc0
[33567.119758]  schedule_timeout+0x87/0x140
[33567.120548]  ? __bpf_trace_tick_stop+0x20/0x20
[33567.121340]  rcu_gp_fqs_loop+0xe5/0x330
[33567.122137]  rcu_gp_kthread+0xa7/0x130
[33567.123071]  ? rcu_gp_init+0x5f0/0x5f0
[33567.123850]  kthread+0x12a/0x150
[33567.124594]  ? set_kthread_struct+0x50/0x50
[33567.125333]  ret_from_fork+0x22/0x30
[33567.126093]  </TASK>
[33567.126838] rcu: Stack dump where RCU GP kthread last ran:
[33567.127558] Sending NMI from CPU 1 to CPUs 0:
[33567.128290] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33627.112252] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33627.114524] rcu:     0-...!: (0 ticks this GP) idle=12c/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[33627.116778]  (detected by 1, t=15002 jiffies, g=339481, q=4061)
[33627.118964] Sending NMI from CPU 1 to CPUs 0:
[33627.121277] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33627.122219] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339481 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33627.127242] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33627.128561] rcu: rcu_sched kthread starved for 15002 jiffies! g339481 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33627.129605] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33627.130497] rcu: RCU grace-period kthread stack dump:
[33627.131431] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33627.132336] Call Trace:
[33627.133137]  <TASK>
[33627.133914]  __schedule+0x23d/0x590
[33627.134697]  schedule+0x4e/0xc0
[33627.135491]  schedule_timeout+0x87/0x140
[33627.136266]  ? __bpf_trace_tick_stop+0x20/0x20
[33627.137041]  rcu_gp_fqs_loop+0xe5/0x330
[33627.137815]  rcu_gp_kthread+0xa7/0x130
[33627.138555]  ? rcu_gp_init+0x5f0/0x5f0
[33627.139304]  kthread+0x12a/0x150
[33627.140026]  ? set_kthread_struct+0x50/0x50
[33627.140753]  ret_from_fork+0x22/0x30
[33627.141451]  </TASK>
[33627.142133] rcu: Stack dump where RCU GP kthread last ran:
[33627.142774] Sending NMI from CPU 1 to CPUs 0:
[33627.143520] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33687.117032] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33687.117692] rcu:     0-...!: (0 ticks this GP) idle=240/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[33687.118360]  (detected by 1, t=15002 jiffies, g=339485, q=792)
[33687.119007] Sending NMI from CPU 1 to CPUs 0:
[33687.119749] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33687.120680] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339485 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33687.122730] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33687.123524] rcu: rcu_sched kthread starved for 15002 jiffies! g339485 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33687.124318] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33687.125097] rcu: RCU grace-period kthread stack dump:
[33687.125902] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33687.126703] Call Trace:
[33687.127500]  <TASK>
[33687.128318]  __schedule+0x23d/0x590
[33687.129116]  schedule+0x4e/0xc0
[33687.129908]  schedule_timeout+0x87/0x140
[33687.130745]  ? __bpf_trace_tick_stop+0x20/0x20
[33687.131535]  rcu_gp_fqs_loop+0xe5/0x330
[33687.132386]  rcu_gp_kthread+0xa7/0x130
[33687.133149]  ? rcu_gp_init+0x5f0/0x5f0
[33687.133902]  kthread+0x12a/0x150
[33687.134638]  ? set_kthread_struct+0x50/0x50
[33687.135370]  ret_from_fork+0x22/0x30
[33687.136120]  </TASK>
[33687.136813] rcu: Stack dump where RCU GP kthread last ran:
[33687.137500] Sending NMI from CPU 1 to CPUs 0:
[33687.138285] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33867.127694] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33867.128396] rcu:     0-...!: (0 ticks this GP) idle=538/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[33867.129073]  (detected by 1, t=60007 jiffies, g=339485, q=894)
[33867.129719] Sending NMI from CPU 1 to CPUs 0:
[33867.130464] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33867.130491] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339485 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33867.132521] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33867.133243] rcu: rcu_sched kthread starved for 45005 jiffies! g339485 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33867.133982] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33867.134745] rcu: RCU grace-period kthread stack dump:
[33867.135493] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33867.136269] Call Trace:
[33867.137090]  <TASK>
[33867.137857]  __schedule+0x23d/0x590
[33867.138674]  schedule+0x4e/0xc0
[33867.139458]  schedule_timeout+0x87/0x140
[33867.140238]  ? __bpf_trace_tick_stop+0x20/0x20
[33867.141035]  rcu_gp_fqs_loop+0xe5/0x330
[33867.141820]  rcu_gp_kthread+0xa7/0x130
[33867.142602]  ? rcu_gp_init+0x5f0/0x5f0
[33867.143354]  kthread+0x12a/0x150
[33867.144085]  ? set_kthread_struct+0x50/0x50
[33867.144812]  ret_from_fork+0x22/0x30
[33867.145542]  </TASK>
[33867.146236] rcu: Stack dump where RCU GP kthread last ran:
[33867.146933] Sending NMI from CPU 1 to CPUs 0:
[33867.147763] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33927.132635] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[33927.133318] rcu:     0-...!: (0 ticks this GP) idle=654/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[33927.133978]  (detected by 1, t=15002 jiffies, g=339489, q=740)
[33927.134619] Sending NMI from CPU 1 to CPUs 0:
[33927.135332] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[33927.136309] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[33927.138323] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[33927.139085] rcu: rcu_sched kthread starved for 15002 jiffies! g339489 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[33927.139888] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[33927.140659] rcu: RCU grace-period kthread stack dump:
[33927.141395] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[33927.142187] Call Trace:
[33927.142968]  <TASK>
[33927.143794]  __schedule+0x23d/0x590
[33927.144580]  schedule+0x4e/0xc0
[33927.145354]  schedule_timeout+0x87/0x140
[33927.146160]  ? __bpf_trace_tick_stop+0x20/0x20
[33927.146908]  rcu_gp_fqs_loop+0xe5/0x330
[33927.147650]  rcu_gp_kthread+0xa7/0x130
[33927.148371]  ? rcu_gp_init+0x5f0/0x5f0
[33927.149077]  kthread+0x12a/0x150
[33927.149767]  ? set_kthread_struct+0x50/0x50
[33927.150454]  ret_from_fork+0x22/0x30
[33927.151133]  </TASK>
[33927.151792] rcu: Stack dump where RCU GP kthread last ran:
[33927.152435] Sending NMI from CPU 1 to CPUs 0:
[33927.153130] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34107.143372] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34107.145414] rcu:     0-...!: (0 ticks this GP) idle=958/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[34107.147516]  (detected by 1, t=60007 jiffies, g=339489, q=843)
[34107.149668] Sending NMI from CPU 1 to CPUs 0:
[34107.152012] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34107.152867] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34107.156675] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34107.157822] rcu: rcu_sched kthread starved for 45005 jiffies! g339489 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34107.158981] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34107.159712] rcu: RCU grace-period kthread stack dump:
[34107.160447] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34107.161207] Call Trace:
[34107.161956]  <TASK>
[34107.162713]  __schedule+0x23d/0x590
[34107.163467]  schedule+0x4e/0xc0
[34107.164214]  schedule_timeout+0x87/0x140
[34107.165048]  ? __bpf_trace_tick_stop+0x20/0x20
[34107.165825]  rcu_gp_fqs_loop+0xe5/0x330
[34107.166600]  rcu_gp_kthread+0xa7/0x130
[34107.167353]  ? rcu_gp_init+0x5f0/0x5f0
[34107.168126]  kthread+0x12a/0x150
[34107.168850]  ? set_kthread_struct+0x50/0x50
[34107.169571]  ret_from_fork+0x22/0x30
[34107.170294]  </TASK>
[34107.170947] rcu: Stack dump where RCU GP kthread last ran:
[34107.171620] Sending NMI from CPU 1 to CPUs 0:
[34107.172354] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34167.148166] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34167.148854] rcu:     0-...!: (0 ticks this GP) idle=a64/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[34167.149539]  (detected by 1, t=15002 jiffies, g=339493, q=825)
[34167.150215] Sending NMI from CPU 1 to CPUs 0:
[34167.150972] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34167.151899] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339493 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34167.153969] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34167.154714] rcu: rcu_sched kthread starved for 15002 jiffies! g339493 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34167.155504] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34167.156280] rcu: RCU grace-period kthread stack dump:
[34167.157058] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34167.157859] Call Trace:
[34167.158650]  <TASK>
[34167.159459]  __schedule+0x23d/0x590
[34167.160254]  schedule+0x4e/0xc0
[34167.161039]  schedule_timeout+0x87/0x140
[34167.161851]  ? __bpf_trace_tick_stop+0x20/0x20
[34167.162605]  rcu_gp_fqs_loop+0xe5/0x330
[34167.163393]  rcu_gp_kthread+0xa7/0x130
[34167.164125]  ? rcu_gp_init+0x5f0/0x5f0
[34167.164841]  kthread+0x12a/0x150
[34167.165590]  ? set_kthread_struct+0x50/0x50
[34167.166299]  ret_from_fork+0x22/0x30
[34167.167013]  </TASK>
[34167.167685] rcu: Stack dump where RCU GP kthread last ran:
[34167.168346] Sending NMI from CPU 1 to CPUs 0:
[34167.169070] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34347.159098] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34347.163576] rcu:     0-...!: (0 ticks this GP) idle=d6a/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[34347.165841]  (detected by 1, t=60007 jiffies, g=339493, q=955)
[34347.168124] Sending NMI from CPU 1 to CPUs 0:
[34347.170497] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34347.171412] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339493 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34347.176473] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34347.177835] rcu: rcu_sched kthread starved for 45005 jiffies! g339493 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34347.178836] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34347.179849] rcu: RCU grace-period kthread stack dump:
[34347.180798] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34347.181587] Call Trace:
[34347.182355]  <TASK>
[34347.183131]  __schedule+0x23d/0x590
[34347.183880]  schedule+0x4e/0xc0
[34347.184623]  schedule_timeout+0x87/0x140
[34347.185368]  ? __bpf_trace_tick_stop+0x20/0x20
[34347.186160]  rcu_gp_fqs_loop+0xe5/0x330
[34347.186897]  rcu_gp_kthread+0xa7/0x130
[34347.187619]  ? rcu_gp_init+0x5f0/0x5f0
[34347.188336]  kthread+0x12a/0x150
[34347.189026]  ? set_kthread_struct+0x50/0x50
[34347.189754]  ret_from_fork+0x22/0x30
[34347.190435]  </TASK>
[34347.191087] rcu: Stack dump where RCU GP kthread last ran:
[34347.191731] Sending NMI from CPU 1 to CPUs 0:
[34347.192425] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34407.163845] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34407.166112] rcu:     0-...!: (0 ticks this GP) idle=e7a/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[34407.168371]  (detected by 1, t=15002 jiffies, g=339497, q=838)
[34407.170692] Sending NMI from CPU 1 to CPUs 0:
[34407.173003] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34407.173907] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339497 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34407.178683] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34407.179735] rcu: rcu_sched kthread starved for 15002 jiffies! g339497 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34407.180810] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34407.181746] rcu: RCU grace-period kthread stack dump:
[34407.182496] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34407.183280] Call Trace:
[34407.184053]  <TASK>
[34407.184806]  __schedule+0x23d/0x590
[34407.185576]  schedule+0x4e/0xc0
[34407.186328]  schedule_timeout+0x87/0x140
[34407.187086]  ? __bpf_trace_tick_stop+0x20/0x20
[34407.187995]  rcu_gp_fqs_loop+0xe5/0x330
[34407.188739]  rcu_gp_kthread+0xa7/0x130
[34407.189595]  ? rcu_gp_init+0x5f0/0x5f0
[34407.190341]  kthread+0x12a/0x150
[34407.191075]  ? set_kthread_struct+0x50/0x50
[34407.191801]  ret_from_fork+0x22/0x30
[34407.192517]  </TASK>
[34407.193203] rcu: Stack dump where RCU GP kthread last ran:
[34407.193879] Sending NMI from CPU 1 to CPUs 0:
[34407.194572] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34587.174533] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34587.176807] rcu:     0-...!: (0 ticks this GP) idle=186/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[34587.179131]  (detected by 1, t=60007 jiffies, g=339497, q=984)
[34587.181442] Sending NMI from CPU 1 to CPUs 0:
[34587.183795] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34587.184714] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339497 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34587.190238] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34587.191693] rcu: rcu_sched kthread starved for 45005 jiffies! g339497 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34587.192865] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34587.193890] rcu: RCU grace-period kthread stack dump:
[34587.194912] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34587.195743] Call Trace:
[34587.196495]  <TASK>
[34587.197284]  __schedule+0x23d/0x590
[34587.198053]  schedule+0x4e/0xc0
[34587.198835]  schedule_timeout+0x87/0x140
[34587.199611]  ? __bpf_trace_tick_stop+0x20/0x20
[34587.200387]  rcu_gp_fqs_loop+0xe5/0x330
[34587.201216]  rcu_gp_kthread+0xa7/0x130
[34587.201957]  ? rcu_gp_init+0x5f0/0x5f0
[34587.202696]  kthread+0x12a/0x150
[34587.203418]  ? set_kthread_struct+0x50/0x50
[34587.204178]  ret_from_fork+0x22/0x30
[34587.204894]  </TASK>
[34587.205585] rcu: Stack dump where RCU GP kthread last ran:
[34587.206258] Sending NMI from CPU 1 to CPUs 0:
[34587.206950] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34647.179423] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34647.181691] rcu:     0-...!: (0 ticks this GP) idle=28a/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[34647.183968]  (detected by 1, t=15002 jiffies, g=339501, q=891)
[34647.186380] Sending NMI from CPU 1 to CPUs 0:
[34647.188753] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34647.189597] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339501 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34647.194326] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34647.195349] rcu: rcu_sched kthread starved for 15002 jiffies! g339501 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34647.196404] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34647.197354] rcu: RCU grace-period kthread stack dump:
[34647.198167] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34647.198932] Call Trace:
[34647.199707]  <TASK>
[34647.200530]  __schedule+0x23d/0x590
[34647.201324]  schedule+0x4e/0xc0
[34647.202185]  schedule_timeout+0x87/0x140
[34647.202951]  ? __bpf_trace_tick_stop+0x20/0x20
[34647.203698]  rcu_gp_fqs_loop+0xe5/0x330
[34647.204436]  rcu_gp_kthread+0xa7/0x130
[34647.205160]  ? rcu_gp_init+0x5f0/0x5f0
[34647.205870]  kthread+0x12a/0x150
[34647.206605]  ? set_kthread_struct+0x50/0x50
[34647.207297]  ret_from_fork+0x22/0x30
[34647.207979]  </TASK>
[34647.208661] rcu: Stack dump where RCU GP kthread last ran:
[34647.209306] Sending NMI from CPU 1 to CPUs 0:
[34647.210057] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34827.190031] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34827.190721] rcu:     0-...!: (0 ticks this GP) idle=58e/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[34827.191380]  (detected by 1, t=60007 jiffies, g=339501, q=1046)
[34827.192019] Sending NMI from CPU 1 to CPUs 0:
[34827.192677] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34827.193648] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339501 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34827.195679] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34827.196419] rcu: rcu_sched kthread starved for 45005 jiffies! g339501 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34827.197187] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34827.197951] rcu: RCU grace-period kthread stack dump:
[34827.198722] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34827.199518] Call Trace:
[34827.200302]  <TASK>
[34827.201091]  __schedule+0x23d/0x590
[34827.201878]  schedule+0x4e/0xc0
[34827.202660]  schedule_timeout+0x87/0x140
[34827.203446]  ? __bpf_trace_tick_stop+0x20/0x20
[34827.204230]  rcu_gp_fqs_loop+0xe5/0x330
[34827.205010]  rcu_gp_kthread+0xa7/0x130
[34827.205762]  ? rcu_gp_init+0x5f0/0x5f0
[34827.206499]  kthread+0x12a/0x150
[34827.207219]  ? set_kthread_struct+0x50/0x50
[34827.207951]  ret_from_fork+0x22/0x30
[34827.208683]  </TASK>
[34827.209405] rcu: Stack dump where RCU GP kthread last ran:
[34827.210077] Sending NMI from CPU 1 to CPUs 0:
[34827.210774] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34887.194943] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[34887.197306] rcu:     0-...!: (0 ticks this GP) idle=692/0/0x0
softirq=164676/164676 fqs=0  (false positive?)
[34887.199651]  (detected by 1, t=15002 jiffies, g=339505, q=900)
[34887.201971] Sending NMI from CPU 1 to CPUs 0:
[34887.204327] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[34887.205244] rcu: rcu_sched kthread timer wakeup didn't happen for 15001
jiffies! g339505 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[34887.209853] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[34887.210644] rcu: rcu_sched kthread starved for 15002 jiffies! g339505 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[34887.211468] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[34887.212291] rcu: RCU grace-period kthread stack dump:
[34887.213115] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[34887.214011] Call Trace:
[34887.214802]  <TASK>
[34887.215582]  __schedule+0x23d/0x590
[34887.216368]  schedule+0x4e/0xc0
[34887.217192]  schedule_timeout+0x87/0x140
[34887.217995]  ? __bpf_trace_tick_stop+0x20/0x20
[34887.218794]  rcu_gp_fqs_loop+0xe5/0x330
[34887.219561]  rcu_gp_kthread+0xa7/0x130
[34887.220314]  ? rcu_gp_init+0x5f0/0x5f0
[34887.221053]  kthread+0x12a/0x150
[34887.221803]  ? set_kthread_struct+0x50/0x50
[34887.222526]  ret_from_fork+0x22/0x30
[34887.223238]  </TASK>
[34887.223928] rcu: Stack dump where RCU GP kthread last ran:
[34887.224609] Sending NMI from CPU 1 to CPUs 0:
[34887.225333] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[35067.205569] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[35067.206256] rcu:     0-...!: (0 ticks this GP) idle=992/0/0x0
softirq=164676/164676 fqs=1  (false positive?)
[35067.206934]  (detected by 1, t=60007 jiffies, g=339505, q=1040)
[35067.207619] Sending NMI from CPU 1 to CPUs 0:
[35067.208378] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10
[35067.209305] rcu: rcu_sched kthread timer wakeup didn't happen for 45004
jiffies! g339505 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[35067.211413] rcu:     Possible timer handling issue on cpu=0
timer-softirq=238296
[35067.212162] rcu: rcu_sched kthread starved for 45005 jiffies! g339505 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[35067.212985] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[35067.213769] rcu: RCU grace-period kthread stack dump:
[35067.214574] task:rcu_sched       state:I stack:    0 pid:   13 ppid:     2
flags:0x00004000
[35067.215473] Call Trace:
[35067.216288]  <TASK>
[35067.217104]  __schedule+0x23d/0x590
[35067.217906]  schedule+0x4e/0xc0
[35067.218699]  schedule_timeout+0x87/0x140
[35067.219485]  ? __bpf_trace_tick_stop+0x20/0x20
[35067.220311]  rcu_gp_fqs_loop+0xe5/0x330
[35067.221125]  rcu_gp_kthread+0xa7/0x130
[35067.221896]  ? rcu_gp_init+0x5f0/0x5f0
[35067.222653]  kthread+0x12a/0x150
[35067.223400]  ? set_kthread_struct+0x50/0x50
[35067.224123]  ret_from_fork+0x22/0x30
[35067.224850]  </TASK>
[35067.225564] rcu: Stack dump where RCU GP kthread last ran:
[35067.226235] Sending NMI from CPU 1 to CPUs 0:
[35067.227017] NMI backtrace for cpu 0 skipped: idling at
native_safe_halt+0xb/0x10


* [Bug 216349] Kernel panic in a Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (7 preceding siblings ...)
  2022-08-13  2:55 ` bugzilla-daemon
@ 2022-08-13  2:59 ` bugzilla-daemon
  2022-08-13 13:30 ` bugzilla-daemon
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-13  2:59 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #9 from John Park (jdpark.au@gmail.com) ---
Created attachment 301557
  --> https://bugzilla.kernel.org/attachment.cgi?id=301557&action=edit
kernel panic - netconsole output


* [Bug 216349] Kernel panic in a Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (8 preceding siblings ...)
  2022-08-13  2:59 ` bugzilla-daemon
@ 2022-08-13 13:30 ` bugzilla-daemon
  2022-08-17 14:35 ` bugzilla-daemon
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-13 13:30 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #10 from John Park (jdpark.au@gmail.com) ---
I experienced another panic today; the log is below.

---

[38049.665307] traps: PANIC: double fault, error_code: 0x0
[38049.665352] double fault: 0000 [#1] SMP PTI
[38049.665362] CPU: 1 PID: 3295 Comm: lighttpd Not tainted 5.15.0-46-generic
#49-Ubuntu
[38049.665388] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[38049.665395] RIP: 0010:error_entry+0x0/0x130
[38049.665466] Code: de eb 0a f3 48 0f ae db e9 21 fd ff ff 85 db 0f 85 19 fd
ff ff 0f 01 f8 e9 11 fd ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 <fc> 56 48
8b 74 24 08 48 89 7c 24 08 52 51 50 41 50 41 51 41 52 41
[38049.665471] RSP: 0018:ffffb507830aa35d EFLAGS: 00010002
[38049.665480] RAX: 00007ffe5be0c194 RBX: 00007ffe5be0c1e0 RCX:
00007ffe5be0c194
[38049.665484] RDX: 0000000000000070 RSI: 0000000000000010 RDI:
ffffb507830a3cf8
[38049.665487] RBP: ffffb507830a3cd8 R08: 0000000000000001 R09:
0000000000000000
[38049.665491] R10: 0000000000000001 R11: 0000000000000000 R12:
ffff91c59f1a9880
[38049.665494] R13: ffff91c4d1be2d80 R14: 0000000000080800 R15:
ffff91c49d517700
[38049.665499] FS:  00007f96e38ef680(0000) GS:ffff91c5bbd00000(0000)
knlGS:0000000000000000
[38049.665503] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[38049.665507] CR2: ffffb507830aa348 CR3: 00000000090c0000 CR4:
00000000000006e0
[38049.665518] Call Trace:
[38049.665542] Modules linked in: cp210x usbserial cdc_acm tls veth xt_nat
xt_tcpudp xt_conntrack nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo nft_counter
xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge stp llc overlay
sch_fq_codel joydev input_leds serio_raw qemu_fw_cfg mac_hid dm_multipath
scsi_dh_rdac scsi_dh_emc scsi_dh_alua netconsole ipmi_devintf pstore_blk mtd
ramoops pstore_zone reed_solomon ipmi_msghandler msr efi_pstore ip_tables
x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456
async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq
libcrc32c raid1 raid0 multipath linear bochs drm_vram_helper drm_ttm_helper ttm
drm_kms_helper hid_generic syscopyarea sysfillrect sysimgblt fb_sys_fops usbhid
cec rc_core xhci_pci hid psmouse xhci_pci_renesas drm i2c_piix4 pata_acpi
virtio_net net_failover failover virtio_scsi floppy
[38049.687192] ---[ end trace e501d4c27d1b1728 ]---
[38049.687196] RIP: 0010:error_entry+0x0/0x130
[38049.687203] Code: de eb 0a f3 48 0f ae db e9 21 fd ff ff 85 db 0f 85 19 fd
ff ff 0f 01 f8 e9 11 fd ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 <fc> 56 48
8b 74 24 08 48 89 7c 24 08 52 51 50 41 50 41 51 41 52 41
[38049.687227] RSP: 0018:ffffb507830aa35d EFLAGS: 00010002
[38049.687229] RAX: 00007ffe5be0c194 RBX: 00007ffe5be0c1e0 RCX:
00007ffe5be0c194
[38049.687230] RDX: 0000000000000070 RSI: 0000000000000010 RDI:
ffffb507830a3cf8
[38049.687231] RBP: ffffb507830a3cd8 R08: 0000000000000001 R09:
0000000000000000
[38049.687241] R10: 0000000000000001 R11: 0000000000000000 R12:
ffff91c59f1a9880
[38049.687242] R13: ffff91c4d1be2d80 R14: 0000000000080800 R15:
ffff91c49d517700
[38049.687243] FS:  00007f96e38ef680(0000) GS:ffff91c5bbd00000(0000)
knlGS:0000000000000000
[38049.687244] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[38049.687245] CR2: ffffb507830aa348 CR3: 00000000090c0000 CR4:
00000000000006e0
[38049.687249] Kernel panic - not syncing: Fatal exception in interrupt
[38049.687476] Kernel Offset: 0x34400000 from 0xffffffff81000000 (relocation
range: 0xffffffff80000000-0xffffffffbfffffff)


* [Bug 216349] Kernel panic in a Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (9 preceding siblings ...)
  2022-08-13 13:30 ` bugzilla-daemon
@ 2022-08-17 14:35 ` bugzilla-daemon
  2022-08-17 15:37 ` bugzilla-daemon
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-17 14:35 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

mlevitsk@redhat.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |mlevitsk@redhat.com

--- Comment #11 from mlevitsk@redhat.com ---
Could you try a newer kernel (5.19, for example) and also an older kernel, to
see whether this is a regression?
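
For reference, one way to get a mainline test kernel onto an Ubuntu guest is
the mainline build archive; a minimal sketch, assuming the amd64 packages for
v5.19 (the exact .deb file names below are illustrative and vary per build):

  # download the image and modules packages from the mainline build page
  wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.19/amd64/linux-image-unsigned-5.19.0-051900-generic_5.19.0-051900.202207312230_amd64.deb
  wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.19/amd64/linux-modules-5.19.0-051900-generic_5.19.0-051900.202207312230_amd64.deb
  sudo dpkg -i linux-*.deb   # install both packages
  sudo reboot
  uname -r                   # confirm the guest booted the 5.19 kernel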


* [Bug 216349] Kernel panic in a Ubuntu VM running on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (10 preceding siblings ...)
  2022-08-17 14:35 ` bugzilla-daemon
@ 2022-08-17 15:37 ` bugzilla-daemon
  2022-08-22  5:01 ` [Bug 216349] Kernel panics in VMs running on an Intel N5105 " bugzilla-daemon
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-17 15:37 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #12 from John Park (jdpark.au@gmail.com) ---
(In reply to mlevitsk from comment #11)
> Could you try a newer kernel (5.19, for example) and also an older kernel,
> to see whether this is a regression?

We're moving to the mainline kernel at the moment; we'll test both older and
newer kernels and report back with the results.
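
If one of the older kernels turns out to be good, a rough sketch of narrowing
this down with git bisect on the host kernel (the versions below are
placeholders for whatever the good/bad tests show; each step needs a build, a
boot, and enough uptime to reproduce):

  git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
  cd linux
  git bisect start
  git bisect bad v5.15    # first version known to panic
  git bisect good v5.13   # last version believed to be good
  # build, boot, and run the suggested commit until it panics or survives
  git bisect bad          # or: git bisect good, until bisect names a commit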


* [Bug 216349] Kernel panics in VMs running on an Intel N5105 on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (11 preceding siblings ...)
  2022-08-17 15:37 ` bugzilla-daemon
@ 2022-08-22  5:01 ` bugzilla-daemon
  2022-08-24 12:33 ` bugzilla-daemon
  2022-10-27 23:55 ` bugzilla-daemon
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-22  5:01 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

John Park (jdpark.au@gmail.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
            Summary|Kernel panic in a Ubuntu VM |Kernel panics in VMs
                   |running on Proxmox          |running on an Intel N5105
                   |                            |on Proxmox


* [Bug 216349] Kernel panics in VMs running on an Intel N5105 on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (12 preceding siblings ...)
  2022-08-22  5:01 ` [Bug 216349] Kernel panics in VMs running on an Intel N5105 " bugzilla-daemon
@ 2022-08-24 12:33 ` bugzilla-daemon
  2022-10-27 23:55 ` bugzilla-daemon
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-08-24 12:33 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

--- Comment #13 from The Linux kernel's regression tracker (Thorsten Leemhuis) (regressions@leemhuis.info) ---
If this turns out to be a regression, please let me know.


* [Bug 216349] Kernel panics in VMs running on an Intel N5105 on Proxmox
  2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
                   ` (13 preceding siblings ...)
  2022-08-24 12:33 ` bugzilla-daemon
@ 2022-10-27 23:55 ` bugzilla-daemon
  14 siblings, 0 replies; 16+ messages in thread
From: bugzilla-daemon @ 2022-10-27 23:55 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=216349

Patrick Li (patrick@papaq.org) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |patrick@papaq.org

--- Comment #14 from Patrick Li (patrick@papaq.org) ---
There are a lot of people reporting this issue on the Proxmox forum, and I am
one of them.

I tried the 5.19.16 kernel a few days ago, and it behaved much better than
5.15: the guest kernel panicked after 3.5 days instead of after a few hours,
but it still happens.

There are reports of 5.13 being unaffected, i.e. that the problem started with
5.15, but I can't personally confirm this.
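
For anyone who wants to test the 5.13 theory on their own host, a minimal
sketch, assuming the older pve-kernel packages are still in the Proxmox
repositories (the exact ABI version below is illustrative):

  apt install pve-kernel-5.13.19-6-pve        # install an older host kernel
  proxmox-boot-tool kernel pin 5.13.19-6-pve  # make it the default boot kernel
  reboot
  uname -r                                    # confirm the host is on 5.13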


end of thread, other threads:[~2022-10-27 23:55 UTC | newest]

Thread overview: 16+ messages
2022-08-10 12:13 [Bug 216349] New: Kernel panic in a Ubuntu VM running on Proxmox bugzilla-daemon
2022-08-10 13:44 ` [Bug 216349] " bugzilla-daemon
2022-08-10 14:29 ` bugzilla-daemon
2022-08-10 15:43 ` bugzilla-daemon
2022-08-11  5:50 ` bugzilla-daemon
2022-08-11  5:52 ` bugzilla-daemon
2022-08-11  8:18 ` bugzilla-daemon
2022-08-11  8:29 ` bugzilla-daemon
2022-08-13  2:55 ` bugzilla-daemon
2022-08-13  2:59 ` bugzilla-daemon
2022-08-13 13:30 ` bugzilla-daemon
2022-08-17 14:35 ` bugzilla-daemon
2022-08-17 15:37 ` bugzilla-daemon
2022-08-22  5:01 ` [Bug 216349] Kernel panics in VMs running on an Intel N5105 " bugzilla-daemon
2022-08-24 12:33 ` bugzilla-daemon
2022-10-27 23:55 ` bugzilla-daemon
