Linux-rt-users archive on lore.kernel.org
* 5.4.13-rt7 stall on CPU?
@ 2020-06-27 13:30 Udo van den Heuvel
From: Udo van den Heuvel @ 2020-06-27 13:30 UTC
  To: RT

Hello,

Found this in /var/log/messages:

Jun 25 16:31:39 vuurmuur pppd[1522583]: local  LL address fe80::ed36:3ac4:4115:e23e
Jun 25 16:31:39 vuurmuur pppd[1522583]: remote LL address fe80::2a8a:1cff:fee0:9484
Jun 26 04:50:24 vuurmuur kernel: 002: rcu: INFO: rcu_preempt self-detected stall on CPU
Jun 26 04:50:24 vuurmuur kernel: 002: rcu:      2-....: (5336 ticks this GP) idle=f6a/1/0x4000000000000002 softirq=347363113/347363115 fqs=2430
Jun 26 04:50:24 vuurmuur kernel: 002:   (t=5250 jiffies g=608224341 q=1297)
Jun 26 04:50:24 vuurmuur kernel: 002: NMI backtrace for cpu 2
Jun 26 04:50:24 vuurmuur kernel: 002: CPU: 2 PID: 3468730 Comm: ntpd Tainted: G           O      5.4.13-rt7 #9
Jun 26 04:50:24 vuurmuur kernel: 002: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./QC5000M-ITX/PH, BIOS P1.10 05/06/2015
Jun 26 04:50:24 vuurmuur kernel: 002: Call Trace:
Jun 26 04:50:24 vuurmuur kernel: 002:  <IRQ>
Jun 26 04:50:24 vuurmuur kernel: 002:  dump_stack+0x50/0x70
Jun 26 04:50:24 vuurmuur kernel: 002:  nmi_cpu_backtrace.cold+0x14/0x53
Jun 26 04:50:24 vuurmuur kernel: 002:  ? lapic_can_unplug_cpu.cold+0x3b/0x3b
Jun 26 04:50:24 vuurmuur kernel: 002:  nmi_trigger_cpumask_backtrace+0x8e/0xa2
Jun 26 04:50:24 vuurmuur kernel: 002:  rcu_dump_cpu_stacks+0x8b/0xb9
Jun 26 04:50:24 vuurmuur kernel: 002:  rcu_sched_clock_irq.cold+0x17e/0x4fd
Jun 26 04:50:24 vuurmuur kernel: 002:  ? account_system_index_time+0xa6/0xd0
Jun 26 04:50:24 vuurmuur kernel: 002:  update_process_times+0x1f/0x50
Jun 26 04:50:24 vuurmuur kernel: 002:  tick_sched_timer+0x5a/0x1c0
Jun 26 04:50:24 vuurmuur kernel: 002:  ? tick_switch_to_oneshot.cold+0x74/0x74
Jun 26 04:50:24 vuurmuur kernel: 002:  __hrtimer_run_queues+0xba/0x1b0
Jun 26 04:50:24 vuurmuur kernel: 002:  hrtimer_interrupt+0x108/0x230
Jun 26 04:50:24 vuurmuur kernel: 002:  smp_apic_timer_interrupt+0x61/0xa0
Jun 26 04:50:24 vuurmuur kernel: 002:  apic_timer_interrupt+0xf/0x20
Jun 26 04:50:24 vuurmuur kernel: 002:  </IRQ>
Jun 26 04:50:24 vuurmuur kernel: 002: RIP: 0010:__fget_light+0x3d/0x60
Jun 26 04:50:24 vuurmuur kernel: 002: Code: ca 75 2e 48 8b 50 50 8b 02 39 c7 73 21 89 f9 48 39 c1 48 19 c0 21 c7 48 8b 42 08 48 8d 04 f8 48 8b 00 48 85 c0 74 07 85 70 7c <75> 02 f3 c3 31 c0 c3 ba 01 00 00 00 e8 22 fe ff ff 48 85 c0 74 ee
Jun 26 04:50:24 vuurmuur kernel: 002: RSP: 0018:ffffc90001ebb930 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Jun 26 04:50:24 vuurmuur kernel: 002: RAX: ffff888059007c00 RBX: 000000007fff0010 RCX: 000000000000001a
Jun 26 04:50:24 vuurmuur kernel: 002: RDX: ffff888136edb458 RSI: 0000000000004000 RDI: 000000000000001a
Jun 26 04:50:24 vuurmuur kernel: 002: RBP: ffffc90001ebbcd0 R08: 00000000000000db R09: ffffffff81639c01
Jun 26 04:50:24 vuurmuur kernel: 002: R10: 0000000000000001 R11: 0000000000000000 R12: 0000000004000000
Jun 26 04:50:24 vuurmuur kernel: 002: R13: 000000000000001a R14: 000000000000001f R15: 0000000000000000
Jun 26 04:50:24 vuurmuur kernel: 002:  ? sock_ioctl+0x381/0x440
Jun 26 04:50:24 vuurmuur kernel: 002:  do_select+0x350/0x7a0
Jun 26 04:50:24 vuurmuur kernel: 002:  ? select_estimate_accuracy+0x100/0x100
Jun 26 04:50:24 vuurmuur kernel: 002:  ? poll_select_finish+0x1d0/0x1d0
Jun 26 04:50:24 vuurmuur kernel: 002:  ? poll_select_finish+0x1d0/0x1d0
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? put_prev_task_rt+0x22/0x140
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? __schedule+0x435/0x4f0
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? put_prev_task_rt+0x22/0x140
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? __schedule+0x435/0x4f0
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? core_sys_select+0x5c/0x380
Jun 26 04:50:24 vuurmuur kernel: 002:  core_sys_select+0x1d0/0x380
Jun 26 04:50:24 vuurmuur kernel: 002:  ? core_sys_select+0x5c/0x380
Jun 26 04:50:24 vuurmuur kernel: 002:  ? tty_ldisc_ref_wait+0x27/0x70
Jun 26 04:50:24 vuurmuur kernel: 002:  ? __ldsem_down_read_nested+0x5e/0x240
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? find_held_lock+0x2b/0x80
Jun 26 04:50:24 vuurmuur kernel: 002:  ? set_user_sigmask+0x62/0x90
Jun 26 04:50:24 vuurmuur kernel: 002:  __x64_sys_pselect6+0x141/0x190
Jun 26 04:50:24 vuurmuur kernel: 002:  ? _raw_spin_unlock_irq+0x1f/0x40
Jun 26 04:50:24 vuurmuur kernel: 002:  ? sigprocmask+0x6d/0x90
Jun 26 04:50:24 vuurmuur kernel: 002:  do_syscall_64+0x77/0x440
Jun 26 04:50:24 vuurmuur kernel: 002:  ? schedule+0x3b/0xb0
Jun 26 04:50:24 vuurmuur kernel: 002:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jun 26 04:50:24 vuurmuur kernel: 002: RIP: 0033:0x7fba5ae5a096
Jun 26 04:50:24 vuurmuur kernel: 002: Code: e8 9f dd f8 ff 4c 8b 0c 24 4c 8b 44 24 08 89 c5 4c 8b 54 24 28 48 8b 54 24 20 b8 0e 01 00 00 48 8b 74 24 18 8b 7c 24 14 0f 05 <48> 3d 00 f0 ff ff 77 28 89 ef 89 04 24 e8 c8 dd f8 ff 8b 04 24 eb
Jun 26 04:50:24 vuurmuur kernel: 002: RSP: 002b:00007ffcff164a10 EFLAGS: 00000293 ORIG_RAX: 000000000000010e
Jun 26 04:50:24 vuurmuur kernel: 002: RAX: ffffffffffffffda RBX: 00005613a62cde78 RCX: 00007fba5ae5a096
Jun 26 04:50:24 vuurmuur kernel: 002: RDX: 0000000000000000 RSI: 00007ffcff164b00 RDI: 000000000000001f
Jun 26 04:50:24 vuurmuur kernel: 002: RBP: 0000000000000000 R08: 0000000000000000 R09: 00007ffcff164a50
Jun 26 04:50:24 vuurmuur kernel: 002: R10: 0000000000000000 R11: 0000000000000293 R12: 00005613a62a9c43
Jun 26 04:50:24 vuurmuur kernel: 002: R13: 0000000000000009 R14: ffffffffffffffff R15: 00005613a62a9e1c
Jun 26 05:03:01 vuurmuur named[1433212]: received control channel command 'flush'
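For anyone decoding the stall report above: the key counters can be pulled out mechanically. A small illustrative Python sketch (the regex is my own, not a kernel tool):

```python
import re

# The rcu_preempt stall line carries three useful counters:
#   "(5336 ticks this GP)"        - scheduler ticks seen during the stalled grace period
#   "softirq=347363113/347363115" - softirq count at GP start / at report time
#   "fqs=2430"                    - force-quiescent-state attempts so far
line = ("rcu:      2-....: (5336 ticks this GP) "
        "idle=f6a/1/0x4000000000000002 softirq=347363113/347363115 fqs=2430")

m = re.search(r"\((\d+) ticks this GP\).*softirq=(\d+)/(\d+).*fqs=(\d+)", line)
ticks, sirq_start, sirq_now, fqs = map(int, m.groups())

print(ticks)                  # 5336 ticks elapsed in this grace period
print(sirq_now - sirq_start)  # only 2 softirqs handled while stalled,
                              # so CPU 2 was making almost no progress
print(fqs)                    # 2430
```

The small softirq delta over thousands of ticks is what marks CPU 2 as the stalled reader rather than an innocent bystander.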


What went wrong?
How serious is this?
How can I avoid it?
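For scale, the `(t=5250 jiffies ...)` line maps to wall-clock time via the kernel's CONFIG_HZ. Assuming HZ=250 (a common desktop setting; this is an assumption, check `CONFIG_HZ` in the actual .config), 5250 jiffies is exactly the default 21-second RCU stall timeout:

```python
# Convert the reported stall duration from jiffies to seconds.
# HZ below is an assumption (CONFIG_HZ=250); verify against your .config.
HZ = 250
stall_jiffies = 5250               # from "(t=5250 jiffies g=608224341 q=1297)"
stall_seconds = stall_jiffies / HZ
print(stall_seconds)               # 21.0 - matches the default
                                   # rcupdate.rcu_cpu_stall_timeout of 21 s
```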

Kind regards,
Udo
