Testing all events: OK
Running tests again, along with the function tracer
Running tests on all trace events:
Testing all events:
hrtimer: interrupt took 14340976 ns
BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 15s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 10s!
BUG: workqueue lockup - pool cpus=0 flags=0x4 nice=0 stuck for 10s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
workqueue events_power_efficient: flags=0x82
  pwq 2: cpus=0 flags=0x4 nice=0 active=2/256 refcnt=4
    pending: neigh_periodic_work, do_cache_clean
BUG: workqueue lockup - pool cpus=0 flags=0x4 nice=0 stuck for 10s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
workqueue events_power_efficient: flags=0x82
  pwq 2: cpus=0 flags=0x4 nice=0 active=1/256 refcnt=3
    pending: neigh_periodic_work
BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 10s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 19s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
workqueue events_power_efficient: flags=0x82
  pwq 2: cpus=0 flags=0x4 nice=0 active=2/256 refcnt=4
    pending: check_lifetime, neigh_periodic_work
BUG: workqueue lockup - pool cpus=0 flags=0x5 nice=0 stuck for 14s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
workqueue events_power_efficient: flags=0x82
  pwq 2: cpus=0 flags=0x5 nice=0 active=1/256 refcnt=3
    pending: neigh_periodic_work
pool 2: cpus=0 flags=0x5 nice=0 hung=14s workers=2 manager: 61 idle: 7
BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 11s!
BUG: workqueue lockup - pool cpus=0 flags=0x5 nice=0 stuck for 25s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
workqueue events_power_efficient: flags=0x82
  pwq 2: cpus=0 flags=0x5 nice=0 active=1/256 refcnt=3
    pending: neigh_periodic_work
pool 2: cpus=0 flags=0x5 nice=0 hung=25s workers=2 manager: 61 idle: 7
BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 22s!
BUG: workqueue lockup - pool cpus=0 flags=0x5 nice=0 stuck for 37s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_shepherd
workqueue events_power_efficient: flags=0x82
  pwq 2: cpus=0 flags=0x5 nice=0 active=2/256 refcnt=4
    pending: neigh_periodic_work, do_cache_clean
pool 2: cpus=0 flags=0x5 nice=0 hung=37s workers=2 manager: 61 idle: 7
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
	(detected by 0, t=3752 jiffies, g=2709, q=1)
rcu: All QSes seen, last rcu_preempt kthread activity 620 (4295099794-4295099174), jiffies_till_next_fqs=1, root ->qsmask 0x0
rcu: rcu_preempt kthread starved for 620 jiffies! g2709 f0x2 RCU_GP_CLEANUP(7) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack: 0 pid: 10 ppid: 2 flags:0x00000428
Call trace:
 dump_backtrace+0x0/0x278 arch/arm64/kernel/stacktrace.c:100
 show_stack+0x30/0x80 arch/arm64/kernel/stacktrace.c:196
 sched_show_task+0x1a8/0x240 kernel/sched/core.c:6445
 rcu_check_gp_kthread_starvation+0x170/0x358 kernel/rcu/tree_stall.h:469
 print_other_cpu_stall kernel/rcu/tree_stall.h:544 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:664 [inline]
 rcu_pending kernel/rcu/tree.c:3752 [inline]
 rcu_sched_clock_irq+0x744/0xd18 kernel/rcu/tree.c:2581
 update_process_times+0x68/0x98 kernel/time/timer.c:1709
 tick_sched_handle.isra.16+0x54/0x80 kernel/time/tick-sched.c:176
 tick_sched_timer+0x64/0xd8 kernel/time/tick-sched.c:1328
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x2a4/0x750 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0xf4/0x2a0 kernel/time/hrtimer.c:1645
 timer_handler drivers/clocksource/arm_arch_timer.c:647 [inline]
 arch_timer_handler_virt+0x44/0x70 drivers/clocksource/arm_arch_timer.c:658
 handle_percpu_devid_irq+0xfc/0x4d0 kernel/irq/chip.c:930
 generic_handle_irq_desc include/linux/irqdesc.h:152 [inline]
 generic_handle_irq+0x50/0x70 kernel/irq/irqdesc.c:650
 __handle_domain_irq+0x9c/0x120 kernel/irq/irqdesc.c:687
 handle_domain_irq include/linux/irqdesc.h:170 [inline]
 gic_handle_irq+0xcc/0x108 drivers/irqchip/irq-gic.c:370
 el1_irq+0xbc/0x180 arch/arm64/kernel/entry.S:651
 arch_local_irq_restore+0x4/0x8 arch/arm64/include/asm/irqflags.h:124
 trace_preempt_enable_rcuidle include/trace/events/preemptirq.h:55 [inline]
 trace_preempt_on+0xf4/0x190 kernel/trace/trace_preemptirq.c:123
 preempt_latency_stop kernel/sched/core.c:4197 [inline]
 preempt_schedule_common+0x12c/0x1b0 kernel/sched/core.c:4682
 preempt_schedule.part.88+0x20/0x28 kernel/sched/core.c:4706
 preempt_schedule+0x20/0x28 kernel/sched/core.c:4707
 __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:169 [inline]
 _raw_spin_unlock_irq+0x80/0x90 kernel/locking/spinlock.c:199
 rcu_gp_cleanup kernel/rcu/tree.c:2046 [inline]
 rcu_gp_kthread+0xe5c/0x19a8 kernel/rcu/tree.c:2119
 kthread+0x174/0x188 kernel/kthread.c:292
 ret_from_fork+0x10/0x18 arch/arm64/kernel/entry.S:961
rcu: Stack dump where RCU grace-period kthread last ran:
Task dump for CPU 0:
task:rcu_preempt state:R running task stack: 0 pid: 10 ppid: 2 flags:0x00000428
Call trace:
 dump_backtrace+0x0/0x278 arch/arm64/kernel/stacktrace.c:100
 show_stack+0x30/0x80 arch/arm64/kernel/stacktrace.c:196
 sched_show_task+0x1a8/0x240 kernel/sched/core.c:6445
 dump_cpu_task+0x48/0x58 kernel/sched/core.c:8428
 rcu_check_gp_kthread_starvation+0x214/0x358 kernel/rcu/tree_stall.h:474
 print_other_cpu_stall kernel/rcu/tree_stall.h:544 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:664 [inline]
 rcu_pending kernel/rcu/tree.c:3752 [inline]
 rcu_sched_clock_irq+0x744/0xd18 kernel/rcu/tree.c:2581
 update_process_times+0x68/0x98 kernel/time/timer.c:1709
 tick_sched_handle.isra.16+0x54/0x80 kernel/time/tick-sched.c:176
 tick_sched_timer+0x64/0xd8 kernel/time/tick-sched.c:1328
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x2a4/0x750 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0xf4/0x2a0 kernel/time/hrtimer.c:1645
 timer_handler drivers/clocksource/arm_arch_timer.c:647 [inline]
 arch_timer_handler_virt+0x44/0x70 drivers/clocksource/arm_arch_timer.c:658
 handle_percpu_devid_irq+0xfc/0x4d0 kernel/irq/chip.c:930
 generic_handle_irq_desc include/linux/irqdesc.h:152 [inline]
 generic_handle_irq+0x50/0x70 kernel/irq/irqdesc.c:650
 __handle_domain_irq+0x9c/0x120 kernel/irq/irqdesc.c:687
 handle_domain_irq include/linux/irqdesc.h:170 [inline]
 gic_handle_irq+0xcc/0x108 drivers/irqchip/irq-gic.c:370
 el1_irq+0xbc/0x180 arch/arm64/kernel/entry.S:651
 arch_local_irq_restore+0x4/0x8 arch/arm64/include/asm/irqflags.h:124
 trace_preempt_enable_rcuidle include/trace/events/preemptirq.h:55 [inline]
 trace_preempt_on+0xf4/0x190 kernel/trace/trace_preemptirq.c:123
 preempt_latency_stop kernel/sched/core.c:4197 [inline]
 preempt_schedule_common+0x12c/0x1b0 kernel/sched/core.c:4682
 preempt_schedule.part.88+0x20/0x28 kernel/sched/core.c:4706
 preempt_schedule+0x20/0x28 kernel/sched/core.c:4707
 __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:169 [inline]
 _raw_spin_unlock_irq+0x80/0x90 kernel/locking/spinlock.c:199
 rcu_gp_cleanup kernel/rcu/tree.c:2046 [inline]
 rcu_gp_kthread+0xe5c/0x19a8 kernel/rcu/tree.c:2119
 kthread+0x174/0x188 kernel/kthread.c:292
 ret_from_fork+0x10/0x18 arch/arm64/kernel/entry.S:961
================================
WARNING: inconsistent lock state
5.10.0-rc3-next-20201110-00001-gc07b306d7fa5-dirty #23 Not tainted
--------------------------------
inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
rcu_preempt/10 [HC0[0]:SC0[0]:HE0:SE1] takes:
ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:505 [inline]
ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:664 [inline]
ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3752 [inline]
ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: rcu_sched_clock_irq+0x4a0/0xd18 kernel/rcu/tree.c:2581
{IN-HARDIRQ-W} state was registered at:
  mark_lock kernel/locking/lockdep.c:4293 [inline]
  mark_usage kernel/locking/lockdep.c:4302 [inline]
  __lock_acquire+0x7bc/0x15b8 kernel/locking/lockdep.c:4785
  lock_acquire+0x244/0x498 kernel/locking/lockdep.c:5436
  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
  _raw_spin_lock_irqsave+0x78/0x144 kernel/locking/spinlock.c:159
  print_other_cpu_stall kernel/rcu/tree_stall.h:505 [inline]
  check_cpu_stall kernel/rcu/tree_stall.h:664 [inline]
  rcu_pending kernel/rcu/tree.c:3752 [inline]
  rcu_sched_clock_irq+0x4a0/0xd18 kernel/rcu/tree.c:2581
  update_process_times+0x68/0x98 kernel/time/timer.c:1709
  tick_sched_handle.isra.16+0x54/0x80 kernel/time/tick-sched.c:176
  tick_sched_timer+0x64/0xd8 kernel/time/tick-sched.c:1328
  __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
  __hrtimer_run_queues+0x2a4/0x750 kernel/time/hrtimer.c:1583
  hrtimer_interrupt+0xf4/0x2a0 kernel/time/hrtimer.c:1645
  timer_handler drivers/clocksource/arm_arch_timer.c:647 [inline]
  arch_timer_handler_virt+0x44/0x70 drivers/clocksource/arm_arch_timer.c:658
  handle_percpu_devid_irq+0xfc/0x4d0 kernel/irq/chip.c:930
  generic_handle_irq_desc include/linux/irqdesc.h:152 [inline]
  generic_handle_irq+0x50/0x70 kernel/irq/irqdesc.c:650
  __handle_domain_irq+0x9c/0x120 kernel/irq/irqdesc.c:687
  handle_domain_irq include/linux/irqdesc.h:170 [inline]
  gic_handle_irq+0xcc/0x108 drivers/irqchip/irq-gic.c:370
  el1_irq+0xbc/0x180 arch/arm64/kernel/entry.S:651
  arch_local_irq_restore+0x4/0x8 arch/arm64/include/asm/irqflags.h:124
  trace_preempt_enable_rcuidle include/trace/events/preemptirq.h:55 [inline]
  trace_preempt_on+0xf4/0x190 kernel/trace/trace_preemptirq.c:123
  preempt_latency_stop kernel/sched/core.c:4197 [inline]
  preempt_schedule_common+0x12c/0x1b0 kernel/sched/core.c:4682
  preempt_schedule.part.88+0x20/0x28 kernel/sched/core.c:4706
  preempt_schedule+0x20/0x28 kernel/sched/core.c:4707
  __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:169 [inline]
  _raw_spin_unlock_irq+0x80/0x90 kernel/locking/spinlock.c:199
  rcu_gp_cleanup kernel/rcu/tree.c:2046 [inline]
  rcu_gp_kthread+0xe5c/0x19a8 kernel/rcu/tree.c:2119
  kthread+0x174/0x188 kernel/kthread.c:292
  ret_from_fork+0x10/0x18 arch/arm64/kernel/entry.S:961
irq event stamp: 39750
hardirqs last enabled at (39749): [] rcu_irq_enter_irqson+0x48/0x68 kernel/rcu/tree.c:1078
hardirqs last disabled at (39750): [] el1_irq+0x7c/0x180 arch/arm64/kernel/entry.S:648
softirqs last enabled at (36704): [] __do_softirq+0x650/0x6a4 kernel/softirq.c:325
softirqs last disabled at (36683): [] do_softirq_own_stack include/linux/interrupt.h:568 [inline]
softirqs last disabled at (36683): [] invoke_softirq kernel/softirq.c:393 [inline]
softirqs last disabled at (36683): [] __irq_exit_rcu kernel/softirq.c:423 [inline]
softirqs last disabled at (36683): [] irq_exit+0x1a8/0x1b0 kernel/softirq.c:447

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(rcu_node_0);
  <Interrupt>
    lock(rcu_node_0);

 *** DEADLOCK ***

1 lock held by rcu_preempt/10:
 #0: ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:505 [inline]
 #0: ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:664 [inline]
 #0: ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3752 [inline]
 #0: ffffd787e91d4358 (rcu_node_0){?.-.}-{2:2}, at: rcu_sched_clock_irq+0x4a0/0xd18 kernel/rcu/tree.c:2581

stack backtrace:
CPU: 0 PID: 10 Comm: rcu_preempt Not tainted 5.10.0-rc3-next-20201110-00001-gc07b306d7fa5-dirty #23
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x278 arch/arm64/kernel/stacktrace.c:100
 show_stack+0x30/0x80 arch/arm64/kernel/stacktrace.c:196
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x138/0x1b0 lib/dump_stack.c:118
 print_usage_bug+0x2d8/0x2f8 kernel/locking/lockdep.c:3739
 valid_state kernel/locking/lockdep.c:3750 [inline]
 mark_lock_irq kernel/locking/lockdep.c:3953 [inline]
 mark_lock.part.46+0x370/0x480 kernel/locking/lockdep.c:4410
 mark_lock kernel/locking/lockdep.c:4008 [inline]
 mark_held_locks+0x58/0x90 kernel/locking/lockdep.c:4011
 __trace_hardirqs_on_caller kernel/locking/lockdep.c:4029 [inline]
 lockdep_hardirqs_on_prepare+0xdc/0x298 kernel/locking/lockdep.c:4097
 trace_hardirqs_on+0x90/0x388 kernel/trace/trace_preemptirq.c:49
 el1_irq+0xd8/0x180 arch/arm64/kernel/entry.S:685
 arch_local_irq_restore+0x4/0x8 arch/arm64/include/asm/irqflags.h:124
 trace_preempt_enable_rcuidle include/trace/events/preemptirq.h:55 [inline]
 trace_preempt_on+0xf4/0x190 kernel/trace/trace_preemptirq.c:123
 preempt_latency_stop kernel/sched/core.c:4197 [inline]
 preempt_schedule_common+0x12c/0x1b0 kernel/sched/core.c:4682
 preempt_schedule.part.88+0x20/0x28 kernel/sched/core.c:4706
 preempt_schedule+0x20/0x28 kernel/sched/core.c:4707
 __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:169 [inline]
 _raw_spin_unlock_irq+0x80/0x90 kernel/locking/spinlock.c:199
 rcu_gp_cleanup kernel/rcu/tree.c:2046 [inline]
 rcu_gp_kthread+0xe5c/0x19a8 kernel/rcu/tree.c:2119
 kthread+0x174/0x188 kernel/kthread.c:292
 ret_from_fork+0x10/0x18 arch/arm64/kernel/entry.S:961
BUG: scheduling while atomic: rcu_preempt/10/0x00000002
INFO: lockdep is turned off.
Modules linked in:
Preemption disabled at:
[] preempt_schedule.part.88+0x20/0x28 kernel/sched/core.c:4706
CPU: 0 PID: 10 Comm: rcu_preempt Not tainted 5.10.0-rc3-next-20201110-00001-gc07b306d7fa5-dirty #23
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x278 arch/arm64/kernel/stacktrace.c:100
 show_stack+0x30/0x80 arch/arm64/kernel/stacktrace.c:196
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x138/0x1b0 lib/dump_stack.c:118
 __schedule_bug+0x8c/0xe8 kernel/sched/core.c:4262
 schedule_debug kernel/sched/core.c:4289 [inline]
 __schedule+0x7e8/0x890 kernel/sched/core.c:4417
 preempt_schedule_common+0x44/0x1b0 kernel/sched/core.c:4681
 preempt_schedule.part.88+0x20/0x28 kernel/sched/core.c:4706
 preempt_schedule+0x20/0x28 kernel/sched/core.c:4707
 __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:169 [inline]
 _raw_spin_unlock_irq+0x80/0x90 kernel/locking/spinlock.c:199
 rcu_gp_cleanup kernel/rcu/tree.c:2046 [inline]
 rcu_gp_kthread+0xe5c/0x19a8 kernel/rcu/tree.c:2119
 kthread+0x174/0x188 kernel/kthread.c:292
 ret_from_fork+0x10/0x18 arch/arm64/kernel/entry.S:961