* workqueue: PF_MEMALLOC task 14771(cc1plus) is flushing !WQ_MEM_RECLAIM events:gen6_pm_rps_work
@ 2019-09-26  7:06 Zdenek Sojka
  2019-10-03 16:49 ` Tejun Heo
From: Zdenek Sojka @ 2019-09-26  7:06 UTC (permalink / raw)
  To: linux-kernel; +Cc: tj, jiangshanlai

Hello,

I've hit the following warning in dmesg with a 5.3.1 kernel. It looks similar to https://lkml.org/lkml/2019/8/28/754 , which should already be fixed, as noted in https://lkml.org/lkml/2019/8/28/763 , assuming that patch made it into the 5.3 release.
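
For context on what the warning means: cc1plus took a page fault, entered direct reclaim (so PF_MEMALLOC is set), and the i915 shrinker path ended up flushing gen6_pm_rps_work, which runs on the system "events" workqueue that has no WQ_MEM_RECLAIM rescuer. The check that fires is check_flush_dependency() in kernel/workqueue.c; paraphrased from memory (not a verbatim copy of the 5.3 source), it is roughly:

	/* Warn when a task in memory reclaim waits on a workqueue that
	 * has no rescuer and may itself need memory to make progress. */
	static void check_flush_dependency(struct workqueue_struct *target_wq,
					   struct work_struct *target_work)
	{
		work_func_t target_func = target_work ? target_work->func : NULL;

		/* Flushing a WQ_MEM_RECLAIM queue is always allowed. */
		if (target_wq->flags & WQ_MEM_RECLAIM)
			return;

		WARN_ONCE(current->flags & PF_MEMALLOC,
			  "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%pf",
			  current->pid, current->comm, target_wq->name, target_func);
	}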

[ 2680.302771] ------------[ cut here ]------------
[ 2680.302862] workqueue: PF_MEMALLOC task 14771(cc1plus) is flushing !WQ_MEM_RECLAIM events:gen6_pm_rps_work
[ 2680.302868] WARNING: CPU: 5 PID: 14771 at kernel/workqueue.c:2600 check_flush_dependency+0x104/0x110
[ 2680.302870] Modules linked in:
[ 2680.302873] CPU: 5 PID: 14771 Comm: cc1plus Not tainted 5.3.1-gentoo #2
[ 2680.302875] Hardware name: Dell Inc. Precision Tower 3620/0MWYPT, BIOS 2.4.2 09/29/2017
[ 2680.302878] RIP: 0010:check_flush_dependency+0x104/0x110
[ 2680.302881] Code: 05 00 00 8b b6 88 03 00 00 48 8d 8b 78 01 00 00 49 89 e8 48 c7 c7 f0 bd a0 82 48 89 04 24 c6 05 9a 6d bb 01 01 e8 ce dd fd ff <0f> 0b 48 8b 04 24 e9 62 ff ff ff 90 53 48 8b 1f e8 a7 67 06 00 85
[ 2680.302882] RSP: 0018:ffff888418d53728 EFLAGS: 00010092
[ 2680.302885] RAX: 000000000000005e RBX: ffff888444819600 RCX: 0000000000000000
[ 2680.302886] RDX: ffff888176616140 RSI: 0000000000000000 RDI: ffffffff8123c6a1
[ 2680.302888] RBP: ffffffff81ac7de0 R08: 0000000000000001 R09: 00000000001ead00
[ 2680.302890] R10: 995e1f072b14c310 R11: 000000000000013a R12: ffff888176616140
[ 2680.302892] R13: 0000000000000001 R14: 0000000000000001 R15: ffff8884557eae40
[ 2680.302894] FS:  00005555559f0940(0000) GS:ffff888455800000(0000) knlGS:0000000000000000
[ 2680.302896] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2680.302898] CR2: 0000000061600000 CR3: 000000017da52006 CR4: 00000000003606e0
[ 2680.302899] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 2680.302901] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 2680.302903] Call Trace:
[ 2680.302906]  __flush_work+0xd2/0x420
[ 2680.302909]  ? sched_clock+0x5/0x10
[ 2680.302912]  ? sched_clock+0x5/0x10
[ 2680.302914]  ? sched_clock_cpu+0xc/0xb0
[ 2680.302917]  ? get_work_pool+0x90/0x90
[ 2680.302919]  ? mark_held_locks+0x47/0x70
[ 2680.302922]  ? get_work_pool+0x90/0x90
[ 2680.302924]  __cancel_work_timer+0x107/0x190
[ 2680.302928]  ? synchronize_irq+0x20/0x80
[ 2680.302931]  gen6_disable_rps_interrupts+0x72/0xb0
[ 2680.302933]  gen6_rps_idle+0x15/0xd0
[ 2680.302936]  intel_gt_park+0x55/0x60
[ 2680.302939]  __intel_wakeref_put_last+0xf/0x40
[ 2680.302941]  __engine_park+0x7e/0xd0
[ 2680.302944]  __intel_wakeref_put_last+0xf/0x40
[ 2680.302947]  i915_request_retire+0x231/0x4c4
[ 2680.302949]  i915_retire_requests+0x8c/0x10c
[ 2680.302952]  i915_gem_shrink+0x4aa/0x6f0
[ 2680.302955]  ? mutex_trylock+0x114/0x130
[ 2680.302957]  i915_gem_shrinker_scan+0x127/0x160
[ 2680.302960]  do_shrink_slab+0x131/0x320
[ 2680.302962]  shrink_node+0xf7/0x380
[ 2680.302965]  do_try_to_free_pages+0xbf/0x2c0
[ 2680.302967]  try_to_free_pages+0xe0/0x250
[ 2680.302970]  __perform_reclaim.isra.22+0xb9/0x170
[ 2680.302973]  __alloc_pages_nodemask+0x6a2/0x1270
[ 2680.302975]  ? __alloc_pages_nodemask+0x3d0/0x1270
[ 2680.302978]  ? __lock_acquire+0x246/0x1a60
[ 2680.302980]  ? sched_clock+0x5/0x10
[ 2680.302982]  ? sched_clock_cpu+0xc/0xb0
[ 2680.302985]  ? __lock_release+0x15f/0x2a0
[ 2680.302987]  do_huge_pmd_anonymous_page+0x141/0x5c0
[ 2680.302990]  __handle_mm_fault+0xf28/0x10e0
[ 2680.302993]  __do_page_fault+0x23a/0x4e0
[ 2680.302996]  page_fault+0x39/0x40
[ 2680.302998] RIP: 0033:0x60003a2c
[ 2680.303001] Code: 03 01 00 74 22 4c 89 0f c6 00 0f 48 8b 07 f7 c6 00 02 00 00 74 30 48 8d 50 01 48 89 17 c6 00 38 48 8b 07 4c 8d 48 01 4c 89 0f <40> 88 30 c3 4c 89 0f c6 00 f3 48 8b 07 4c 8d 48 01 e9 50 ff ff ff
[ 2680.303003] RSP: 002b:00007ffe02b5ec18 EFLAGS: 00010246
[ 2680.303005] RAX: 0000000061600000 RBX: 0000000060327120 RCX: 0000000000000005
[ 2680.303007] RDX: 0000000000000000 RSI: 000000000000008b RDI: 00000000603271a8
[ 2680.303009] RBP: 0000000000000005 R08: 0000000000000000 R09: 0000000061600001
[ 2680.303011] R10: 0000000000000000 R11: 0000000060079ac8 R12: 0000000000000003
[ 2680.303012] R13: ffffffffffffffe0 R14: 0000000060327120 R15: 0000555555a7a2f0
[ 2680.303014] irq event stamp: 154652
[ 2680.303018] hardirqs last  enabled at (154651): [<ffffffff811e892d>] __cancel_work_timer+0x7d/0x190
[ 2680.303020] hardirqs last disabled at (154652): [<ffffffff8200ace1>] _raw_spin_lock_irq+0x11/0x70
[ 2680.303023] softirqs last  enabled at (142674): [<ffffffff811cb6fd>] irq_exit+0x9d/0xe0
[ 2680.303026] softirqs last disabled at (142663): [<ffffffff811cb6fd>] irq_exit+0x9d/0xe0
[ 2680.303027] ---[ end trace 6b426c94345d96fc ]---


Please let me know if I can provide more information.

Best regards,
Zdenek Sojka


* Re: workqueue: PF_MEMALLOC task 14771(cc1plus) is flushing !WQ_MEM_RECLAIM events:gen6_pm_rps_work
  2019-09-26  7:06 workqueue: PF_MEMALLOC task 14771(cc1plus) is flushing !WQ_MEM_RECLAIM events:gen6_pm_rps_work Zdenek Sojka
@ 2019-10-03 16:49 ` Tejun Heo
From: Tejun Heo @ 2019-10-03 16:49 UTC (permalink / raw)
  To: Zdenek Sojka; +Cc: linux-kernel, jiangshanlai

Hello,

On Thu, Sep 26, 2019 at 09:06:58AM +0200, Zdenek Sojka wrote:
> I've hit the following warning in dmesg with a 5.3.1 kernel. It looks similar to https://lkml.org/lkml/2019/8/28/754 , which should already be fixed, as noted in https://lkml.org/lkml/2019/8/28/763 , assuming that patch made it into the 5.3 release.

This isn't a wq problem per se.  Can you repost with the drm folks cc'd?
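
For reference, the generic ways out of this class of warning are either to put the work item on its own WQ_MEM_RECLAIM workqueue or to stop flushing it from reclaim context; which one is right here is for the drm folks to decide. A minimal sketch of the former, with illustrative names (the "i915-rps" queue and rps->work are hypothetical):

	#include <linux/workqueue.h>

	static struct workqueue_struct *rps_wq;

	/* A dedicated queue with a rescuer thread; work queued here may
	 * safely be flushed by a PF_MEMALLOC task in direct reclaim. */
	static int __init rps_wq_init(void)
	{
		rps_wq = alloc_workqueue("i915-rps", WQ_MEM_RECLAIM, 0);
		return rps_wq ? 0 : -ENOMEM;
	}

	/* Callers would then use queue_work(rps_wq, &rps->work) instead of
	 * schedule_work(&rps->work), which targets the !WQ_MEM_RECLAIM
	 * system "events" queue. */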

Thanks.

-- 
tejun

