From: Mike Galbraith <efault@gmx.de>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	linux-rt-users <linux-rt-users@vger.kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: ltp or kvm triggerable lockdep alloc_pid() deadlock gripe
Date: Thu, 22 Oct 2020 07:21:13 +0200	[thread overview]
Message-ID: <1236de05e704f0a0b28dc0ad75f9ad4d81b7a057.camel@gmx.de> (raw)
In-Reply-To: <20201021125324.ualpvrxvzyie6d7d@linutronix.de>

[-- Attachment #1: Type: text/plain, Size: 5886 bytes --]

Greetings,

The gripe below is repeatable in two ways here: boot with nomodeset so
nouveau doesn't steal the lockdep show and then fire up one of my
(oink) full distro VMs, or run ./runltp -f cpuset from an ltp
directory with the attached subset of the controllers file placed in
the ./runtest dir.

Method 2 may lead to a real-deal deadlock; I've got a crashdump of
one, with stack traces of the uninterruptible sleepers attached.
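For anyone skimming past the splat below: what lockdep is reporting is an
AB-BA ordering cycle between pidmap_lock and the SLUB seqcount. Here is a
toy model of the lock-order graph lockdep builds (illustration only, not
kernel code; the lock names are taken from the "Possible unsafe locking
scenario" section below):

```python
from collections import defaultdict

class LockOrderGraph:
    """Toy model of lockdep's lock-order graph: an edge A -> B means
    'B was acquired while A was held'. Any cycle in the graph means a
    possible deadlock between the locks on the cycle."""

    def __init__(self):
        self.edges = defaultdict(set)

    def acquire(self, held, new):
        # Record the ordering observed on one code path.
        for h in held:
            self.edges[h].add(new)

    def has_cycle(self):
        # Plain DFS cycle detection over the recorded orderings.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)  # defaults to WHITE

        def dfs(node):
            color[node] = GRAY
            for nxt in self.edges[node]:
                if color[nxt] == GRAY:
                    return True  # back edge: cycle found
                if color[nxt] == WHITE and dfs(nxt):
                    return True
            color[node] = BLACK
            return False

        return any(dfs(n) for n in list(self.edges) if color[n] == WHITE)

g = LockOrderGraph()
# One path (CPU1 in the scenario): pidmap_lock taken under s->seqcount.
g.acquire(held=["s->seqcount"], new="pidmap_lock")
# The other path (CPU0): the slab s->seqcount taken under pidmap_lock,
# via the radix tree node allocation inside alloc_pid().
g.acquire(held=["pidmap_lock"], new="s->seqcount")
print(g.has_cycle())  # True: the circular dependency lockdep warns about
```

Two single-edge orderings are each harmless on their own; it is only the
pair, observed on different paths, that closes the cycle and earns the
WARNING.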

[  154.927302] ======================================================
[  154.927303] WARNING: possible circular locking dependency detected
[  154.927304] 5.9.1-rt18-rt #5 Tainted: G S          E
[  154.927305] ------------------------------------------------------
[  154.927306] cpuset_inherit_/4992 is trying to acquire lock:
[  154.927307] ffff9d334c5e64d8 (&s->seqcount){+.+.}-{0:0}, at: __slab_alloc.isra.87+0xad/0xc0
[  154.927317]
               but task is already holding lock:
[  154.927317] ffffffffac4052d0 (pidmap_lock){+.+.}-{2:2}, at: alloc_pid+0x1fb/0x510
[  154.927324]
               which lock already depends on the new lock.

[  154.927324]
               the existing dependency chain (in reverse order) is:
[  154.927325]
               -> #1 (pidmap_lock){+.+.}-{2:2}:
[  154.927328]        lock_acquire+0x92/0x410
[  154.927331]        rt_spin_lock+0x2b/0xc0
[  154.927335]        free_pid+0x27/0xc0
[  154.927338]        release_task+0x34a/0x640
[  154.927340]        do_exit+0x6e9/0xcf0
[  154.927342]        kthread+0x11c/0x190
[  154.927344]        ret_from_fork+0x1f/0x30
[  154.927347]
               -> #0 (&s->seqcount){+.+.}-{0:0}:
[  154.927350]        validate_chain+0x981/0x1250
[  154.927352]        __lock_acquire+0x86f/0xbd0
[  154.927354]        lock_acquire+0x92/0x410
[  154.927356]        ___slab_alloc+0x71b/0x820
[  154.927358]        __slab_alloc.isra.87+0xad/0xc0
[  154.927359]        kmem_cache_alloc+0x700/0x8c0
[  154.927361]        radix_tree_node_alloc.constprop.22+0xa2/0xf0
[  154.927365]        idr_get_free+0x207/0x2b0
[  154.927367]        idr_alloc_u32+0x54/0xa0
[  154.927369]        idr_alloc_cyclic+0x4f/0xa0
[  154.927370]        alloc_pid+0x22b/0x510
[  154.927372]        copy_process+0xeb5/0x1de0
[  154.927375]        _do_fork+0x52/0x750
[  154.927377]        __do_sys_clone+0x64/0x70
[  154.927379]        do_syscall_64+0x33/0x40
[  154.927382]        entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  154.927384]
               other info that might help us debug this:

[  154.927384]  Possible unsafe locking scenario:

[  154.927385]        CPU0                    CPU1
[  154.927386]        ----                    ----
[  154.927386]   lock(pidmap_lock);
[  154.927388]                                lock(&s->seqcount);
[  154.927389]                                lock(pidmap_lock);
[  154.927391]   lock(&s->seqcount);
[  154.927392]
                *** DEADLOCK ***

[  154.927393] 4 locks held by cpuset_inherit_/4992:
[  154.927394]  #0: ffff9d33decea5b0 ((lock).lock){+.+.}-{2:2}, at: __radix_tree_preload+0x52/0x3b0
[  154.927399]  #1: ffffffffac598fa0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0x5/0xc0
[  154.927405]  #2: ffffffffac4052d0 (pidmap_lock){+.+.}-{2:2}, at: alloc_pid+0x1fb/0x510
[  154.927409]  #3: ffffffffac598fa0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0x5/0xc0
[  154.927414]
               stack backtrace:
[  154.927416] CPU: 3 PID: 4992 Comm: cpuset_inherit_ Kdump: loaded Tainted: G S          E     5.9.1-rt18-rt #5
[  154.927418] Hardware name: MEDION MS-7848/MS-7848, BIOS M7848W08.20C 09/23/2013
[  154.927419] Call Trace:
[  154.927422]  dump_stack+0x77/0x9b
[  154.927425]  check_noncircular+0x148/0x160
[  154.927432]  ? validate_chain+0x981/0x1250
[  154.927435]  validate_chain+0x981/0x1250
[  154.927441]  __lock_acquire+0x86f/0xbd0
[  154.927446]  lock_acquire+0x92/0x410
[  154.927449]  ? __slab_alloc.isra.87+0xad/0xc0
[  154.927452]  ? kmem_cache_alloc+0x648/0x8c0
[  154.927453]  ? lock_acquire+0x92/0x410
[  154.927458]  ___slab_alloc+0x71b/0x820
[  154.927460]  ? __slab_alloc.isra.87+0xad/0xc0
[  154.927463]  ? radix_tree_node_alloc.constprop.22+0xa2/0xf0
[  154.927468]  ? __slab_alloc.isra.87+0x83/0xc0
[  154.927472]  ? radix_tree_node_alloc.constprop.22+0xa2/0xf0
[  154.927474]  ? __slab_alloc.isra.87+0xad/0xc0
[  154.927476]  __slab_alloc.isra.87+0xad/0xc0
[  154.927480]  ? radix_tree_node_alloc.constprop.22+0xa2/0xf0
[  154.927482]  kmem_cache_alloc+0x700/0x8c0
[  154.927487]  radix_tree_node_alloc.constprop.22+0xa2/0xf0
[  154.927491]  idr_get_free+0x207/0x2b0
[  154.927495]  idr_alloc_u32+0x54/0xa0
[  154.927500]  idr_alloc_cyclic+0x4f/0xa0
[  154.927503]  alloc_pid+0x22b/0x510
[  154.927506]  ? copy_thread+0x88/0x200
[  154.927512]  copy_process+0xeb5/0x1de0
[  154.927520]  _do_fork+0x52/0x750
[  154.927523]  ? lock_acquire+0x92/0x410
[  154.927525]  ? __might_fault+0x3e/0x90
[  154.927530]  ? find_held_lock+0x2d/0x90
[  154.927535]  __do_sys_clone+0x64/0x70
[  154.927541]  do_syscall_64+0x33/0x40
[  154.927544]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  154.927546] RIP: 0033:0x7f0b357356e3
[  154.927548] Code: db 45 85 ed 0f 85 ad 01 00 00 64 4c 8b 04 25 10 00 00 00 31 d2 4d 8d 90 d0 02 00 00 31 f6 bf 11 00 20 01 b8 38 00 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 f1 00 00 00 85 c0 41 89 c4 0f 85 fe 00 00
[  154.927550] RSP: 002b:00007ffdfd6d15f0 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
[  154.927552] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0b357356e3
[  154.927554] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011
[  154.927555] RBP: 00007ffdfd6d1620 R08: 00007f0b36052b80 R09: 0000000000000072
[  154.927556] R10: 00007f0b36052e50 R11: 0000000000000246 R12: 0000000000000000
[  154.927557] R13: 0000000000000000 R14: 0000000000000000 R15: 00005614ef57ecf0

[-- Attachment #2: cpuset --]
[-- Type: text/plain, Size: 587 bytes --]

#DESCRIPTION:Resource Management testing

cpuset_base_ops	cpuset_base_ops_testset.sh
cpuset_inherit	cpuset_inherit_testset.sh
cpuset_exclusive	cpuset_exclusive_test.sh
cpuset_hierarchy	cpuset_hierarchy_test.sh
cpuset_syscall	cpuset_syscall_testset.sh
cpuset_sched_domains	cpuset_sched_domains_test.sh
cpuset_load_balance	cpuset_load_balance_test.sh
cpuset_hotplug	cpuset_hotplug_test.sh
cpuset_memory	cpuset_memory_testset.sh
cpuset_memory_pressure	cpuset_memory_pressure_testset.sh
cpuset_memory_spread	cpuset_memory_spread_testset.sh

cpuset_regression_test cpuset_regression_test.sh


[-- Attachment #3: deadlock-log --]
[-- Type: text/plain, Size: 14019 bytes --]

      1      0   2  ffffa09c87fd0000  UN   0.1  221032   9368  systemd
    627      1   0  ffffa09f733c0000  UN   0.0   68392   7004  systemd-udevd
   3322   3247   7  ffffa09f512051c0  UN   0.0  167624   2732  gpg-agent
   3841   3468   2  ffffa09f32250000  UN   0.1   19912   9828  bash
   4209   3468   2  ffffa09dd070d1c0  UN   0.1   19912   9796  bash
   4845   3335   3  ffffa09f2ffe1b40  UN   0.1  268172  24880  file.so
   4846   3335   5  ffffa09f2d418000  UN   0.1  268172  24880  file.so
   5657      1   3  ffffa09f222e3680  UN   1.5 2884604 260248  Thread (pooled)
   6716   5797   3  ffffa09f30da1b40  UN   0.0   14128   4168  cpuset_hotplug_
   6743      1   3  ffffa09f54151b40  UN   0.1  574864  18532  pool-/usr/lib/x
   6744      1   4  ffffa09f357f9b40  UN   0.0  489516   6096  pool-/usr/lib/x
PID: 1      TASK: ffffa09c87fd0000  CPU: 2   COMMAND: "systemd"
 #0 [ffffbfbb00033c50] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb00033cd8] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb00033ce8] __rt_mutex_slowlock+56 at ffffffffa59b3868
 #3 [ffffbfbb00033d30] rt_mutex_slowlock_locked+207 at ffffffffa59b3abf
 #4 [ffffbfbb00033d88] rt_mutex_slowlock.constprop.30+90 at ffffffffa59b3d3a
 #5 [ffffbfbb00033e00] proc_cgroup_show+74 at ffffffffa5184a7a
 #6 [ffffbfbb00033e48] proc_single_show+84 at ffffffffa53cd524
 #7 [ffffbfbb00033e80] seq_read+206 at ffffffffa534c30e
 #8 [ffffbfbb00033ed8] vfs_read+209 at ffffffffa531d281
 #9 [ffffbfbb00033f08] ksys_read+135 at ffffffffa531d637
#10 [ffffbfbb00033f40] do_syscall_64+51 at ffffffffa59a35c3
#11 [ffffbfbb00033f50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f1bb97dd1d8  RSP: 00007ffd5f424ac0  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 000055e3b1224cc0  RCX: 00007f1bb97dd1d8
    RDX: 0000000000000400  RSI: 000055e3b1224cc0  RDI: 0000000000000055
    RBP: 0000000000000400   R8: 0000000000000000   R9: 0000000000000000
    R10: 00007f1bbb1f9940  R11: 0000000000000246  R12: 00007f1bb9aa57a0
    R13: 00007f1bb9aa62e0  R14: 0000000000000000  R15: 000055e3b1377f20
    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b
PID: 627    TASK: ffffa09f733c0000  CPU: 0   COMMAND: "systemd-udevd"
 #0 [ffffbfbb00b6fc38] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb00b6fcc0] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb00b6fcc8] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb00b6fd28] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb00b6fd40] cgroup_can_fork+1321 at ffffffffa5185e69
 #5 [ffffbfbb00b6fd88] copy_process+4457 at ffffffffa508aab9
 #6 [ffffbfbb00b6fe30] _do_fork+82 at ffffffffa508b882
 #7 [ffffbfbb00b6fed0] __do_sys_clone+100 at ffffffffa508c054
 #8 [ffffbfbb00b6ff40] do_syscall_64+51 at ffffffffa59a35c3
 #9 [ffffbfbb00b6ff50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007fa6261286e3  RSP: 00007ffc0a16daf0  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 00007ffc0a16daf0  RCX: 00007fa6261286e3
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000001200011
    RBP: 00007ffc0a16db40   R8: 00007fa6272cfd40   R9: 0000000000000001
    R10: 00007fa6272d0010  R11: 0000000000000246  R12: 0000000000000000
    R13: 0000000000000000  R14: 0000000000000000  R15: 0000557a3d4a9a10
    ORIG_RAX: 0000000000000038  CS: 0033  SS: 002b
PID: 3322   TASK: ffffa09f512051c0  CPU: 7   COMMAND: "gpg-agent"
 #0 [ffffbfbb00ef7c38] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb00ef7cc0] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb00ef7cc8] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb00ef7d28] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb00ef7d40] cgroup_can_fork+1321 at ffffffffa5185e69
 #5 [ffffbfbb00ef7d88] copy_process+4457 at ffffffffa508aab9
 #6 [ffffbfbb00ef7e30] _do_fork+82 at ffffffffa508b882
 #7 [ffffbfbb00ef7ed0] __do_sys_clone+100 at ffffffffa508c054
 #8 [ffffbfbb00ef7f40] do_syscall_64+51 at ffffffffa59a35c3
 #9 [ffffbfbb00ef7f50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f4355ee7fb1  RSP: 00007ffdf3130358  RFLAGS: 00000202
    RAX: ffffffffffffffda  RBX: 00007f4355be7700  RCX: 00007f4355ee7fb1
    RDX: 00007f4355be79d0  RSI: 00007f4355be6fb0  RDI: 00000000003d0f00
    RBP: 00007ffdf3130760   R8: 00007f4355be7700   R9: 00007f4355be7700
    R10: 00007f4355be79d0  R11: 0000000000000202  R12: 00007ffdf31303fe
    R13: 00007ffdf31303ff  R14: 000055667da0f9e0  R15: 00007ffdf3130760
    ORIG_RAX: 0000000000000038  CS: 0033  SS: 002b
PID: 3841   TASK: ffffa09f32250000  CPU: 2   COMMAND: "bash"
 #0 [ffffbfbb02d0fc38] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb02d0fcc0] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb02d0fcc8] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb02d0fd28] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb02d0fd40] cgroup_can_fork+1321 at ffffffffa5185e69
 #5 [ffffbfbb02d0fd88] copy_process+4457 at ffffffffa508aab9
 #6 [ffffbfbb02d0fe30] _do_fork+82 at ffffffffa508b882
 #7 [ffffbfbb02d0fed0] __do_sys_clone+100 at ffffffffa508c054
 #8 [ffffbfbb02d0ff40] do_syscall_64+51 at ffffffffa59a35c3
 #9 [ffffbfbb02d0ff50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f023eaf36e3  RSP: 00007ffe80698a30  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000000000  RCX: 00007f023eaf36e3
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000001200011
    RBP: 00007ffe80698a60   R8: 00007f023f410b80   R9: 0000000000000000
    R10: 00007f023f410e50  R11: 0000000000000246  R12: 0000000000000000
    R13: 0000000000000000  R14: 0000562cd4ee6c90  R15: 0000562cd4ee6c90
    ORIG_RAX: 0000000000000038  CS: 0033  SS: 002b
PID: 4209   TASK: ffffa09dd070d1c0  CPU: 2   COMMAND: "bash"
 #0 [ffffbfbb0376fc38] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb0376fcc0] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb0376fcc8] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb0376fd28] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb0376fd40] cgroup_can_fork+1321 at ffffffffa5185e69
 #5 [ffffbfbb0376fd88] copy_process+4457 at ffffffffa508aab9
 #6 [ffffbfbb0376fe30] _do_fork+82 at ffffffffa508b882
 #7 [ffffbfbb0376fed0] __do_sys_clone+100 at ffffffffa508c054
 #8 [ffffbfbb0376ff40] do_syscall_64+51 at ffffffffa59a35c3
 #9 [ffffbfbb0376ff50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f81a96426e3  RSP: 00007ffd97e5ebc0  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000000000  RCX: 00007f81a96426e3
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000001200011
    RBP: 00007ffd97e5ebf0   R8: 00007f81a9f5fb80   R9: 0000000000000000
    R10: 00007f81a9f5fe50  R11: 0000000000000246  R12: 0000000000000000
    R13: 0000000000000000  R14: 000055a56fcb9c90  R15: 000055a56fcb9c90
    ORIG_RAX: 0000000000000038  CS: 0033  SS: 002b
PID: 4845   TASK: ffffa09f2ffe1b40  CPU: 3   COMMAND: "file.so"
 #0 [ffffbfbb03223d88] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb03223e10] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb03223e18] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb03223e78] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb03223e90] exit_signals+711 at ffffffffa50a2f27
 #5 [ffffbfbb03223ea8] do_exit+216 at ffffffffa5093ef8
 #6 [ffffbfbb03223f10] do_group_exit+71 at ffffffffa5094bb7
 #7 [ffffbfbb03223f38] __x64_sys_exit_group+20 at ffffffffa5094c34
 #8 [ffffbfbb03223f40] do_syscall_64+51 at ffffffffa59a35c3
 #9 [ffffbfbb03223f50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f0ca6b20998  RSP: 00007ffc5e1eef48  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000000000  RCX: 00007f0ca6b20998
    RDX: 0000000000000000  RSI: 000000000000003c  RDI: 0000000000000000
    RBP: 00007f0ca6e0d510   R8: 00000000000000e7   R9: ffffffffffffff60
    R10: 00007f0ca5fd10f8  R11: 0000000000000246  R12: 00007f0ca6e0d510
    R13: 00007f0ca6e0d8c0  R14: 00007ffc5e1eefe0  R15: 0000000000000020
    ORIG_RAX: 00000000000000e7  CS: 0033  SS: 002b
PID: 4846   TASK: ffffa09f2d418000  CPU: 5   COMMAND: "file.so"
 #0 [ffffbfbb0396fd88] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb0396fe10] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb0396fe18] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb0396fe78] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb0396fe90] exit_signals+711 at ffffffffa50a2f27
 #5 [ffffbfbb0396fea8] do_exit+216 at ffffffffa5093ef8
 #6 [ffffbfbb0396ff10] do_group_exit+71 at ffffffffa5094bb7
 #7 [ffffbfbb0396ff38] __x64_sys_exit_group+20 at ffffffffa5094c34
 #8 [ffffbfbb0396ff40] do_syscall_64+51 at ffffffffa59a35c3
 #9 [ffffbfbb0396ff50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f0ca6b20998  RSP: 00007ffc5e1eef48  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000000000  RCX: 00007f0ca6b20998
    RDX: 0000000000000000  RSI: 000000000000003c  RDI: 0000000000000000
    RBP: 00007f0ca6e0d510   R8: 00000000000000e7   R9: ffffffffffffff60
    R10: 00007f0ca5fd10f8  R11: 0000000000000246  R12: 00007f0ca6e0d510
    R13: 00007f0ca6e0d8c0  R14: 00007ffc5e1eefe0  R15: 0000000000000020
    ORIG_RAX: 00000000000000e7  CS: 0033  SS: 002b
PID: 5657   TASK: ffffa09f222e3680  CPU: 3   COMMAND: "Thread (pooled)"
 #0 [ffffbfbb03a07db0] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb03a07e38] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb03a07e40] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb03a07ea0] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb03a07eb8] exit_signals+711 at ffffffffa50a2f27
 #5 [ffffbfbb03a07ed0] do_exit+216 at ffffffffa5093ef8
 #6 [ffffbfbb03a07f38] __x64_sys_exit+23 at ffffffffa5094b67
 #7 [ffffbfbb03a07f40] do_syscall_64+51 at ffffffffa59a35c3
 #8 [ffffbfbb03a07f50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f1f2f9265b6  RSP: 00007f1e95057d50  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 00007f1e95058700  RCX: 00007f1f2f9265b6
    RDX: 000000000000003c  RSI: 00007f1f2fb38010  RDI: 0000000000000000
    RBP: 0000000000000000   R8: 00007f1e880029c0   R9: 0000000000000000
    R10: 0000000000000020  R11: 0000000000000246  R12: 00007ffd7582f27e
    R13: 00007ffd7582f27f  R14: 000055fce0818180  R15: 00007ffd7582f350
    ORIG_RAX: 000000000000003c  CS: 0033  SS: 002b
PID: 6716   TASK: ffffa09f30da1b40  CPU: 3   COMMAND: "cpuset_hotplug_"
 #0 [ffffbfbb03ac79d8] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb03ac7a60] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb03ac7a70] schedule_timeout+495 at ffffffffa59b4cbf
 #3 [ffffbfbb03ac7b08] wait_for_completion+165 at ffffffffa59b2f25
 #4 [ffffbfbb03ac7b48] affine_move_task+705 at ffffffffa50cc231
 #5 [ffffbfbb03ac7c88] __set_cpus_allowed_ptr+274 at ffffffffa50cc562
 #6 [ffffbfbb03ac7cc8] cpuset_attach+195 at ffffffffa518df73
 #7 [ffffbfbb03ac7d00] cgroup_migrate_execute+1133 at ffffffffa518075d
 #8 [ffffbfbb03ac7d58] cgroup_attach_task+524 at ffffffffa5180abc
 #9 [ffffbfbb03ac7e28] __cgroup1_procs_write.constprop.21+243 at ffffffffa5187843
#10 [ffffbfbb03ac7e68] cgroup_file_write+126 at ffffffffa517b07e
#11 [ffffbfbb03ac7ea0] kernfs_fop_write+275 at ffffffffa53e1c13
#12 [ffffbfbb03ac7ed8] vfs_write+240 at ffffffffa531d470
#13 [ffffbfbb03ac7f08] ksys_write+135 at ffffffffa531d737
#14 [ffffbfbb03ac7f40] do_syscall_64+51 at ffffffffa59a35c3
#15 [ffffbfbb03ac7f50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f0176b84244  RSP: 00007ffffcf7e2b8  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000000005  RCX: 00007f0176b84244
    RDX: 0000000000000005  RSI: 000055b15203d890  RDI: 0000000000000001
    RBP: 000055b15203d890   R8: 000000000000000a   R9: 0000000000000000
    R10: 000000000000000a  R11: 0000000000000246  R12: 0000000000000005
    R13: 0000000000000001  R14: 00007f0176e4c5a0  R15: 0000000000000005
    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
PID: 6743   TASK: ffffa09f54151b40  CPU: 3   COMMAND: "pool-/usr/lib/x"
 #0 [ffffbfbb08657db0] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb08657e38] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb08657e40] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb08657ea0] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb08657eb8] exit_signals+711 at ffffffffa50a2f27
 #5 [ffffbfbb08657ed0] do_exit+216 at ffffffffa5093ef8
 #6 [ffffbfbb08657f38] __x64_sys_exit+23 at ffffffffa5094b67
 #7 [ffffbfbb08657f40] do_syscall_64+51 at ffffffffa59a35c3
 #8 [ffffbfbb08657f50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007f0176d6b5b6  RSP: 00007f015d540dd0  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 00007f015d541700  RCX: 00007f0176d6b5b6
    RDX: 000000000000003c  RSI: 00007f0176f7d010  RDI: 0000000000000000
    RBP: 0000000000000000   R8: 00007f01500008c0   R9: 0000000000000004
    R10: 000055ee65f185d0  R11: 0000000000000246  R12: 00007ffc71eef56e
    R13: 00007ffc71eef56f  R14: 00007f01640038f0  R15: 00007ffc71eef600
    ORIG_RAX: 000000000000003c  CS: 0033  SS: 002b
PID: 6744   TASK: ffffa09f357f9b40  CPU: 4   COMMAND: "pool-/usr/lib/x"
 #0 [ffffbfbb0865fdb0] __schedule+837 at ffffffffa59b15f5
 #1 [ffffbfbb0865fe38] schedule+86 at ffffffffa59b1d96
 #2 [ffffbfbb0865fe40] percpu_rwsem_wait+181 at ffffffffa5101b75
 #3 [ffffbfbb0865fea0] __percpu_down_read+114 at ffffffffa5101ec2
 #4 [ffffbfbb0865feb8] exit_signals+711 at ffffffffa50a2f27
 #5 [ffffbfbb0865fed0] do_exit+216 at ffffffffa5093ef8
 #6 [ffffbfbb0865ff38] __x64_sys_exit+23 at ffffffffa5094b67
 #7 [ffffbfbb0865ff40] do_syscall_64+51 at ffffffffa59a35c3
 #8 [ffffbfbb0865ff50] entry_SYSCALL_64_after_hwframe+68 at ffffffffa5a0008c
    RIP: 00007ff49b0c85b6  RSP: 00007ff493ffedd0  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 00007ff493fff700  RCX: 00007ff49b0c85b6
    RDX: 000000000000003c  RSI: 00007ff49b2da010  RDI: 0000000000000000
    RBP: 0000000000000000   R8: 00007ff488001600   R9: 0000000000000000
    R10: 0000000000000050  R11: 0000000000000246  R12: 00007ffdc88c817e
    R13: 00007ffdc88c817f  R14: 00007ff48c0069e0  R15: 00007ffdc88c8210
    ORIG_RAX: 000000000000003c  CS: 0033  SS: 002b

Thread overview:
2020-10-21 12:53 [ANNOUNCE] v5.9.1-rt18 Sebastian Andrzej Siewior
2020-10-21 13:14 ` Sebastian Andrzej Siewior
2020-10-27  6:53   ` Fernando Lopez-Lezcano
2020-10-27  8:22     ` Sebastian Andrzej Siewior
2020-10-27 17:07       ` Fernando Lopez-Lezcano
2020-10-28 20:24         ` Sebastian Andrzej Siewior
2020-10-22  5:21 ` Mike Galbraith [this message]
2020-10-22 16:44   ` ltp or kvm triggerable lockdep alloc_pid() deadlock gripe Sebastian Andrzej Siewior
2020-10-22  5:28 ` kvm+nouveau induced lockdep gripe Mike Galbraith
2020-10-23  9:01   ` Sebastian Andrzej Siewior
2020-10-23 12:07     ` Mike Galbraith
     [not found]     ` <e4bf2fe3c5d2fdeded9b3d873a08094dbf145bf9.camel-Mmb7MZpHnFY@public.gmane.org>
2020-10-24  2:22       ` Hillf Danton
2020-10-24  3:38         ` Mike Galbraith
2020-10-24  3:38           ` Mike Galbraith
     [not found]         ` <20201024050000.8104-1-hdanton@sina.com>
2020-10-24  5:25           ` Mike Galbraith
     [not found]           ` <20201024094224.2804-1-hdanton@sina.com>
2020-10-26 17:26             ` Sebastian Andrzej Siewior
2020-10-26 17:31           ` Sebastian Andrzej Siewior
2020-10-26 19:15             ` Mike Galbraith
2020-10-26 19:53               ` Sebastian Andrzej Siewior
2020-10-27  6:03                 ` Mike Galbraith
2020-10-27  9:00                   ` Sebastian Andrzej Siewior
2020-10-27  9:49                     ` Mike Galbraith
2020-10-27 10:14                     ` Mike Galbraith
2020-10-27 10:18                       ` Sebastian Andrzej Siewior
2020-10-27 11:13                         ` Mike Galbraith
2021-09-17 13:17   ` [tip: irq/core] genirq: Move prio assignment into the newly created thread tip-bot2 for Thomas Gleixner
2021-09-17 13:17   ` [tip: sched/core] kthread: Move prio/affinite change " tip-bot2 for Sebastian Andrzej Siewior
2021-09-17 18:36   ` tip-bot2 for Sebastian Andrzej Siewior
2021-10-05 14:12   ` tip-bot2 for Sebastian Andrzej Siewior
