* ww_mutex.sh hangs since v5.16-rc1
@ 2021-12-01  1:26 ` Li Zhijian
  0 siblings, 0 replies; 7+ messages in thread
From: Li Zhijian @ 2021-12-01  1:26 UTC (permalink / raw)
  To: peterz, mingo, will, longman, boqun.feng; +Cc: open list, linux-kselftest, lkp

Hi Folks


LKP/0Day found that ww_mutex.sh cannot complete since v5.16-rc1. I'm sorry to
say we failed to bisect to the first bad commit; the bisection ultimately
pointed to a merge commit (91e1c99e17) instead.

Because of this hang, other tests in the same group are also blocked in 0Day,
so we hope this can be fixed ASAP.

So if you have any ideas about this, or need more debug information, feel free to let me know :)

BTW, ww_mutex.sh also failed in v5.15, but without hanging, and the hang does not seem to reproduce on a VM.

Our box:
root@lkp-knm01 ~# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       46 bits physical, 48 bits virtual
CPU(s):              288
On-line CPU(s) list: 0-287
Thread(s) per core:  4
Core(s) per socket:  72
Socket(s):           1
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               133
Model name:          Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz
Stepping:            0
CPU MHz:             1385.255
CPU max MHz:         1600.0000
CPU min MHz:         1000.0000
BogoMIPS:            2992.76
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
NUMA node0 CPU(s):   0-287
NUMA node1 CPU(s):
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch ring3mwait cpuid_fault epb pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms avx512f rdseed adx avx512pf avx512er avx512cd xsaveopt dtherm ida arat pln pts avx512_vpopcntdq avx512_4vnniw avx512_4fmaps

Below is the call stack from v5.16-rc2:

[ 1000.374954][ T2713] make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-136057256686de39cc3a07c2e39ef6bc43003ff6/tools/testing/selftests/locking'
[ 1000.375030][ T2713]
[ 1000.428791][ T2713] 2021-11-22 22:21:27 make run_tests -C locking
[ 1000.428864][ T2713]
[ 1000.491043][ T2713] make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-136057256686de39cc3a07c2e39ef6bc43003ff6/tools/testing/selftests/locking'
[ 1000.491121][ T2713]
[ 1000.540807][ T2713] TAP version 13
[ 1000.540882][ T2713]
[ 1000.576050][ T2713] 1..1
[ 1000.576282][ T2713]
[ 1000.612980][ T2713] # selftests: locking: ww_mutex.sh
[ 1000.613288][ T2713]
[ 1495.201324][ T1577] INFO: task kworker/u576:16:1470 blocked for more than 491 seconds.
[ 1495.220059][ T1577]       Tainted: G    B             5.16.0-rc2 #1
[ 1495.240902][ T1577] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1495.265617][ T1577] task:kworker/u576:16 state:D stack:    0 pid: 1470 ppid:     2 flags:0x00004000
[ 1495.289054][ T1577] Workqueue: test-ww_mutex test_cycle_work [test_ww_mutex]
[ 1495.310936][ T1577] Call Trace:
[ 1495.327809][ T1577] <TASK>
[ 1495.344735][ T1577] __schedule+0xdb0/0x25c0
[ 1495.362764][ T1577]  ? io_schedule_timeout+0x180/0x180
[ 1495.382013][ T1577]  ? lock_downgrade+0x680/0x680
[ 1495.400894][ T1577]  ? do_raw_spin_lock+0x125/0x2c0
[ 1495.418866][ T1577] schedule+0xe4/0x280
[ 1495.435597][ T1577] schedule_preempt_disabled+0x18/0x40
[ 1495.454588][ T1577] __ww_mutex_lock+0x1248/0x34c0
[ 1495.476189][ T1577]  ? test_cycle_work+0x1bb/0x500 [test_ww_mutex]
[ 1495.497763][ T1577]  ? mutex_lock_interruptible_nested+0x40/0x40
[ 1495.518959][ T1577]  ? lock_downgrade+0x680/0x680
[ 1495.536861][ T1577]  ? wait_for_completion_interruptible+0x340/0x340
[ 1495.556253][ T1577]  ? ww_mutex_lock+0x3e/0x380
[ 1495.574003][ T1577] ww_mutex_lock+0x3e/0x380
[ 1495.591958][ T1577]  test_cycle_work+0x1bb/0x500 [test_ww_mutex]
[ 1495.612260][ T1577]  ? stress_reorder_work+0xa00/0xa00 [test_ww_mutex]
[ 1495.632857][ T1577]  ? 0xffffffff81000000
[ 1495.649027][ T1577]  ? rcu_read_lock_sched_held+0x5f/0x100
[ 1495.668211][ T1577]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 1495.687010][ T1577] process_one_work+0x817/0x13c0
[ 1495.704991][ T1577]  ? rcu_read_unlock+0x40/0x40
[ 1495.723024][ T1577]  ? pwq_dec_nr_in_flight+0x280/0x280
[ 1495.740211][ T1577]  ? rwlock_bug+0xc0/0xc0
[ 1495.758038][ T1577] worker_thread+0x8b/0xd80
[ 1495.775008][ T1577]  ? process_one_work+0x13c0/0x13c0
[ 1495.793017][ T1577] kthread+0x3b9/0x4c0
[ 1495.810782][ T1577]  ? set_kthread_struct+0x100/0x100
[ 1495.829988][ T1577] ret_from_fork+0x22/0x30
[ 1495.845811][ T1577] </TASK>
[ 1495.859087][ T1577] INFO: task kworker/u576:36:1490 blocked for more than 492 seconds.
[ 1495.879048][ T1577]       Tainted: G    B             5.16.0-rc2 #1
[ 1495.897879][ T1577] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1495.919582][ T1577] task:kworker/u576:36 state:D stack:    0 pid: 1490 ppid:     2 flags:0x00004000
[ 1495.941865][ T1577] Workqueue: test-ww_mutex test_cycle_work [test_ww_mutex]
[ 1495.959889][ T1577] Call Trace:
[ 1495.974816][ T1577] <TASK>
[ 1495.988759][ T1577] __schedule+0xdb0/0x25c0
[ 1495.988759][ T1577] __schedule+0xdb0/0x25c0
[ 1496.003849][ T1577]  ? io_schedule_timeout+0x180/0x180
[ 1496.020839][ T1577]  ? lock_downgrade+0x680/0x680
[ 1496.036854][ T1577]  ? do_raw_spin_lock+0x125/0x2c0
[ 1496.051976][ T1577] schedule+0xe4/0x280
[ 1496.067780][ T1577] schedule_preempt_disabled+0x18/0x40
[ 1496.085004][ T1577] __ww_mutex_lock+0x1248/0x34c0
[ 1496.101895][ T1577]  ? test_cycle_work+0x1bb/0x500 [test_ww_mutex]
[ 1496.119889][ T1577]  ? mutex_lock_interruptible_nested+0x40/0x40
[ 1496.137873][ T1577]  ? lock_downgrade+0x680/0x680
[ 1496.152657][ T1577]  ? wait_for_completion_interruptible+0x340/0x340
[ 1496.168773][ T1577]  ? ww_mutex_lock+0x3e/0x380
[ 1496.184862][ T1577] ww_mutex_lock+0x3e/0x380
[ 1496.199979][ T1577]  test_cycle_work+0x1bb/0x500 [test_ww_mutex]
[ 1496.216277][ T1577]  ? stress_reorder_work+0xa00/0xa00 [test_ww_mutex]
[ 1496.234904][ T1577]  ? 0xffffffff81000000
[ 1496.249856][ T1577]  ? rcu_read_lock_sched_held+0x5f/0x100
[ 1496.265951][ T1577]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 1496.282815][ T1577] process_one_work+0x817/0x13c0
[ 1496.299791][ T1577]  ? rcu_read_unlock+0x40/0x40
[ 1496.314754][ T1577]  ? pwq_dec_nr_in_flight+0x280/0x280
[ 1496.331779][ T1577]  ? rwlock_bug+0xc0/0xc0
[ 1496.348007][ T1577] worker_thread+0x8b/0xd80
[ 1496.362905][ T1577]  ? process_one_work+0x13c0/0x13c0
[ 1496.378975][ T1577] kthread+0x3b9/0x4c0
[ 1496.393866][ T1577]  ? set_kthread_struct+0x100/0x100
[ 1496.408827][ T1577] ret_from_fork+0x22/0x30
[ 1496.423901][ T1577] </TASK>
[ 1496.437994][ T1577] INFO: task kworker/u576:0:15113 blocked for more than 492 seconds.
[ 1496.455862][ T1577]       Tainted: G    B             5.16.0-rc2 #1
[ 1496.473759][ T1577] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1496.494808][ T1577] task:kworker/u576:0  state:D stack:    0 pid:15113 ppid:     2 flags:0x00004000
[ 1496.517000][ T1577] Workqueue: test-ww_mutex test_cycle_work [test_ww_mutex]
[ 1496.537035][ T1577] Call Trace:
[ 1496.551187][ T1577] <TASK>
[ 1496.566405][ T1577] __schedule+0xdb0/0x25c0
[ 1496.582012][ T1577]  ? io_schedule_timeout+0x180/0x180
[ 1496.598049][ T1577]  ? lock_downgrade+0x680/0x680
[ 1496.615360][ T1577]  ? do_raw_spin_lock+0x125/0x2c0
[ 1496.631835][ T1577] schedule+0xe4/0x280
[ 1496.645972][ T1577] schedule_preempt_disabled+0x18/0x40
[ 1496.663774][ T1577] __ww_mutex_lock+0x1248/0x34c0
[ 1496.681795][ T1577]  ? test_cycle_work+0x1bb/0x500 [test_ww_mutex]
[ 1496.698731][ T1577]  ? mutex_lock_interruptible_nested+0x40/0x40
[ 1496.714996][ T1577]  ? lock_downgrade+0x680/0x680
[ 1496.730888][ T1577]  ? wait_for_completion_interruptible+0x340/0x340
[ 1496.747926][ T1577]  ? ww_mutex_lock+0x3e/0x380
[ 1496.762482][ T1577] ww_mutex_lock+0x3e/0x380
[ 1496.778844][ T1577]  test_cycle_work+0x1bb/0x500 [test_ww_mutex]


We also found that it hangs only occasionally on v5.16-rc3 (1 of 3 runs); below is the dmesg from a good run.
[  962.136756][ T2950] make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-d58071a8a76d779eedab38033ae4c821c30295a5/tools/testing/selftests/locking'
[  962.136831][ T2950]-
[  962.205036][ T2950] TAP version 13
[  962.206003][ T2950]-
[  962.298458][ T2950] 1..1
[  962.299657][ T2950]-
[  962.345588][ T2950] # selftests: locking: ww_mutex.sh
[  962.345657][ T2950]-
[  973.641869][T25509] All ww mutex selftests passed
[  973.773996][ T2950] # locking/ww_mutex: ok
[  973.774068][ T2950]-
[  973.774236][ T2960] # locking/ww_mutex: ok
[  973.802355][ T2960]-
[  973.829966][ T2950] ok 1 selftests: locking: ww_mutex.sh
[  973.834748][ T2950]-
[  973.838302][ T2960] ok 1 selftests: locking: ww_mutex.sh
[  973.899815][ T2960]-
[  973.921431][ T2950] make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-d58071a8a76d779eedab38033ae4c821c30295a5/tools/testing/selftests/locking'
[  973.932312][ T2950]-
[  973.957345][ T2960] make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-d58071a8a76d779eedab38033ae4c821c30295a5/tools/testing/selftests/locking'



Thanks
Zhijian@0Day




* Re: ww_mutex.sh hangs since v5.16-rc1
  2021-12-01  1:26 ` Li Zhijian
@ 2021-12-02 22:57   ` Peter Zijlstra
  -1 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2021-12-02 22:57 UTC (permalink / raw)
  To: Li Zhijian
  Cc: mingo, will, longman, boqun.feng, open list, linux-kselftest, lkp

On Wed, Dec 01, 2021 at 09:26:18AM +0800, Li Zhijian wrote:
> Hi Folks
> 
> 
> LKP/0Day found that ww_mutex.sh cannot complete since v5.16-rc1. I'm sorry to
> say we failed to bisect to the first bad commit; the bisection ultimately
> pointed to a merge commit (91e1c99e17) instead.
> 
> Because of this hang, other tests in the same group are also blocked in 0Day,
> so we hope this can be fixed ASAP.
> 
> So if you have any ideas about this, or need more debug information, feel free to let me know :)
> 
> BTW, ww_mutex.sh also failed in v5.15, but without hanging, and the hang does not seem to reproduce on a VM.

On real hardware:

root@ivb-ep:/usr/src/linux-2.6/tools/testing/selftests/locking# uname -a
Linux ivb-ep 5.16.0-rc3-00284-g68601c558556 #1 SMP PREEMPT Thu Dec 2 23:03:29 CET 2021 x86_64 GNU/Linux
root@ivb-ep:/usr/src/linux-2.6/tools/testing/selftests/locking# ./ww_mutex.sh
locking/ww_mutex: ok

[ 1907.907801] Beginning ww mutex selftests
[ 1915.700077] All ww mutex selftests passed


What else do I try?

* Re: ww_mutex.sh hangs since v5.16-rc1
  2021-12-02 22:57   ` Peter Zijlstra
@ 2021-12-07  4:27     ` Li Zhijian
  -1 siblings, 0 replies; 7+ messages in thread
From: Li Zhijian @ 2021-12-07  4:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, will, longman, boqun.feng, open list, linux-kselftest, lkp



On 03/12/2021 06:57, Peter Zijlstra wrote:
> On Wed, Dec 01, 2021 at 09:26:18AM +0800, Li Zhijian wrote:
>> Hi Folks
>>
>>
>> LKP/0Day found that ww_mutex.sh cannot complete since v5.16-rc1. I'm sorry to
>> say we failed to bisect to the first bad commit; the bisection ultimately
>> pointed to a merge commit (91e1c99e17) instead.
>>
>> Because of this hang, other tests in the same group are also blocked in 0Day,
>> so we hope this can be fixed ASAP.
>>
>> So if you have any ideas about this, or need more debug information, feel free to let me know :)
>>
>> BTW, ww_mutex.sh also failed in v5.15, but without hanging, and the hang does not seem to reproduce on a VM.
> On real hardware:
>
> root@ivb-ep:/usr/src/linux-2.6/tools/testing/selftests/locking# uname -a
> Linux ivb-ep 5.16.0-rc3-00284-g68601c558556 #1 SMP PREEMPT Thu Dec 2 23:03:29 CET 2021 x86_64 GNU/Linux
> root@ivb-ep:/usr/src/linux-2.6/tools/testing/selftests/locking# ./ww_mutex.sh
> locking/ww_mutex: ok
>
> [ 1907.907801] Beginning ww mutex selftests
> [ 1915.700077] All ww mutex selftests passed
>
>
> What else do I try?
I have no idea about that; I'm not sure whether the information below is helpful.

root@lkp-knm01 /opt/rootfs# pgrep test-ww_mutex | wc -l
256
root@lkp-knm01 /opt/rootfs# pgrep test-ww_mutex -l | head -n 3
1492 kworker/u576:38+test-ww_mutex
1497 kworker/u576:43+test-ww_mutex
1513 kworker/u576:59+test-ww_mutex

root@lkp-knm01 /opt/rootfs# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       46 bits physical, 48 bits virtual
CPU(s):              288
On-line CPU(s) list: 0-287
Thread(s) per core:  4
Core(s) per socket:  72
Socket(s):           1
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               133
Model name:          Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz
Stepping:            0
CPU MHz:             1600.000
CPU max MHz:         1600.0000
CPU min MHz:         1000.0000
BogoMIPS:            2992.92
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
NUMA node0 CPU(s):   0-287
NUMA node1 CPU(s):
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch ring3mwait cpuid_fault epb pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms avx512f rdseed adx avx512pf avx512er avx512cd xsaveopt dtherm ida arat pln pts avx512_vpopcntdq avx512_4vnniw avx512_4fmaps
root@lkp-knm01 /opt/rootfs#


[ 1500.848024] INFO: task kworker/u576:5:12395 blocked for more than 496 seconds.
[ 1500.866920]       Not tainted 5.16.0-rc4 #1
[ 1500.882898] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1500.903997] task:kworker/u576:5  state:D stack:    0 pid:12395 ppid:     2 flags:0x00004000
[ 1500.926047] Workqueue: test-ww_mutex test_cycle_work [test_ww_mutex]
[ 1500.945916] Call Trace:
[ 1500.959042] <TASK>
[ 1500.972977] __schedule+0xdb0/0x25c0
[ 1500.987868]  ? io_schedule_timeout+0x180/0x180
[ 1501.004971]  ? lock_downgrade+0x680/0x680
[ 1501.021975] schedule+0xe4/0x280
[ 1501.036896] schedule_preempt_disabled+0x18/0x40
[ 1501.053967] __ww_mutex_lock.constprop.0+0x1248/0x34c0
[ 1501.072020]  ? test_cycle_work+0x1bb/0x500 [test_ww_mutex]
[ 1501.088903]  ? mutex_lock_interruptible_nested+0x40/0x40
[ 1501.106256]  ? lock_is_held_type+0xd7/0x140
[ 1501.123083]  ? ww_mutex_lock+0x3e/0x380
[ 1501.138803] ww_mutex_lock+0x3e/0x380
[ 1501.154637]  test_cycle_work+0x1bb/0x500 [test_ww_mutex]
[ 1501.172498]  ? stress_reorder_work+0xa00/0xa00 [test_ww_mutex]
[ 1501.189955]  ? rcu_read_lock_sched_held+0xa1/0x100
[ 1501.205827]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 1501.221996] process_one_work+0x817/0x13c0
[ 1501.236500]  ? rcu_read_unlock+0x40/0x40
[ 1501.252896]  ? pwq_dec_nr_in_flight+0x280/0x280
[ 1501.269914]  ? rwlock_bug.part.0+0xc0/0xc0
[ 1501.285008] worker_thread+0x8b/0xd80
[ 1501.301083]  ? process_one_work+0x13c0/0x13c0
[ 1501.318071] kthread+0x3b9/0x4c0
[ 1501.331953]  ? set_kthread_struct+0x100/0x100
[ 1501.348374] ret_from_fork+0x22/0x30
[ 1501.363834] </TASK>
[ 1501.377565]
[ 1501.377565]
                Showing all locks held in the system:
[ 1501.407817] 4 locks held by kworker/u576:38/1492:
[ 1501.424053]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1501.447970]  #1: ffffc9000db6fdd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1501.472031]  #2: ffffc9000db6fc98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1501.495016]  #3: ffff8882688935c0 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1501.519059] 4 locks held by kworker/u576:43/1497:
[ 1501.535821]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1501.558931]  #1: ffffc9000dbbfdd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1501.582978]  #2: ffffc9000dbbfc98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1501.604950]  #3: ffff88826888e520 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1501.628905] 4 locks held by kworker/u576:59/1513:
[ 1501.646947]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1501.672947]  #1: ffffc9000dcbfdd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1501.698754]  #2: ffffc9000dcbfc98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1501.723030]  #3: ffff888268880208 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1501.748891] 1 lock held by khungtaskd/1650:
[ 1501.767506]  #0: ffffffff855d5060 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire.constprop.0+0x0/0x40
[ 1501.794899] 4 locks held by kworker/192:2/2053:
[ 1501.814535] 1 lock held by in:imklog/3102:
[ 1501.831982]  #0: ffff88828ca12170 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xa1/0xc0
[ 1501.855919] 1 lock held by dmesg/3826:
[ 1501.873472]  #0: ffff88821e5c80d0 (&user->lock){+.+.}-{3:3}, at: new_sync_write+0x4cc/0x640
[ 1501.897875] 4 locks held by kworker/u576:0/7921:
[ 1501.918442]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1501.946235]  #1: ffffc9000abcfdd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1501.972770]  #2: ffffc9000abcfc98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1501.998071]  #3: ffff888268884130 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1502.025008] 4 locks held by kworker/u576:1/12391:
[ 1502.045611]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1502.072864]  #1: ffffc900078dfdd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1502.100181]  #2: ffffc900078dfc98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1502.126541]  #3: ffff888268888460 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1502.154874] 4 locks held by kworker/u576:2/12392:
[ 1502.175873]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1502.203986]  #1: ffffc90007937dd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1502.232522]  #2: ffffc90007937c98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1502.260005]  #3: ffff88826888b4c0 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1502.289015] 4 locks held by kworker/u576:3/12393:
[ 1502.311079]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1502.340862]  #1: ffffc9000798fdd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1502.369950]  #2: ffffc9000798fc98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1502.398992]  #3: ffff8882688931b8 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1502.429018] 4 locks held by kworker/u576:4/12394:
[ 1502.452516]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1502.482907]  #1: ffffc900079e7dd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1502.513932]  #2: ffffc900079e7c98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1502.544097]  #3: ffff88826888a1f0 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1502.575802] 4 locks held by kworker/u576:5/12395:
[ 1502.600046]  #0: ffff888262e50138 ((wq_completion)test-ww_mutex){+.+.}-{0:0}, at: process_one_work+0x740/0x13c0
[ 1502.630939]  #1: ffffc90007a3fdd0 ((work_completion)(&cycle->work)){+.+.}-{0:0}, at: process_one_work+0x76d/0x13c0
[ 1502.662034]  #2: ffffc90007a3fc98 (ww_class_acquire){+.+.}-{0:0}, at: process_one_work+0x817/0x13c0
[ 1502.691957]  #3: ffff88826888d908 (ww_class_mutex){+.+.}-{3:3}, at: test_cycle_work+0x14c/0x500 [test_ww_mutex]
[ 1502.723051] 4 locks held by kworker/u576:6/12396:



* Re: ww_mutex.sh hangs since v5.16-rc1
  2021-12-01  1:26 ` Li Zhijian
@ 2023-07-07 20:23 ` John Stultz
  -1 siblings, 0 replies; 7+ messages in thread
From: John Stultz @ 2023-07-07 20:23 UTC (permalink / raw)
  To: Li Zhijian
  Cc: peterz, mingo, will, longman, boqun.feng, open list,
	linux-kselftest, lkp, Chris Wilson, Dietmar Eggemann,
	Joel Fernandes, Maarten Lankhorst

On Tue, Nov 30, 2021 at 5:26 PM Li Zhijian <zhijianx.li@intel.com> wrote:
>
> LKP/0Day found that ww_mutex.sh cannot complete since v5.16-rc1. I'm sorry to
> say we failed to bisect to the first bad commit; the bisection ultimately
> pointed to a merge commit (91e1c99e17) instead.
>
> Because of this hang, other tests in the same group are also blocked in 0Day,
> so we hope this can be fixed ASAP.
>
> So if you have any ideas about this, or need more debug information, feel free to let me know :)
>
> BTW, ww_mutex.sh also failed in v5.15, but without hanging, and the hang does not seem to reproduce on a VM.
>

So, as part of the proxy-execution work, I've recently been trying to
understand why the patch series was causing apparent hangs in the
ww_mutex test with large (64) CPU counts.
I was assuming my changes were causing a lost wakeup somehow, but as I
dug in, I found that the stress_inorder_work() function appeared to be
live-locking.

I noticed that adding printks to the logic would change the behavior,
and finally realized I could reproduce a livelock against mainline by
adding a printk before the "return -EDEADLK;" in __ww_mutex_kill(),
making it clear the logic is timing-sensitive. Searching around, I then
found this old and unresolved thread.

Part of the issue is that we may not hit the timeout check at the end
of the loop, as the EDEADLK case short-cuts back to retry, allowing
the test to effectively get stuck.
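
To make that concrete, here is a condensed sketch of the documented
acquire/backoff pattern (see Documentation/locking/ww-mutex-design.rst)
that these stress tests are built around. The names (locks[], nlocks,
deadline) and the placement of the timeout check are illustrative, not
copied from test-ww_mutex.c, but it shows how the -EDEADLK path jumps
back to the retry label without ever reaching the loop-end timeout test:

#include <linux/jiffies.h>
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(sketch_ww_class);

static void acquire_backoff_sketch(struct ww_mutex *locks, int nlocks,
                                   unsigned long deadline)
{
        struct ww_acquire_ctx ctx;
        int contended, n, err;

        do {
                contended = -1;
                ww_acquire_init(&ctx, &sketch_ww_class);
retry:
                err = 0;
                for (n = 0; n < nlocks; n++) {
                        if (n == contended)
                                continue;       /* already held via lock_slow below */
                        err = ww_mutex_lock(&locks[n], &ctx);
                        if (err < 0)
                                break;          /* typically -EDEADLK */
                }

                /* back off: drop everything we are currently holding */
                if (contended > n)
                        ww_mutex_unlock(&locks[contended]);
                contended = n;
                while (n--)
                        ww_mutex_unlock(&locks[n]);

                if (err == -EDEADLK) {
                        /* wait for the winner, then retry with this lock held first */
                        ww_mutex_lock_slow(&locks[contended], &ctx);
                        goto retry;             /* the timeout below is never checked */
                }

                ww_acquire_fini(&ctx);
        } while (!err && !time_after(jiffies, deadline));
}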

I know that with ww_mutexes there's supposed to be a guarantee of
forward progress, since the older context wins, but it's not clear to
me that this holds here. The -EDEADLK case results in releasing and
reacquiring the locks (only with the contended lock being taken
first), and if a second -EDEADLK occurs, it starts over again from
scratch (though with the new contended lock being chosen first instead,
which seems to lose any progress).
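
For reference, my reading of the "older wins" rule, sketched in
illustrative code (the function and parameter names below are mine, not
the kernel's):

#include <linux/types.h>

/*
 * Each ww_acquire_ctx is stamped when ww_acquire_init() runs; on
 * contention the younger context (larger stamp) is the one that gets
 * -EDEADLK and has to back off, so the oldest acquirer should always be
 * able to complete its acquire phase.
 */
static bool younger_must_back_off(unsigned long my_stamp,
                                  unsigned long other_stamp)
{
        return my_stamp > other_stamp;  /* I started later, so I yield */
}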

So maybe the test has broken that guarantee in how it restarts, or,
with 128 threads trying to acquire a random order of 16 locks without
contention (and the order shifting slightly each time it does see
contention), it might just be a very big space to resolve if we don't
luck into good timing.

Anyway, I wanted to get some feedback from folks who have a better
theoretical understanding of ww_mutexes. With large CPU counts, are
we just asking for trouble here? Is the test doing something wrong? Or
is there possibly a ww_mutex bug under this?

thanks
-john

Thread overview (links below jump to the message on this page):
2021-12-01  1:26 ww_mutex.sh hangs since v5.16-rc1 Li Zhijian
2021-12-02 22:57 ` Peter Zijlstra
2021-12-07  4:27   ` Li Zhijian
2023-07-07 20:23 ` John Stultz
