* [linux-next][mainline/master] [IPR] [Function could be = "__mutex_lock_slowpath(lock)"]OOPs kernel crash while performing IPR test
@ 2023-08-27 8:26 Tasmiya Nalatwad
From: Tasmiya Nalatwad @ 2023-08-27 8:26 UTC (permalink / raw)
To: linux-kernel
Cc: peterz, abdhalee, mingo, will, longman, boqun.feng, sachinp, mputtash
Greetings,
[linux-next][mainline/master] [IPR] [Function could be =
"__mutex_lock_slowpath(lock)"]OOPs kernel crash while performing IPR test
--- Traces ---
[65818.211823] Kernel attempted to read user page (380) - exploit
attempt? (uid: 0)
[65818.211836] BUG: Kernel NULL pointer dereference on read at 0x00000380
[65818.211840] Faulting instruction address: 0xc000000000f5f2e4
[65818.211844] Oops: Kernel access of bad area, sig: 11 [#1]
[65818.211846] LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=8192 NUMA pSeries
[65818.211850] Modules linked in: rpadlpar_io rpaphp nfnetlink xsk_diag
bonding tls rfkill sunrpc ses enclosure scsi_transport_sas vmx_crypto
pseries_rng binfmt_misc ip_tables ext4 mbcache jbd2 dm_service_time
sd_mod t10_pi crc64_rocksoft crc64 sg ibmvfc scsi_transport_fc ibmveth
ipr dm_multipath dm_mirror dm_region_hash dm_log dm_mod fuse
[65818.211879] CPU: 16 PID: 613 Comm: kworker/16:3 Kdump: loaded Not
tainted 6.5.0-rc7-next-20230824-auto #1
[65818.211883] Hardware name: IBM,9080-HEX POWER10 (raw) 0x800200
0xf000006 of:IBM,FW1030.30 (NH1030_062) hv:phyp pSeries
[65818.211887] Workqueue: events sg_remove_sfp_usercontext [sg]
[65818.211894] NIP: c000000000f5f2e4 LR: c000000000f5f2d8 CTR:
c00000000032df70
[65818.211897] REGS: c0000000081c7a10 TRAP: 0300 Not tainted
(6.5.0-rc7-next-20230824-auto)
[65818.211900] MSR: 800000000280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>
CR: 28000882 XER: 20040000
[65818.211909] CFAR: c000000000f5b0a4 DAR: 0000000000000380 DSISR:
40000000 IRQMASK: 0
[65818.211909] GPR00: c000000000f5f2d8 c0000000081c7cb0 c000000001451300
0000000000000000
[65818.211909] GPR04: 00000000000000c0 00000000c0000000 c000000006c5a298
98a2c506000000c0
[65818.211909] GPR08: c00000006408ab00 c0000000022a3515 0000000000000000
c008000000327d60
[65818.211909] GPR12: c00000000032df70 c000000c1bc93f00 c000000000197cc8
c000000008797500
[65818.211909] GPR16: 0000000000000000 0000000000000000 0000000000000000
c000000003071ab0
[65818.211909] GPR20: c000000003494c05 c000000c11340040 0000000000000000
c0000000b9bb4030
[65818.211909] GPR24: c0000000b9bb4000 c00000005e8627c0 0000000000000000
c000000c19b91e00
[65818.211909] GPR28: c0000000b9bb5328 c00000005e8627c0 0000000000000380
0000000000000380
[65818.211946] NIP [c000000000f5f2e4] mutex_lock+0x34/0x90
[65818.211953] LR [c000000000f5f2d8] mutex_lock+0x28/0x90
[65818.211957] Call Trace:
[65818.211959] [c0000000081c7cb0] [c000000000f5f2d8]
mutex_lock+0x28/0x90 (unreliable)
[65818.211966] [c0000000081c7ce0] [c00000000032df9c]
blk_trace_remove+0x2c/0x80
[65818.211971] [c0000000081c7d10] [c0080000003205fc]
sg_device_destroy+0x44/0x110 [sg]
[65818.211976] [c0000000081c7d90] [c008000000322988]
sg_remove_sfp_usercontext+0x1d0/0x2c0 [sg]
[65818.211981] [c0000000081c7e40] [c000000000188010]
process_scheduled_works+0x230/0x4f0
[65818.211987] [c0000000081c7f10] [c00000000018b044]
worker_thread+0x1e4/0x500
[65818.211992] [c0000000081c7f90] [c000000000197df8] kthread+0x138/0x140
[65818.211996] [c0000000081c7fe0] [c00000000000df98]
start_kernel_thread+0x14/0x18
[65818.212000] Code: 38422050 7c0802a6 60000000 7c0802a6 fbe1fff8
7c7f1b78 f8010010 f821ffd1 4bffbd95 60000000 39400000 e90d0908
<7d20f8a8> 7c295000 40c20010 7d00f9ad
[65818.212013] ---[ end trace 0000000000000000 ]---
I tried running gdb on vmlinux using the faulting address. It looks like
the crash is initiated from the call to "__mutex_lock_slowpath(lock);"
[root@localhost ]# gdb vmlinux -ex "disassemble /m 0xc000000000f5f2e4"
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-15.el8
Reading symbols from vmlinux...done.
Dump of assembler code for function mutex_lock:
282 {
0xc000000000f5f2b0 <+0>: addis r2,r12,79
0xc000000000f5f2b4 <+4>: addi r2,r2,8272
0xc000000000f5f2b8 <+8>: mflr r0
0xc000000000f5f2bc <+12>: bl 0xc0000000000807d4 <mcount>
283 might_sleep();
0xc000000000f5f2c0 <+16>: mflr r0
0xc000000000f5f2c4 <+20>: std r31,-8(r1)
0xc000000000f5f2c8 <+24>: mr r31,r3
0xc000000000f5f2cc <+28>: std r0,16(r1)
0xc000000000f5f2d0 <+32>: stdu r1,-48(r1)
0xc000000000f5f2d4 <+36>: bl 0xc000000000f5b068
<__cond_resched+8>
0xc000000000f5f2d8 <+40>: nop
284
285 if (!__mutex_trylock_fast(lock))
286 __mutex_lock_slowpath(lock);
0xc000000000f5f304 <+84>: addi r1,r1,48
0xc000000000f5f308 <+88>: mr r3,r31
0xc000000000f5f30c <+92>: ld r0,16(r1)
0xc000000000f5f310 <+96>: ld r31,-8(r1)
0xc000000000f5f314 <+100>: mtlr r0
0xc000000000f5f318 <+104>: b 0xc000000000f5f298
<__mutex_lock_slowpath+8>
0xc000000000f5f31c <+108>: nop
0xc000000000f5f320 <+112>: addi r1,r1,48
0xc000000000f5f324 <+116>: ld r0,16(r1)
0xc000000000f5f328 <+120>: ld r31,-8(r1)
0xc000000000f5f32c <+124>: mtlr r0
0xc000000000f5f330 <+128>: blr
0xc000000000f5f334: nop
0xc000000000f5f338: nop
0xc000000000f5f33c: nop
End of assembler dump.
[root@localhost ]# grep -irn "mutex_lock_slowpath(lock)"
kernel/locking/mutex.c:286:
--
Regards,
Tasmiya Nalatwad
IBM Linux Technology Center
* Re: [linux-next][mainline/master] [IPR] [Function could be = "__mutex_lock_slowpath(lock)"]OOPs kernel crash while performing IPR test
From: Waiman Long @ 2023-08-27 16:09 UTC (permalink / raw)
To: Tasmiya Nalatwad, linux-kernel; +Cc: peterz, abdhalee, mingo, will, boqun.feng
On 8/27/23 04:26, Tasmiya Nalatwad wrote:
> Greetings,
>
> [linux-next][mainline/master] [IPR] [Function could be =
> "__mutex_lock_slowpath(lock)"]OOPs kernel crash while performing IPR test
>
> [... full oops trace trimmed; quoted in full in the original message above ...]
The panic happens when a device is being removed. Maybe it is a
use-after-free problem. The mutex lock itself seems to be in an area
that is no longer accessible. It is not a problem in the locking code
itself.
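This hazard (a lock embedded in a refcounted object outliving the object) can be modelled outside the kernel. The sketch below is an illustrative userspace model with invented names (`FakeDevice`, etc.), not the actual sg/block-layer code; it only demonstrates why an embedded lock must not be taken after the last reference is dropped:

```python
# Illustrative userspace model (invented names, not kernel code) of a
# lock embedded in a refcounted object: once the last reference is
# dropped, the lock's memory is gone and must not be touched.
import threading

class FakeDevice:
    def __init__(self):
        self.lock = threading.Lock()  # embedded lock, like the one mutex_lock() faulted on
        self.refs = 1
        self.freed = False            # stands in for kfree() of the object

    def get(self):
        assert not self.freed, "get() on a freed object"
        self.refs += 1
        return self

    def put(self):
        self.refs -= 1
        if self.refs == 0:
            self.freed = True         # object (and its embedded lock) released

dev = FakeDevice()

# Safe ordering: pin the object for as long as its lock is in use.
dev.get()
with dev.lock:
    pass
dev.put()                # drop our pin; one reference still remains
assert not dev.freed

dev.put()                # final put: the lock's memory is now invalid
assert dev.freed
# Taking dev.lock past this point is the use-after-free pattern;
# in the kernel it oopses rather than raising an error.
```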
Cheers,
Longman
* Re: [linux-next][mainline/master] [IPR] [Function could be = "__mutex_lock_slowpath(lock)"]OOPs kernel crash while performing IPR test
From: Tasmiya Nalatwad @ 2023-08-29 5:30 UTC (permalink / raw)
To: Waiman Long, linux-kernel; +Cc: peterz, abdhalee, mingo, will, boqun.feng
Greetings,
Thank you, Waiman Long, for your analysis. The issue is seen consistently
on every build of linux-next and mainline/master.
On 8/27/23 21:39, Waiman Long wrote:
>
> On 8/27/23 04:26, Tasmiya Nalatwad wrote:
>> [... quoted original message and oops trace trimmed; quoted in full above ...]
>
> The panic happens when a device is being removed. Maybe it is a
> use-after-free problem. The mutex lock itself seems to be in an area
> that is no longer accessible. It is not a problem in the locking code
> itself.
>
> Cheers,
> Longman
>
--
Regards,
Tasmiya Nalatwad
IBM Linux Technology Center
* Re: [linux-next][mainline/master] [IPR] [Function could be = "__mutex_lock_slowpath(lock)"]OOPs kernel crash while performing IPR test
From: Mohamed Khalfella @ 2024-01-29 19:23 UTC (permalink / raw)
To: Yu Kuai
Cc: linux-kernel, peterz, abdhalee, mingo, will, longman, boqun.feng,
sachinp, mputtash, Tasmiya Nalatwad
On 2023-08-27 13:56:14 +0530, Tasmiya Nalatwad wrote:
> Greetings,
>
> [linux-next][mainline/master] [IPR] [Function could be =
> "__mutex_lock_slowpath(lock)"]OOPs kernel crash while performing IPR test
Hello,
We hit this issue while testing the 6.6.9 LTS kernel, and I narrowed it down
to commit fcaa174a9c99 ("scsi/sg: don't grab scsi host module reference").
Not holding a reference to the scsi_device caused the last reference to
be dropped in sg_remove_sfp_usercontext(). That caused request_queue to
be set to NULL in scsi_device_dev_release(), and passing the NULL queue
to blk_trace_remove() caused this panic. More details below.
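The faulting address is consistent with this: with q == NULL, the mutex_lock() inside blk_trace_remove() dereferences a member of q, so the access lands at offsetof() of that member from a NULL base, matching the 0x380 in DAR and GPR28 of the original oops. A small ctypes model (the padding/layout below is invented, not the real struct request_queue; only the 0x380 comes from the trace):

```python
# Model of why a NULL request_queue faults at a small address: the
# mutex_lock() in blk_trace_remove() dereferences a member of q, so
# with q == NULL the access lands at offsetof(request_queue, member).
# The layout below is invented; only 0x380 comes from the oops
# (DAR: 0x380, GPR28: 0x380).
import ctypes

class FakeRequestQueue(ctypes.Structure):
    _fields_ = [
        ("pad", ctypes.c_char * 0x380),  # assumed padding, not the real layout
        ("debugfs_mutex", ctypes.c_ulong),
    ]

q = 0  # NULL base pointer
fault_addr = q + FakeRequestQueue.debugfs_mutex.offset
assert fault_addr == 0x380  # matches the faulting address in the trace
```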
The issue can be reproduced by having a userspace process hold the last
refcount to a device that has been removed.
# python3
>>> import os
>>> fd = os.open('/dev/sg22', os.O_RDONLY)
>>> # wait until the device is removed
>>> os.close(fd)
#
# echo 1 > /sys/bus/pci/devices/0000\:5e\:00.0/remove
# # Now run >>> os.close(fd) above
python3-14739 53..... 3782240930us : sg_remove_sfp_kprobe: (sg_remove_sfp+0x0/0xa0 <ffffffff816dd5c0>) kref=0xffff88b047055320
python3-14739 53..... 3782240934us : <stack trace>
=> sg_remove_sfp+0x1/0xa0 <ffffffff816dd5c1>
=> sg_release+0xa2/0x100 <ffffffff816de5e2>
=> __fput+0xe9/0x280 <ffffffff812fcf79>
=> __x64_sys_close+0x39/0x80 <ffffffff812f58a9>
=> do_syscall_64+0x35/0x80 <ffffffff81b57485>
=> entry_SYSCALL_64_after_hwframe+0x46/0xb0 <ffffffff81c0006a>
kworker/-2357 53..... 3782240948us : scsi_device_dev_release_kprobe: (scsi_device_dev_release+0x0/0x2c0 <ffffffff816c0680>) device=0xffff88ac553a61c0
kworker/-2357 53..... 3782240951us : <stack trace>
=> scsi_device_dev_release+0x1/0x2c0 <ffffffff816c0681>
=> device_release+0x31/0x90 <ffffffff81662fc1>
=> kobject_put+0x6d/0x180 <ffffffff81b3527d>
=> scsi_device_put+0x20/0x30 <ffffffff816b1190>
=> sg_remove_sfp_usercontext+0xfb/0x190 <ffffffff816de73b>
=> process_one_work+0x133/0x2f0 <ffffffff810a5983>
=> worker_thread+0x2ec/0x400 <ffffffff810a6dbc>
=> kthread+0xe2/0x110 <ffffffff810aed42>
=> ret_from_fork+0x2d/0x50 <ffffffff8103ddad>
=> ret_from_fork_asm+0x11/0x20 <ffffffff810017d1>
python3-14739 was holding the last refcount. sg_remove_sfp() queued
sg_remove_sfp_usercontext() for execution. scsi_device_dev_release()
set sdev->request_queue to NULL, causing the panic.
kworker/49:1-607 [049] ..... 519.002877: scsi_device_dev_release_kprobe: (scsi_device_dev_release+0x0/0x2c0 <ffffffff816c0680>) device=0xffff889d227bf1c0
kworker/49:1-607 [049] ..... 519.002882: <stack trace>
=> scsi_device_dev_release+0x1/0x2c0 <ffffffff816c0681>
=> device_release+0x31/0x90 <ffffffff81662fc1>
=> kobject_put+0x6d/0x180 <ffffffff81b3526d>
=> scsi_device_put+0x20/0x30 <ffffffff816b1190>
=> sg_device_destroy+0x2f/0xb0 <ffffffff816dc16f>
=> sg_remove_sfp_usercontext+0x133/0x190 <ffffffff816de763>
=> process_one_work+0x133/0x2f0 <ffffffff810a5983>
=> worker_thread+0x2ec/0x400 <ffffffff810a6dbc>
=> kthread+0xe2/0x110 <ffffffff810aed42>
=> ret_from_fork+0x2d/0x50 <ffffffff8103ddad>
=> ret_from_fork_asm+0x11/0x20 <ffffffff810017d1>
Reverting 80b6051085c5 ("scsi: sg: Fix checking return value of
blk_get_queue()") and fcaa174a9c99 ("scsi/sg: don't grab scsi host module
reference") fixed the problem. The stack trace above shows the last
refcount of the scsi_device being dropped from sg_device_destroy().
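The ordering constraint can be sketched as follows. This is an illustrative model with invented names (`FakeScsiDevice`, `fake_blk_trace_remove`), not the actual kernel patch; it only shows the ordering the reverts effectively restore:

```python
# Sketch (invented names, not the actual kernel fix) of the ordering
# constraint: finish using sdev->request_queue (e.g. the blk_trace
# teardown) before the last scsi_device reference is dropped, because
# the release callback clears request_queue.
events = []

class FakeScsiDevice:
    def __init__(self):
        self.refs = 1
        self.request_queue = object()

    def put(self):
        self.refs -= 1
        if self.refs == 0:
            # stands in for scsi_device_dev_release()
            self.request_queue = None
            events.append("release")

def fake_blk_trace_remove(q):
    # the real function locks a member of q, so q must not be NULL
    assert q is not None, "NULL request_queue: this is the oops"
    events.append("blk_trace_remove")

sdev = FakeScsiDevice()

# Safe ordering: use the queue first, drop the last reference last.
fake_blk_trace_remove(sdev.request_queue)
sdev.put()
assert events == ["blk_trace_remove", "release"]
# Dropping the last reference first would hand the teardown a None
# queue -- the sequence the kprobe traces above captured.
```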