* Re: [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem
2022-04-11 20:00 [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem yanjun.zhu
@ 2022-04-11 11:50 ` Jason Gunthorpe
2022-04-12 13:43 ` Yanjun Zhu
0 siblings, 1 reply; 7+ messages in thread
From: Jason Gunthorpe @ 2022-04-11 11:50 UTC (permalink / raw)
To: yanjun.zhu; +Cc: zyjzyj2000, leon, linux-rdma, Yi Zhang
On Mon, Apr 11, 2022 at 04:00:18PM -0400, yanjun.zhu@linux.dev wrote:
> @@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
> elem->obj = obj;
> kref_init(&elem->ref_cnt);
>
> - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
> - &pool->next, GFP_KERNEL);
> + xa_lock_irqsave(&pool->xa, flags);
> + err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
> + &pool->next, GFP_ATOMIC);
> + xa_unlock_irqrestore(&pool->xa, flags);
No to using atomics, this needs to be either the _irq or _bh variant
Jason
* [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem
@ 2022-04-11 20:00 yanjun.zhu
2022-04-11 11:50 ` Jason Gunthorpe
0 siblings, 1 reply; 7+ messages in thread
From: yanjun.zhu @ 2022-04-11 20:00 UTC (permalink / raw)
To: zyjzyj2000, jgg, leon, yanjun.zhu, linux-rdma; +Cc: Yi Zhang
From: Zhu Yanjun <yanjun.zhu@linux.dev>
This is a deadlock problem.
xa_lock is first acquired in this path:
{SOFTIRQ-ON-W} state was registered at:
lock_acquire+0x1d2/0x5a0
_raw_spin_lock+0x33/0x80
__rxe_add_to_pool+0x183/0x230 [rdma_rxe]
__ib_alloc_pd+0xf9/0x550 [ib_core]
ib_mad_init_device+0x2d9/0xd20 [ib_core]
add_client_context+0x2fa/0x450 [ib_core]
enable_device_and_get+0x1b7/0x350 [ib_core]
ib_register_device+0x757/0xaf0 [ib_core]
rxe_register_device+0x2eb/0x390 [rdma_rxe]
rxe_net_add+0x83/0xc0 [rdma_rxe]
rxe_newlink+0x76/0x90 [rdma_rxe]
nldev_newlink+0x245/0x3e0 [ib_core]
rdma_nl_rcv_msg+0x2d4/0x790 [ib_core]
rdma_nl_rcv+0x1ca/0x3f0 [ib_core]
netlink_unicast+0x43b/0x640
netlink_sendmsg+0x7eb/0xc40
sock_sendmsg+0xe0/0x110
__sys_sendto+0x1d7/0x2b0
__x64_sys_sendto+0xdd/0x1b0
do_syscall_64+0x37/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
Then xa_lock is acquired again, this time from softirq context:
{IN-SOFTIRQ-W}:
Call Trace:
<TASK>
dump_stack_lvl+0x44/0x57
mark_lock.part.52.cold.79+0x3c/0x46
__lock_acquire+0x1565/0x34a0
lock_acquire+0x1d2/0x5a0
_raw_spin_lock_irqsave+0x42/0x90
rxe_pool_get_index+0x72/0x1d0 [rdma_rxe]
rxe_get_av+0x168/0x2a0 [rdma_rxe]
rxe_requester+0x75b/0x4a90 [rdma_rxe]
rxe_do_task+0x134/0x230 [rdma_rxe]
tasklet_action_common.isra.12+0x1f7/0x2d0
__do_softirq+0x1ea/0xa4c
run_ksoftirqd+0x32/0x60
smpboot_thread_fn+0x503/0x860
kthread+0x29b/0x340
ret_from_fork+0x1f/0x30
</TASK>
From the above: the function __rxe_add_to_pool acquires xa_lock
and is then interrupted by a softirq. In softirq context,
rxe_pool_get_index tries to acquire the same xa_lock.
The result is a deadlock.
[ 296.806097] CPU0
[ 296.808550] ----
[ 296.811003] lock(&xa->xa_lock#15); <----- __rxe_add_to_pool
[ 296.814583] <Interrupt>
[ 296.817209] lock(&xa->xa_lock#15); <---- rxe_pool_get_index
[ 296.820961]
*** DEADLOCK ***
Fixes: 3225717f6dfa ("RDMA/rxe: Replace red-black trees by xarrays")
Reported-and-tested-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
---
V1->V2: Replace GFP_KERNEL with GFP_ATOMIC
---
drivers/infiniband/sw/rxe/rxe_pool.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87066d04ed18..9675184a759f 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -121,6 +121,7 @@ void *rxe_alloc(struct rxe_pool *pool)
struct rxe_pool_elem *elem;
void *obj;
int err;
+ unsigned long flags;
if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC)))
return NULL;
@@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
elem->obj = obj;
kref_init(&elem->ref_cnt);
- err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
- &pool->next, GFP_KERNEL);
+ xa_lock_irqsave(&pool->xa, flags);
+ err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+ &pool->next, GFP_ATOMIC);
+ xa_unlock_irqrestore(&pool->xa, flags);
if (err)
goto err_free;
@@ -155,6 +158,7 @@ void *rxe_alloc(struct rxe_pool *pool)
int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
{
int err;
+ unsigned long flags;
if (WARN_ON(pool->flags & RXE_POOL_ALLOC))
return -EINVAL;
@@ -166,8 +170,10 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
elem->obj = (u8 *)elem - pool->elem_offset;
kref_init(&elem->ref_cnt);
- err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
- &pool->next, GFP_KERNEL);
+ xa_lock_irqsave(&pool->xa, flags);
+ err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+ &pool->next, GFP_ATOMIC);
+ xa_unlock_irqrestore(&pool->xa, flags);
if (err)
goto err_cnt;
@@ -200,8 +206,11 @@ static void rxe_elem_release(struct kref *kref)
{
struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
struct rxe_pool *pool = elem->pool;
+ unsigned long flags;
- xa_erase(&pool->xa, elem->index);
+ xa_lock_irqsave(&pool->xa, flags);
+ __xa_erase(&pool->xa, elem->index);
+ xa_unlock_irqrestore(&pool->xa, flags);
if (pool->cleanup)
pool->cleanup(elem);
--
2.27.0
* Re: [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem
2022-04-11 11:50 ` Jason Gunthorpe
@ 2022-04-12 13:43 ` Yanjun Zhu
2022-04-12 13:53 ` Jason Gunthorpe
0 siblings, 1 reply; 7+ messages in thread
From: Yanjun Zhu @ 2022-04-12 13:43 UTC (permalink / raw)
To: Jason Gunthorpe, yanjun.zhu; +Cc: zyjzyj2000, leon, linux-rdma, Yi Zhang
On 2022/4/11 19:50, Jason Gunthorpe wrote:
> On Mon, Apr 11, 2022 at 04:00:18PM -0400, yanjun.zhu@linux.dev wrote:
>> @@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
>> elem->obj = obj;
>> kref_init(&elem->ref_cnt);
>>
>> - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
>> - &pool->next, GFP_KERNEL);
>> + xa_lock_irqsave(&pool->xa, flags);
>> + err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
>> + &pool->next, GFP_ATOMIC);
>> + xa_unlock_irqrestore(&pool->xa, flags);
>
> No to using atomics, this needs to be either the _irq or _bh variant
If I understand you correctly, you mean that we should use
xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh instead of
xa_lock_irqsave/xa_unlock_irqrestore?
If so, when xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh is used
here, the warning below appears. It means that __rxe_add_to_pool
disables softirqs, but fpu_clone enables them.
"
Apr 12 16:24:53 kernel: softirqs last enabled at (13086):
[<ffffffff91830d26>] fpu_clone+0xf6/0x570
Apr 12 16:24:53 kernel: softirqs last disabled at (13129):
[<ffffffffc077f319>] __rxe_add_to_pool+0x49/0xa0 [rdma_rxe]
"
As such, it is better to use xa_lock_irqsave/xa_unlock_irqrestore +
__xa_alloc(..., GFP_ATOMIC/GFP_NOWAIT).
Zhu Yanjun
>
> Jason
* Re: [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem
2022-04-12 13:43 ` Yanjun Zhu
@ 2022-04-12 13:53 ` Jason Gunthorpe
2022-04-12 14:28 ` Yanjun Zhu
0 siblings, 1 reply; 7+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 13:53 UTC (permalink / raw)
To: Yanjun Zhu; +Cc: zyjzyj2000, leon, linux-rdma, Yi Zhang
On Tue, Apr 12, 2022 at 09:43:28PM +0800, Yanjun Zhu wrote:
> On 2022/4/11 19:50, Jason Gunthorpe wrote:
> > On Mon, Apr 11, 2022 at 04:00:18PM -0400, yanjun.zhu@linux.dev wrote:
> > > @@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
> > > elem->obj = obj;
> > > kref_init(&elem->ref_cnt);
> > > - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
> > > - &pool->next, GFP_KERNEL);
> > > + xa_lock_irqsave(&pool->xa, flags);
> > > + err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
> > > + &pool->next, GFP_ATOMIC);
> > > + xa_unlock_irqrestore(&pool->xa, flags);
> >
> > No to using atomics, this needs to be either the _irq or _bh variant
>
> If I understand you correctly, you mean that we should use
> xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh instead of
> xa_lock_irqsave/xa_unlock_irqrestore?
This is correct
> If so, when xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh is used here,
> the warning below appears. It means that __rxe_add_to_pool disables
> softirqs, but fpu_clone enables them.
I don't know what this is; you need to show the whole debug output.
fpu_clone does not call rxe_add_to_pool
> As such, it is better to use xa_lock_irqsave/xa_unlock_irqrestore +
> __xa_alloc(..., GFP_ATOMIC/GFP_NOWAIT).
No
Jason
* Re: [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem
2022-04-12 13:53 ` Jason Gunthorpe
@ 2022-04-12 14:28 ` Yanjun Zhu
2022-04-12 14:31 ` Jason Gunthorpe
0 siblings, 1 reply; 7+ messages in thread
From: Yanjun Zhu @ 2022-04-12 14:28 UTC (permalink / raw)
To: Jason Gunthorpe, Yanjun Zhu; +Cc: zyjzyj2000, leon, linux-rdma, Yi Zhang
On 2022/4/12 21:53, Jason Gunthorpe wrote:
> On Tue, Apr 12, 2022 at 09:43:28PM +0800, Yanjun Zhu wrote:
>> On 2022/4/11 19:50, Jason Gunthorpe wrote:
>>> On Mon, Apr 11, 2022 at 04:00:18PM -0400, yanjun.zhu@linux.dev wrote:
>>>> @@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
>>>> elem->obj = obj;
>>>> kref_init(&elem->ref_cnt);
>>>> - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
>>>> - &pool->next, GFP_KERNEL);
>>>> + xa_lock_irqsave(&pool->xa, flags);
>>>> + err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
>>>> + &pool->next, GFP_ATOMIC);
>>>> + xa_unlock_irqrestore(&pool->xa, flags);
>>>
>>> No to using atomics, this needs to be either the _irq or _bh variant
>>
>> If I understand you correctly, you mean that we should use
>> xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh instead of
>> xa_lock_irqsave/xa_unlock_irqrestore?
>
> This is correct
>
>> If so, when xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh is used here,
>> the warning below appears. It means that __rxe_add_to_pool disables
>> softirqs, but fpu_clone enables them.
>
> I don't know what this is, you need to show the whole debug.
The following warnings appear when xa_lock_bh +
__xa_alloc(..., GFP_KERNEL) is used. The diff is below.
When xa_lock_irqsave/irqrestore + __xa_alloc(..., GFP_ATOMIC) is used,
the warning does not appear.
It seems to be related to local_bh_enable/disable.
------------------------------the Diff---------------------------
"
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -138,8 +138,10 @@ void *rxe_alloc(struct rxe_pool *pool)
elem->obj = obj;
kref_init(&elem->ref_cnt);
- err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
- &pool->next, GFP_KERNEL);
+ xa_lock_bh(&pool->xa);
+ err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+ &pool->next, GFP_KERNEL);
+ xa_unlock_bh(&pool->xa);
if (err)
goto err_free;
@@ -166,8 +168,10 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct
rxe_pool_elem *elem)
elem->obj = (u8 *)elem - pool->elem_offset;
kref_init(&elem->ref_cnt);
- err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
- &pool->next, GFP_KERNEL);
+ xa_lock_bh(&pool->xa);
+ err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+ &pool->next, GFP_KERNEL);
+ xa_unlock_bh(&pool->xa);
if (err)
goto err_cnt;
------------the diff-----------------------------------
--------------The warings--------------------------------------------
[ 92.107272] ------------[ cut here ]------------
[ 92.107283] WARNING: CPU: 68 PID: 4009 at kernel/softirq.c:363
__local_bh_enable_ip+0x96/0xe0
[ 92.107292] Modules linked in: rdma_rxe(OE) ip6_udp_tunnel udp_tunnel
rds_rdma rds xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT
nf_reject_ipv4 nft_compat nft_chain_nat nf_nat nf_conntrack
nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink tun bridge stp llc
vfat fat rpcrdma sunrpc rdma_ucm ib_srpt ib_isert iscsi_target_mod
intel_rapl_msr intel_rapl_common target_core_mod ib_iser i10nm_edac
libiscsi nfit scsi_transport_iscsi libnvdimm rdma_cm iw_cm ib_cm
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif kvm
iTCO_wdt irdma iTCO_vendor_support irqbypass crct10dif_pclmul
crc32_pclmul i40e ghash_clmulni_intel rapl ib_uverbs intel_cstate
isst_if_mbox_pci acpi_ipmi ib_core intel_uncore isst_if_mmio pcspkr
wmi_bmof ipmi_si mei_me isst_if_common i2c_i801 mei ipmi_devintf
i2c_smbus intel_pch_thermal ipmi_msghandler acpi_power_meter ip_tables
xfs libcrc32c sd_mod t10_pi crc64_rocksoft crc64 sg mgag200 i2c_algo_bit
drm_shmem_helper drm_kms_helper syscopyarea sysfillrect
[ 92.107420] sysimgblt fb_sys_fops ice ahci drm crc32c_intel libahci
megaraid_sas libata tg3 wmi dm_mirror dm_region_hash dm_log dm_mod fuse
[ 92.107445] CPU: 68 PID: 4009 Comm: rping Kdump: loaded Tainted: G S
OE 5.18.0-rc2rxe #15
[ 92.107448] Hardware name: Dell Inc. PowerEdge R750/06V45N, BIOS
1.2.4 05/28/2021
[ 92.107450] RIP: 0010:__local_bh_enable_ip+0x96/0xe0
[ 92.107454] Code: 00 e8 7e 17 03 00 e8 c9 8c 14 00 fb 65 8b 05 b1 54
93 56 85 c0 75 05 0f 1f 44 00 00 5b 5d c3 65 8b 05 5a 5c 93 56 85 c0 75
9d <0f> 0b eb 99 e8 51 89 14 00 eb 9a 48 89 ef e8 a7 7f 07 00 eb a3 0f
[ 92.107457] RSP: 0018:ff6d2e1209887a70 EFLAGS: 00010046
[ 92.107461] RAX: 0000000000000000 RBX: 0000000000000201 RCX:
000000000163f5c1
[ 92.107464] RDX: ff303e1d677c6118 RSI: 0000000000000201 RDI:
ffffffffc0866336
[ 92.107466] RBP: ffffffffc0866336 R08: ff303e1d677c6190 R09:
00000000fffffffe
[ 92.107468] R10: 000000002c8f236f R11: 00000000bf76f5c6 R12:
ff303e1e6166b368
[ 92.107470] R13: ff303e1e6166a000 R14: ff6d2e1209887ae8 R15:
ff303e1e8d815a68
[ 92.107472] FS: 00007ff6d3bf4740(0000) GS:ff303e2c7f900000(0000)
knlGS:0000000000000000
[ 92.107475] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 92.107477] CR2: 00007ff6cd7fdfb8 CR3: 000000103ee6c004 CR4:
0000000000771ee0
[ 92.107479] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 92.107481] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
0000000000000400
[ 92.107483] PKRU: 55555554
[ 92.107485] Call Trace:
[ 92.107486] <TASK>
[ 92.107490] __rxe_add_to_pool+0x76/0xa0 [rdma_rxe]
[ 92.107500] rxe_create_ah+0x59/0xe0 [rdma_rxe]
[ 92.107511] _rdma_create_ah+0x148/0x180 [ib_core]
[ 92.107546] rdma_create_ah+0xb7/0xf0 [ib_core]
[ 92.107565] cm_alloc_msg+0x5c/0x170 [ib_cm]
[ 92.107577] cm_alloc_priv_msg+0x1b/0x50 [ib_cm]
[ 92.107584] ib_send_cm_req+0x213/0x3f0 [ib_cm]
[ 92.107613] rdma_connect_locked+0x238/0x8e0 [rdma_cm]
[ 92.107637] rdma_connect+0x2b/0x40 [rdma_cm]
[ 92.107646] ucma_connect+0x128/0x1a0 [rdma_ucm]
[ 92.107690] ucma_write+0xaf/0x140 [rdma_ucm]
[ 92.107698] vfs_write+0xb8/0x370
[ 92.107707] ksys_write+0xbb/0xd0
[ 92.107709] ? syscall_trace_enter.isra.15+0x169/0x220
[ 92.107719] do_syscall_64+0x37/0x80
[ 92.107725] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 92.107730] RIP: 0033:0x7ff6d3011847
[ 92.107733] Code: c3 66 90 41 54 49 89 d4 55 48 89 f5 53 89 fb 48 83
ec 10 e8 1b fd ff ff 4c 89 e2 48 89 ee 89 df 41 89 c0 b8 01 00 00 00 0f
05 <48> 3d 00 f0 ff ff 77 35 44 89 c7 48 89 44 24 08 e8 54 fd ff ff 48
[ 92.107736] RSP: 002b:00007ffdd7b90af0 EFLAGS: 00000293 ORIG_RAX:
0000000000000001
[ 92.107740] RAX: ffffffffffffffda RBX: 0000000000000003 RCX:
00007ff6d3011847
[ 92.107742] RDX: 0000000000000128 RSI: 00007ffdd7b90b20 RDI:
0000000000000003
[ 92.107744] RBP: 00007ffdd7b90b20 R08: 0000000000000000 R09:
00007ff6cd7fe700
[ 92.107745] R10: 00000000ffffffff R11: 0000000000000293 R12:
0000000000000128
[ 92.107747] R13: 0000000000000011 R14: 0000000000000000 R15:
000056360dabca18
[ 92.107768] </TASK>
[ 92.107769] irq event stamp: 12947
[ 92.107771] hardirqs last enabled at (12945): [<ffffffffaa199135>]
_raw_read_unlock_irqrestore+0x55/0x70
[ 92.107775] hardirqs last disabled at (12946): [<ffffffffaa19895c>]
_raw_spin_lock_irqsave+0x4c/0x50
[ 92.107779] softirqs last enabled at (12900): [<ffffffffa9630d26>]
fpu_clone+0xf6/0x570
[ 92.107783] softirqs last disabled at (12947): [<ffffffffc0866309>]
__rxe_add_to_pool+0x49/0xa0 [rdma_rxe]
[ 92.107788] ---[ end trace 0000000000000000 ]---
[ 92.108017] ------------[ cut here ]------------
[ 92.108024] raw_local_irq_restore() called with IRQs enabled
[ 92.108029] WARNING: CPU: 68 PID: 4009 at
kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x1d/0x20
[ 92.108034] Modules linked in: rdma_rxe(OE) ip6_udp_tunnel udp_tunnel
rds_rdma rds xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT
nf_reject_ipv4 nft_compat nft_chain_nat nf_nat nf_conntrack
nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink tun bridge stp llc
vfat fat rpcrdma sunrpc rdma_ucm ib_srpt ib_isert iscsi_target_mod
intel_rapl_msr intel_rapl_common target_core_mod ib_iser i10nm_edac
libiscsi nfit scsi_transport_iscsi libnvdimm rdma_cm iw_cm ib_cm
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif kvm
iTCO_wdt irdma iTCO_vendor_support irqbypass crct10dif_pclmul
crc32_pclmul i40e ghash_clmulni_intel rapl ib_uverbs intel_cstate
isst_if_mbox_pci acpi_ipmi ib_core intel_uncore isst_if_mmio pcspkr
wmi_bmof ipmi_si mei_me isst_if_common i2c_i801 mei ipmi_devintf
i2c_smbus intel_pch_thermal ipmi_msghandler acpi_power_meter ip_tables
xfs libcrc32c sd_mod t10_pi crc64_rocksoft crc64 sg mgag200 i2c_algo_bit
drm_shmem_helper drm_kms_helper syscopyarea sysfillrect
[ 92.108160] sysimgblt fb_sys_fops ice ahci drm crc32c_intel libahci
megaraid_sas libata tg3 wmi dm_mirror dm_region_hash dm_log dm_mod fuse
[ 92.108184] CPU: 68 PID: 4009 Comm: rping Kdump: loaded Tainted: G S
W OE 5.18.0-rc2rxe #15
[ 92.108187] Hardware name: Dell Inc. PowerEdge R750/06V45N, BIOS
1.2.4 05/28/2021
[ 92.108189] RIP: 0010:warn_bogus_irq_restore+0x1d/0x20
[ 92.108193] Code: 24 48 c7 c7 48 4f 94 aa e8 79 03 fb ff 80 3d cc ce
0f 01 00 74 01 c3 48 c7 c7 68 c7 94 aa c6 05 bb ce 0f 01 01 e8 67 02 fb
ff <0f> 0b c3 44 8b 05 b5 14 14 01 55 53 65 48 8b 1c 25 80 f1 01 00 45
[ 92.108196] RSP: 0018:ff6d2e1209887b88 EFLAGS: 00010282
[ 92.108199] RAX: 0000000000000000 RBX: 0000000000000200 RCX:
0000000000000002
[ 92.108201] RDX: 0000000000000002 RSI: ffffffffaa99de72 RDI:
00000000ffffffff
[ 92.108203] RBP: ff303e1d60ba2c78 R08: 0000000000000000 R09:
c0000000ffff7fff
[ 92.108205] R10: 0000000000000001 R11: ff6d2e12098879a8 R12:
ff303e1d60ba4138
[ 92.108207] R13: ff303e1d60ba4000 R14: ff303e1d60ba2c78 R15:
0000000000000206
[ 92.108209] FS: 00007ff6d3bf4740(0000) GS:ff303e2c7f900000(0000)
knlGS:0000000000000000
[ 92.108212] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 92.108214] CR2: 00007ff6cd7fdfb8 CR3: 000000103ee6c004 CR4:
0000000000771ee0
[ 92.108216] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 92.108218] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
0000000000000400
[ 92.108219] PKRU: 55555554
[ 92.108221] Call Trace:
[ 92.108223] <TASK>
[ 92.108225] _raw_spin_unlock_irqrestore+0x63/0x70
[ 92.108230] ib_send_cm_req+0x2c0/0x3f0 [ib_cm]
[ 92.108257] rdma_connect_locked+0x238/0x8e0 [rdma_cm]
[ 92.108383] rdma_connect+0x2b/0x40 [rdma_cm]
[ 92.108393] ucma_connect+0x128/0x1a0 [rdma_ucm]
[ 92.108429] ucma_write+0xaf/0x140 [rdma_ucm]
[ 92.108437] vfs_write+0xb8/0x370
[ 92.108444] ksys_write+0xbb/0xd0
[ 92.108446] ? syscall_trace_enter.isra.15+0x169/0x220
[ 92.108454] do_syscall_64+0x37/0x80
[ 92.108459] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 92.108463] RIP: 0033:0x7ff6d3011847
[ 92.108466] Code: c3 66 90 41 54 49 89 d4 55 48 89 f5 53 89 fb 48 83
ec 10 e8 1b fd ff ff 4c 89 e2 48 89 ee 89 df 41 89 c0 b8 01 00 00 00 0f
05 <48> 3d 00 f0 ff ff 77 35 44 89 c7 48 89 44 24 08 e8 54 fd ff ff 48
[ 92.108469] RSP: 002b:00007ffdd7b90af0 EFLAGS: 00000293 ORIG_RAX:
0000000000000001
[ 92.108473] RAX: ffffffffffffffda RBX: 0000000000000003 RCX:
00007ff6d3011847
[ 92.108474] RDX: 0000000000000128 RSI: 00007ffdd7b90b20 RDI:
0000000000000003
[ 92.108476] RBP: 00007ffdd7b90b20 R08: 0000000000000000 R09:
00007ff6cd7fe700
[ 92.108478] R10: 00000000ffffffff R11: 0000000000000293 R12:
0000000000000128
[ 92.108480] R13: 0000000000000011 R14: 0000000000000000 R15:
000056360dabca18
[ 92.108498] </TASK>
[ 92.108499] irq event stamp: 13915
[ 92.108501] hardirqs last enabled at (13921): [<ffffffffa976dfa7>]
__up_console_sem+0x67/0x70
[ 92.108507] hardirqs last disabled at (13926): [<ffffffffa976df8c>]
__up_console_sem+0x4c/0x70
[ 92.108510] softirqs last enabled at (13806): [<ffffffffaa40032a>]
__do_softirq+0x32a/0x48c
[ 92.108514] softirqs last disabled at (13761): [<ffffffffa96e9e83>]
irq_exit_rcu+0xe3/0x120
[ 92.108517] ---[ end trace 0000000000000000 ]---
-----------The warnings----------------------------------------------
Zhu Yanjun
>
> fpu_clone does not call rxe_add_to_pool
>
>> As such, it is better to use xa_lock_irqsave/xa_unlock_irqrestore +
>> __xa_alloc(..., GFP_ATOMIC/GFP_NOWAIT).
>
> No
>
> Jason
* Re: [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem
2022-04-12 14:28 ` Yanjun Zhu
@ 2022-04-12 14:31 ` Jason Gunthorpe
2022-04-12 14:46 ` Yanjun Zhu
0 siblings, 1 reply; 7+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 14:31 UTC (permalink / raw)
To: Yanjun Zhu; +Cc: zyjzyj2000, leon, linux-rdma, Yi Zhang
On Tue, Apr 12, 2022 at 10:28:16PM +0800, Yanjun Zhu wrote:
> On 2022/4/12 21:53, Jason Gunthorpe wrote:
> > On Tue, Apr 12, 2022 at 09:43:28PM +0800, Yanjun Zhu wrote:
> > > On 2022/4/11 19:50, Jason Gunthorpe wrote:
> > > > On Mon, Apr 11, 2022 at 04:00:18PM -0400, yanjun.zhu@linux.dev wrote:
> > > > > @@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
> > > > > elem->obj = obj;
> > > > > kref_init(&elem->ref_cnt);
> > > > > - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
> > > > > - &pool->next, GFP_KERNEL);
> > > > > + xa_lock_irqsave(&pool->xa, flags);
> > > > > + err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
> > > > > + &pool->next, GFP_ATOMIC);
> > > > > + xa_unlock_irqrestore(&pool->xa, flags);
> > > >
> > > > No to using atomics, this needs to be either the _irq or _bh variant
> > >
> > > If I understand you correctly, you mean that we should use
> > > xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh instead of
> > > xa_lock_irqsave/xa_unlock_irqrestore?
> >
> > This is correct
> >
> > > If so, when xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh is used here,
> > > the warning below appears. It means that __rxe_add_to_pool disables
> > > softirqs, but fpu_clone enables them.
> >
> > I don't know what this is, you need to show the whole debug.
>
> The following warnings appear when xa_lock_bh + __xa_alloc(..., GFP_KERNEL)
> is used. The diff is below.
>
> When xa_lock_irqsave/irqrestore + __xa_alloc(..., GFP_ATOMIC) is used,
> the warning does not appear.
That is because this was called in an atomic context:
> [ 92.107490] __rxe_add_to_pool+0x76/0xa0 [rdma_rxe]
> [ 92.107500] rxe_create_ah+0x59/0xe0 [rdma_rxe]
> [ 92.107511] _rdma_create_ah+0x148/0x180 [ib_core]
> [ 92.107546] rdma_create_ah+0xb7/0xf0 [ib_core]
> [ 92.107565] cm_alloc_msg+0x5c/0x170 [ib_cm]
> [ 92.107577] cm_alloc_priv_msg+0x1b/0x50 [ib_cm]
> [ 92.107584] ib_send_cm_req+0x213/0x3f0 [ib_cm]
> [ 92.107613] rdma_connect_locked+0x238/0x8e0 [rdma_cm]
> [ 92.107637] rdma_connect+0x2b/0x40 [rdma_cm]
> [ 92.107646] ucma_connect+0x128/0x1a0 [rdma_ucm]
> [ 92.107690] ucma_write+0xaf/0x140 [rdma_ucm]
> [ 92.107698] vfs_write+0xb8/0x370
> [ 92.107707] ksys_write+0xbb/0xd0
Meaning the GFP_KERNEL is already wrong.
The AH path needs to have its own special atomic allocation flow and
you have to use an irq lock and GFP_ATOMIC for it.
Jason
* Re: [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem
2022-04-12 14:31 ` Jason Gunthorpe
@ 2022-04-12 14:46 ` Yanjun Zhu
0 siblings, 0 replies; 7+ messages in thread
From: Yanjun Zhu @ 2022-04-12 14:46 UTC (permalink / raw)
To: Jason Gunthorpe; +Cc: zyjzyj2000, leon, linux-rdma, Yi Zhang
On 2022/4/12 22:31, Jason Gunthorpe wrote:
> On Tue, Apr 12, 2022 at 10:28:16PM +0800, Yanjun Zhu wrote:
>> On 2022/4/12 21:53, Jason Gunthorpe wrote:
>>> On Tue, Apr 12, 2022 at 09:43:28PM +0800, Yanjun Zhu wrote:
>>>> On 2022/4/11 19:50, Jason Gunthorpe wrote:
>>>>> On Mon, Apr 11, 2022 at 04:00:18PM -0400, yanjun.zhu@linux.dev wrote:
>>>>>> @@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
>>>>>> elem->obj = obj;
>>>>>> kref_init(&elem->ref_cnt);
>>>>>> - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
>>>>>> - &pool->next, GFP_KERNEL);
>>>>>> + xa_lock_irqsave(&pool->xa, flags);
>>>>>> + err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
>>>>>> + &pool->next, GFP_ATOMIC);
>>>>>> + xa_unlock_irqrestore(&pool->xa, flags);
>>>>> No to using atomics, this needs to be either the _irq or _bh variant
>>>> If I understand you correctly, you mean that we should use
>>>> xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh instead of
>>>> xa_lock_irqsave/xa_unlock_irqrestore?
>>> This is correct
>>>
>>>> If so, when xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh is used here,
>>>> the warning below appears. It means that __rxe_add_to_pool disables
>>>> softirqs, but fpu_clone enables them.
>>> I don't know what this is, you need to show the whole debug.
>> The following warnings appear when xa_lock_bh + __xa_alloc(..., GFP_KERNEL)
>> is used. The diff is below.
>>
>> When xa_lock_irqsave/irqrestore + __xa_alloc(..., GFP_ATOMIC) is used,
>> the warning does not appear.
> That is because this was called in an atomic context:
>
>> [ 92.107490] __rxe_add_to_pool+0x76/0xa0 [rdma_rxe]
>> [ 92.107500] rxe_create_ah+0x59/0xe0 [rdma_rxe]
>> [ 92.107511] _rdma_create_ah+0x148/0x180 [ib_core]
>> [ 92.107546] rdma_create_ah+0xb7/0xf0 [ib_core]
>> [ 92.107565] cm_alloc_msg+0x5c/0x170 [ib_cm]
>> [ 92.107577] cm_alloc_priv_msg+0x1b/0x50 [ib_cm]
>> [ 92.107584] ib_send_cm_req+0x213/0x3f0 [ib_cm]
>> [ 92.107613] rdma_connect_locked+0x238/0x8e0 [rdma_cm]
>> [ 92.107637] rdma_connect+0x2b/0x40 [rdma_cm]
>> [ 92.107646] ucma_connect+0x128/0x1a0 [rdma_ucm]
>> [ 92.107690] ucma_write+0xaf/0x140 [rdma_ucm]
>> [ 92.107698] vfs_write+0xb8/0x370
>> [ 92.107707] ksys_write+0xbb/0xd0
> Meaning the GFP_KERNEL is already wrong.
>
> The AH path needs to have its own special atomic allocation flow and
> you have to use an irq lock and GFP_ATOMIC for it.
static struct ib_mad_send_buf *cm_alloc_msg(struct cm_id_private
*cm_id_priv)
{
...
spin_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
...
ah = rdma_create_ah(mad_agent->qp->pd, &cm_id_priv->av.ah_attr, 0);
...
spin_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
...
}
Yes. Exactly.
In cm_alloc_msg, a spinlock is held across rdma_create_ah, so
__rxe_add_to_pool must not use GFP_KERNEL there.
Thanks a lot. I will send the updated patch soon.
Zhu Yanjun
>
> Jason