* RDMA/addr: NULL dereference in process_one_req
@ 2020-09-22 15:13 Dan Aloni
From: Dan Aloni @ 2020-09-22 15:13 UTC
  To: Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma

The Oops below [1] is quite rare, and occurs after a while when kernel
code repeatedly tries to resolve addresses. According to my analysis the
work item is executed twice, and the second time a NULL value of
`req->callback` triggers the Oops.
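
For context, here is roughly the shape of process_one_req() as I read it
(paraphrased and trimmed from drivers/infiniband/core/addr.c; exact
arguments and field names vary between kernel versions):

    static void process_one_req(struct work_struct *_work)
    {
        struct addr_req *req = container_of(_work, struct addr_req,
                                            work.work);

        if (req->status == -ENODATA) {
            req->status = addr_resolve(/* src, dst, ... */);
            if (req->status && time_after_eq(jiffies, req->timeout)) {
                req->status = -ETIMEDOUT;
            } else if (req->status == -ENODATA) {
                /* requeue the work for retrying again */
                spin_lock_bh(&lock);
                if (!list_empty(&req->list))
                    set_timeout(req, req->timeout);
                spin_unlock_bh(&lock);
                return;
            }
        }

        /* on a second execution req->callback is already NULL -> Oops */
        req->callback(req->status, (struct sockaddr *)&req->src_addr,
                      req->addr, req->context);
        req->callback = NULL;

        spin_lock_bh(&lock);
        if (!list_empty(&req->list)) {
            /* normally prevents a later requeue from running again */
            cancel_delayed_work(&req->work);
            list_del_init(&req->list);
            kfree(req);
        }
        spin_unlock_bh(&lock);
    }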

After many run iterations, I did manage to reproduce this issue once
with an isolated kernel-code sample that I posted at this address:

    https://github.com/kernelim/ibaddr-null-deref-repro

The sample code works similarly to the client code in the rpcrdma kernel
module.

Is it possible that once a work item is executing, the netevent-based
side call that requeues it via `set_timeout` puts it on another CPU
while it is still running?  Otherwise it is hard to explain what I'm
seeing. My sample code also attempts to inject a NETEVENT_NEIGH_UPDATE
notifier event to trigger this, but that did not increase the frequency
of reproduction.
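
For reference, the requeue path I'm referring to looks roughly like this
(paraphrased from addr.c; details may differ slightly between versions):

    /* A neighbour update reschedules every request still on req_list. */
    static int netevent_callback(struct notifier_block *self,
                                 unsigned long event, void *ctx)
    {
        struct addr_req *req;

        if (event == NETEVENT_NEIGH_UPDATE) {
            struct neighbour *neigh = ctx;

            if (neigh->nud_state & NUD_VALID) {
                spin_lock_bh(&lock);
                list_for_each_entry(req, &req_list, list)
                    set_timeout(req, jiffies);
                spin_unlock_bh(&lock);
            }
        }
        return 0;
    }

    static void set_timeout(struct addr_req *req, unsigned long time)
    {
        unsigned long delay = time - jiffies;

        if ((long)delay < 0)
            delay = 0;

        /* mod_delayed_work() re-arms the work even if the handler is
         * currently executing, so the item can run a second time.
         */
        mod_delayed_work(addr_wq, &req->work, delay);
    }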

I'm experimenting with a fix [2], but I'm not yet sure it solves the
issue. I'm hoping for more suggestions and insight.

Thanks

[1]

[165371.631784] Workqueue: ib_addr process_one_req [ib_core]
[165371.637268] RIP: 0010:0x0
[165371.640066] Code: Bad RIP value.
[165371.643468] RSP: 0018:ffffb484cfd87e60 EFLAGS: 00010297
[165371.648870] RAX: 0000000000000000 RBX: ffff94ef2e027130 RCX: ffff94eee8271800
[165371.656196] RDX: ffff94eee8271920 RSI: ffff94ef2e027010 RDI: 00000000ffffff92
[165371.663518] RBP: ffffb484cfd87e80 R08: 00726464615f6269 R09: 8080808080808080
[165371.670839] R10: ffffb484cfd87c68 R11: fefefefefefefeff R12: ffff94ef2e027000
[165371.678162] R13: ffff94ef2e027010 R14: ffff94ef2e027130 R15: 0ffff951f2c624a0
[165371.685485] FS:  0000000000000000(0000) GS:ffff94ef40e80000(0000) knlGS:0000000000000000
[165371.693762] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[165371.699681] CR2: ffffffffffffffd6 CR3: 0000005eca20a002 CR4: 00000000007606e0
[165371.707001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[165371.714325] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[165371.721647] PKRU: 55555554
[165371.724526] Call Trace:
[165371.727170]  process_one_req+0x39/0x150 [ib_core]
[165371.732051]  process_one_work+0x20f/0x400
[165371.736242]  worker_thread+0x34/0x410
[165371.740082]  kthread+0x121/0x140
[165371.743484]  ? process_one_work+0x400/0x400
[165371.747844]  ? kthread_park+0x90/0x90
[165371.751681]  ret_from_fork+0x1f/0x40


[2]

diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
index 3a98439bba83..6d7c325cb8e6 100644
--- a/drivers/infiniband/core/addr.c
+++ b/drivers/infiniband/core/addr.c
@@ -636,7 +636,8 @@ static void process_one_req(struct work_struct *_work)
                        /* requeue the work for retrying again */
                        spin_lock_bh(&lock);
                        if (!list_empty(&req->list))
-                               set_timeout(req, req->timeout);
+                               if (delayed_work_pending(&req->work))
+                                       set_timeout(req, req->timeout);
                        spin_unlock_bh(&lock);
                        return;
                }

Signed-off-by: Dan Aloni <dan@kernelim.com>

-- 
Dan Aloni


* Re: RDMA/addr: NULL dereference in process_one_req
@ 2020-09-22 17:09 ` Jason Gunthorpe
From: Jason Gunthorpe @ 2020-09-22 17:09 UTC
  To: Dan Aloni; +Cc: Leon Romanovsky, linux-rdma

On Tue, Sep 22, 2020 at 06:13:48PM +0300, Dan Aloni wrote:
> The Oops below [1] is quite rare, and occurs after a while when kernel
> code repeatedly tries to resolve addresses. According to my analysis the
> work item is executed twice, and the second time a NULL value of
> `req->callback` triggers the Oops.

Hum, I think the race is rdma_addr_cancel(), process_one_req() and
netevent_callback() running concurrently.

It is very narrow, but it looks like netevent_callback() could cause
the work to be requeued at a point where rdma_addr_cancel() has already
done the list_del_init(), which causes the cancel_delayed_work() to be
skipped, and the work re-runs before rdma_addr_cancel() reaches its
cancel_delayed_work_sync().
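
For reference, rdma_addr_cancel() does roughly the following (paraphrased
from addr.c); once the req is off req_list, nothing in process_one_req()
stops a requeued work from running before the sync cancel:

    void rdma_addr_cancel(struct rdma_dev_addr *addr)
    {
        struct addr_req *req, *temp_req;
        struct addr_req *found = NULL;

        spin_lock_bh(&lock);
        list_for_each_entry_safe(req, temp_req, &req_list, list) {
            if (req->addr == addr) {
                /* removing from the list takes ownership of req */
                list_del_init(&req->list);
                found = req;
                break;
            }
        }
        spin_unlock_bh(&lock);

        if (!found)
            return;

        /* meant to act as a fence: no work running and none started */
        cancel_delayed_work_sync(&found->work);
        kfree(found);
    }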

Please try this:

From fac94acc7a6fb4d78ddd06c51674110937442d15 Mon Sep 17 00:00:00 2001
From: Jason Gunthorpe <jgg@nvidia.com>
Date: Tue, 22 Sep 2020 13:54:17 -0300
Subject: [PATCH] RDMA/addr: Fix race with
 netevent_callback()/rdma_addr_cancel()

This three-thread race can result in the work being run once the callback
becomes NULL:

       CPU1                  CPU2                       CPU3
 netevent_callback()
                      process_one_req()           rdma_addr_cancel()
                       [..]
   spin_lock_bh()
     set_timeout()
   spin_unlock_bh()

                                                   spin_lock_bh()
                                                   list_del_init(&req->list);
                                                   spin_unlock_bh()

                      req->callback = NULL
                      spin_lock_bh()
                        if (!list_empty(&req->list))
                          // Skipped!
                          // cancel_delayed_work(&req->work);
                      spin_unlock_bh()

                      process_one_req() // again
                        req->callback() // BOOM
                                                   cancel_delayed_work_sync()

The solution is to always cancel the work once it is completed, so any
in-between set_timeout() does not result in it running again.

Fixes: 44e75052bc2a ("RDMA/rdma_cm: Make rdma_addr_cancel into a fence")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/infiniband/core/addr.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
index 3a98439bba832b..0abce004a9591e 100644
--- a/drivers/infiniband/core/addr.c
+++ b/drivers/infiniband/core/addr.c
@@ -647,13 +647,12 @@ static void process_one_req(struct work_struct *_work)
 	req->callback = NULL;
 
 	spin_lock_bh(&lock);
+	/*
+	 * Although the work will normally have been canceled by the workqueue,
+	 * it can still be requeued as long as it is on the req_list.
+	 */
+	cancel_delayed_work(&req->work);
 	if (!list_empty(&req->list)) {
-		/*
-		 * Although the work will normally have been canceled by the
-		 * workqueue, it can still be requeued as long as it is on the
-		 * req_list.
-		 */
-		cancel_delayed_work(&req->work);
 		list_del_init(&req->list);
 		kfree(req);
 	}
-- 
2.28.0



* Re: RDMA/addr: NULL dereference in process_one_req
@ 2020-09-23  4:45   ` Dan Aloni
From: Dan Aloni @ 2020-09-23  4:45 UTC
  To: Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma

On Tue, Sep 22, 2020 at 02:09:51PM -0300, Jason Gunthorpe wrote:
> On Tue, Sep 22, 2020 at 06:13:48PM +0300, Dan Aloni wrote:
> > The Oops below [1] is quite rare, and occurs after a while when kernel
> > code repeatedly tries to resolve addresses. According to my analysis the
> > work item is executed twice, and the second time a NULL value of
> > `req->callback` triggers the Oops.
> 
> Hum, I think the race is rdma_addr_cancel(), process_one_req() and
> netevent_callback() running concurrently.
> 
> It is very narrow, but it looks like netevent_callback() could cause
> the work to be requeued at a point where rdma_addr_cancel() has already
> done the list_del_init(), which causes the cancel_delayed_work() to be
> skipped, and the work re-runs before rdma_addr_cancel() reaches its
> cancel_delayed_work_sync().

Thanks for the quick response! This 3-CPU race has really been a head
scratcher.

> Please try this:
> 
> From fac94acc7a6fb4d78ddd06c51674110937442d15 Mon Sep 17 00:00:00 2001
> From: Jason Gunthorpe <jgg@nvidia.com>
> Date: Tue, 22 Sep 2020 13:54:17 -0300
> Subject: [PATCH] RDMA/addr: Fix race with
>  netevent_callback()/rdma_addr_cancel()

Looks good - I've run this for 11 hours now and it's stable. I think it
solved the problem.

-- 
Dan Aloni

