From: Jason Gunthorpe <jgg@nvidia.com>
To: Bart Van Assche <bvanassche@acm.org>
Cc: linux-rdma@vger.kernel.org
Subject: Re: v5.15-rc1 issue with the soft-iWARP driver
Date: Wed, 15 Sep 2021 11:34:04 -0300	[thread overview]
Message-ID: <20210915143404.GL4065468@nvidia.com> (raw)
In-Reply-To: <ccb9ee03-4aaa-2288-3d2f-ce01f550a609@acm.org>

On Tue, Sep 14, 2021 at 09:54:27PM -0700, Bart Van Assche wrote:
> Hi,
> 
> If I run test srp/015 from the blktests suite then the following appears
> in the kernel log:
> 
> ==================================================================
> BUG: KASAN: null-ptr-deref in __list_del_entry_valid+0x4d/0xe0
> Read of size 8 at addr 0000000000000000 by task kworker/u16:3/161
> 
> CPU: 5 PID: 161 Comm: kworker/u16:3 Not tainted 5.15.0-rc1-dbg+ #2
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a-rebuilt.opensuse.org 04/01/2014
> Workqueue: iw_cm_wq cm_work_handler [iw_cm]
> Call Trace:
>  show_stack+0x52/0x58
>  dump_stack_lvl+0x5b/0x82
>  kasan_report.cold+0x52/0x57
>  __asan_load8+0x69/0x90
>  __list_del_entry_valid+0x4d/0xe0
>  _cma_cancel_listens+0x49/0x230 [rdma_cm]
>  _destroy_id+0x4e/0x420 [rdma_cm]
>  destroy_id_handler_unlock+0xc4/0x200 [rdma_cm]
>  iw_conn_req_handler+0x335/0x370 [rdma_cm]
>  cm_conn_req_handler+0x546/0x7d0 [iw_cm]
>  cm_work_handler+0x419/0x480 [iw_cm]
>  process_one_work+0x59d/0xb00
>  worker_thread+0x8f/0x5c0
>  kthread+0x1fc/0x230
>  ret_from_fork+0x1f/0x30
> ==================================================================
> 
> I think this is a regression. It appeared with commit a17a1faf5d3e
> ("RDMA/cma: Fix listener leak in rdma_cma_listen_on_all() failure").

Gurk, this is a horrific mess: these list heads are multiplexed between
four different roles depending on context. Thanks Bart
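
For context, the overloading looks roughly like this (paraphrased from
drivers/infiniband/core/cma_priv.h around this kernel; the field names are
real but the comments are my annotation, not the verbatim definition):

	struct rdma_id_private {
		struct rdma_cm_id id;
		/* ... */

		/* Roles 1 and 2: entry on the global listen_any_list for a
		 * wildcard listener, or entry on the cma_device's per-device
		 * ID list once the ID is bound to a device, depending on
		 * state.
		 */
		struct list_head list;

		/* Roles 3 and 4: on the wildcard listener this is the head
		 * of its per-device child IDs; on each child it is the
		 * entry linked into that parent's list.
		 */
		struct list_head listen_list;

		/* ... */
	};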

So let's go closer to the original patch:

From ca465e1f1f9b38fe916a36f7d80c5d25f2337c81 Mon Sep 17 00:00:00 2001
From: Tao Liu <thomas.liu@ucloud.cn>
Date: Mon, 13 Sep 2021 17:33:44 +0800
Subject: [PATCH] RDMA/cma: Fix listener leak in rdma_cma_listen_on_all()
 failure

If cma_listen_on_all() fails, it leaves the per-device IDs still on the
listen_list, but the state is not set to RDMA_CM_ADDR_BOUND.

When the cmid is eventually destroyed, cma_cancel_listens() is not called
due to the wrong state; the per-device IDs still hold their refcounts on
the ID, preventing it from being destroyed and thus deadlocking:

 task:rping state:D stack:   0 pid:19605 ppid: 47036 flags:0x00000084
 Call Trace:
  __schedule+0x29a/0x780
  ? free_unref_page_commit+0x9b/0x110
  schedule+0x3c/0xa0
  schedule_timeout+0x215/0x2b0
  ? __flush_work+0x19e/0x1e0
  wait_for_completion+0x8d/0xf0
  _destroy_id+0x144/0x210 [rdma_cm]
  ucma_close_id+0x2b/0x40 [rdma_ucm]
  __destroy_id+0x93/0x2c0 [rdma_ucm]
  ? __xa_erase+0x4a/0xa0
  ucma_destroy_id+0x9a/0x120 [rdma_ucm]
  ucma_write+0xb8/0x130 [rdma_ucm]
  vfs_write+0xb4/0x250
  ksys_write+0xb5/0xd0
  ? syscall_trace_enter.isra.19+0x123/0x190
  do_syscall_64+0x33/0x40
  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Ensure that cma_listen_on_all() atomically unwinds its actions under the
lock on the error path.

Fixes: c80a0c52d85c ("RDMA/cma: Add missing error handling of listen_id")
Link: https://lore.kernel.org/r/20210913093344.17230-1-thomas.liu@ucloud.cn
Signed-off-by: Tao Liu <thomas.liu@ucloud.cn>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/infiniband/core/cma.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 86ee3b01b3ee47..5aa58897965df4 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -1746,15 +1746,16 @@ static void cma_cancel_route(struct rdma_id_private *id_priv)
 	}
 }
 
-static void cma_cancel_listens(struct rdma_id_private *id_priv)
+static void _cma_cancel_listens(struct rdma_id_private *id_priv)
 {
 	struct rdma_id_private *dev_id_priv;
 
+	lockdep_assert_held(&lock);
+
 	/*
 	 * Remove from listen_any_list to prevent added devices from spawning
 	 * additional listen requests.
 	 */
-	mutex_lock(&lock);
 	list_del(&id_priv->list);
 
 	while (!list_empty(&id_priv->listen_list)) {
@@ -1768,6 +1769,12 @@ static void cma_cancel_listens(struct rdma_id_private *id_priv)
 		rdma_destroy_id(&dev_id_priv->id);
 		mutex_lock(&lock);
 	}
+}
+
+static void cma_cancel_listens(struct rdma_id_private *id_priv)
+{
+	mutex_lock(&lock);
+	_cma_cancel_listens(id_priv);
 	mutex_unlock(&lock);
 }
 
@@ -2579,7 +2586,7 @@ static int cma_listen_on_all(struct rdma_id_private *id_priv)
 	return 0;
 
 err_listen:
-	list_del(&id_priv->list);
+	_cma_cancel_listens(id_priv);
 	mutex_unlock(&lock);
 	if (to_destroy)
 		rdma_destroy_id(&to_destroy->id);
-- 
2.33.0
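
With this applied, the error path in cma_listen_on_all() ends up looking
roughly like the sketch below. This is reconstructed from the hunk above
plus the surrounding code from memory, so the exact loop body may differ
from the tree:

	static int cma_listen_on_all(struct rdma_id_private *id_priv)
	{
		struct rdma_id_private *to_destroy;
		struct cma_device *cma_dev;
		int ret;

		mutex_lock(&lock);
		list_add_tail(&id_priv->list, &listen_any_list);
		list_for_each_entry(cma_dev, &dev_list, list) {
			ret = cma_listen_on_dev(id_priv, cma_dev, &to_destroy);
			if (ret)
				goto err_listen;
		}
		mutex_unlock(&lock);
		return 0;

	err_listen:
		/* Unwind everything added so far while still holding 'lock',
		 * instead of only unlinking id_priv->list as before.
		 */
		_cma_cancel_listens(id_priv);
		mutex_unlock(&lock);
		if (to_destroy)
			rdma_destroy_id(&to_destroy->id);
		return ret;
	}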


