linux-nvme.lists.infradead.org archive mirror
* [PATCH] RDMA/cma: prevent rdma id destroy during cma_iw_handler
@ 2023-06-03  0:46 Shin'ichiro Kawasaki
  2023-06-11 13:37 ` Leon Romanovsky
From: Shin'ichiro Kawasaki @ 2023-06-03  0:46 UTC (permalink / raw)
  To: linux-rdma, Jason Gunthorpe, Leon Romanovsky
  Cc: linux-nvme, Damien Le Moal, Shin'ichiro Kawasaki

When rdma_destroy_id() and cma_iw_handler() race, the struct rdma_id_private
*id_priv can be destroyed while cma_iw_handler() is still running. This causes
"BUG: KASAN: slab-use-after-free" at mutex_lock() in cma_iw_handler().
To prevent the destruction of id_priv, hold a reference to it by calling
cma_id_get() at the start of cma_iw_handler() and cma_id_put() at the end.

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Cc: stable@vger.kernel.org
---
The KASAN BUG was observed with blktests test cases nvme/030 and nvme/031
using the SIW transport [1]. To reproduce it on my test system, the test cases
need to be repeated 30 to 50 times.

[1] https://lore.kernel.org/linux-block/rsmmxrchy6voi5qhl4irss5sprna3f5owkqtvybxglcv2pnylm@xmrnpfu3tfpe/
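
For illustration only (not part of the patch), below is a minimal userspace-style
C sketch of the reference-counting pattern the fix relies on. The names obj,
obj_get() and obj_put() are hypothetical stand-ins for struct rdma_id_private,
cma_id_get() and cma_id_put(): because the handler pins the object with its own
reference, a concurrent destroyer can at most cause the free to happen at the
handler's final put, so the object cannot be freed under the handler.

  #include <stdatomic.h>
  #include <stdlib.h>

  struct obj {
          atomic_int refcount;            /* freed when this drops to zero */
          /* ... payload used by the handler ... */
  };

  static void obj_get(struct obj *o)
  {
          atomic_fetch_add(&o->refcount, 1);
  }

  static void obj_put(struct obj *o)
  {
          if (atomic_fetch_sub(&o->refcount, 1) == 1)
                  free(o);                /* last reference dropped */
  }

  static void handler(struct obj *o)
  {
          obj_get(o);     /* pin the object for the duration of the handler */
          /* ... lock o, inspect state, run callbacks ... */
          obj_put(o);     /* may free o if a concurrent destroy already ran */
  }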

 drivers/infiniband/core/cma.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 93a1c48d0c32..c5267d9bb184 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -2477,6 +2477,7 @@ static int cma_iw_handler(struct iw_cm_id *iw_id, struct iw_cm_event *iw_event)
 	struct sockaddr *laddr = (struct sockaddr *)&iw_event->local_addr;
 	struct sockaddr *raddr = (struct sockaddr *)&iw_event->remote_addr;
 
+	cma_id_get(id_priv);
 	mutex_lock(&id_priv->handler_mutex);
 	if (READ_ONCE(id_priv->state) != RDMA_CM_CONNECT)
 		goto out;
@@ -2524,12 +2525,14 @@ static int cma_iw_handler(struct iw_cm_id *iw_id, struct iw_cm_event *iw_event)
 	if (ret) {
 		/* Destroy the CM ID by returning a non-zero value. */
 		id_priv->cm_id.iw = NULL;
+		cma_id_put(id_priv);
 		destroy_id_handler_unlock(id_priv);
 		return ret;
 	}
 
 out:
 	mutex_unlock(&id_priv->handler_mutex);
+	cma_id_put(id_priv);
 	return ret;
 }
 
-- 
2.40.1




* Re: [PATCH] RDMA/cma: prevent rdma id destroy during cma_iw_handler
  2023-06-03  0:46 [PATCH] RDMA/cma: prevent rdma id destroy during cma_iw_handler Shin'ichiro Kawasaki
@ 2023-06-11 13:37 ` Leon Romanovsky
  2023-06-12  3:04   ` Shinichiro Kawasaki
From: Leon Romanovsky @ 2023-06-11 13:37 UTC (permalink / raw)
  To: Shin'ichiro Kawasaki
  Cc: linux-rdma, Jason Gunthorpe, linux-nvme, Damien Le Moal

On Sat, Jun 03, 2023 at 09:46:20AM +0900, Shin'ichiro Kawasaki wrote:
> When rdma_destroy_id() and cma_iw_handler() race, the struct rdma_id_private
> *id_priv can be destroyed while cma_iw_handler() is still running. This causes
> "BUG: KASAN: slab-use-after-free" at mutex_lock() in cma_iw_handler().
> To prevent the destruction of id_priv, hold a reference to it by calling
> cma_id_get() at the start of cma_iw_handler() and cma_id_put() at the end.

Please add the relevant kernel panic to the commit message.

> 
> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> Cc: stable@vger.kernel.org

Add a Fixes line when you are fixing a bug.

> ---
> The KASAN BUG was observed with blktests test cases nvme/030 and nvme/031
> using the SIW transport [1]. To reproduce it on my test system, the test cases
> need to be repeated 30 to 50 times.
> 
> [1] https://lore.kernel.org/linux-block/rsmmxrchy6voi5qhl4irss5sprna3f5owkqtvybxglcv2pnylm@xmrnpfu3tfpe/
> 
>  drivers/infiniband/core/cma.c | 3 +++
>  1 file changed, 3 insertions(+)

The fix looks correct to me.

Thanks



* Re: [PATCH] RDMA/cma: prevent rdma id destroy during cma_iw_handler
  2023-06-11 13:37 ` Leon Romanovsky
@ 2023-06-12  3:04   ` Shinichiro Kawasaki
From: Shinichiro Kawasaki @ 2023-06-12  3:04 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma, Jason Gunthorpe, linux-nvme, Damien Le Moal

Thanks for the comments.

On Jun 11, 2023 / 16:37, Leon Romanovsky wrote:
> On Sat, Jun 03, 2023 at 09:46:20AM +0900, Shin'ichiro Kawasaki wrote:
> > When rdma_destroy_id() and cma_iw_handler() race, the struct rdma_id_private
> > *id_priv can be destroyed while cma_iw_handler() is still running. This causes
> > "BUG: KASAN: slab-use-after-free" at mutex_lock() in cma_iw_handler().
> > To prevent the destruction of id_priv, hold a reference to it by calling
> > cma_id_get() at the start of cma_iw_handler() and cma_id_put() at the end.
> 
> Please add the relevant kernel panic to the commit message.

Sure, will do in v2.

> 
> > 
> > Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> > Cc: stable@vger.kernel.org
> 
> Add a Fixes line when you are fixing a bug.

I see. I checked the commit logs of drivers/infiniband/core/cma.c. It looks like
the issue has existed since commit de910bd92137 ("RDMA/cma: Simplify locking
needed for serialization of callbacks") in 2008, which changed how id_priv is
guarded. I'll add a Fixes tag referencing this commit.
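
For reference, the tag would look like this (using the commit subject quoted
above):

  Fixes: de910bd92137 ("RDMA/cma: Simplify locking needed for serialization of callbacks")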

> 
> > ---
> > The KASAN BUG was observed with blktests test cases nvme/030 and nvme/031
> > using the SIW transport [1]. To reproduce it on my test system, the test cases
> > need to be repeated 30 to 50 times.
> > 
> > [1] https://lore.kernel.org/linux-block/rsmmxrchy6voi5qhl4irss5sprna3f5owkqtvybxglcv2pnylm@xmrnpfu3tfpe/
> > 
> >  drivers/infiniband/core/cma.c | 3 +++
> >  1 file changed, 3 insertions(+)
> 
> The fix looks correct to me.
> 
> Thanks

