linux-rdma.vger.kernel.org archive mirror
* [PATCH for-next] RDMA/rxe: Convert spinlock to memory barrier
@ 2022-10-06 16:39 Bob Pearson
  2022-10-28 17:28 ` Jason Gunthorpe
  0 siblings, 1 reply; 2+ messages in thread
From: Bob Pearson @ 2022-10-06 16:39 UTC
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Currently the rxe driver takes a spinlock to safely pass a
control variable from a verbs API to a tasklet. A release/acquire
memory barrier pair can accomplish the same thing with less effort.
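
As a generic illustration of the release/acquire pattern (a sketch,
not the rxe code itself):

	/* writer */
	data = 42;
	smp_store_release(&flag, true);

	/* reader: an acquire that observes the flag also observes
	 * every store made before the paired release
	 */
	if (smp_load_acquire(&flag))
		r = data;	/* guaranteed to see 42 */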

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_cq.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index b1a0ab3cd4bd..76534bc66cb6 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -42,14 +42,10 @@ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
 static void rxe_send_complete(struct tasklet_struct *t)
 {
 	struct rxe_cq *cq = from_tasklet(cq, t, comp_task);
-	unsigned long flags;
 
-	spin_lock_irqsave(&cq->cq_lock, flags);
-	if (cq->is_dying) {
-		spin_unlock_irqrestore(&cq->cq_lock, flags);
+	/* pairs with rxe_cq_disable */
+	if (smp_load_acquire(&cq->is_dying))
 		return;
-	}
-	spin_unlock_irqrestore(&cq->cq_lock, flags);
 
 	cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
 }
@@ -143,11 +139,8 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 
 void rxe_cq_disable(struct rxe_cq *cq)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&cq->cq_lock, flags);
-	cq->is_dying = true;
-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+	/* pairs with rxe_send_complete */
+	smp_store_release(&cq->is_dying, true);
 }
 
 void rxe_cq_cleanup(struct rxe_pool_elem *elem)

base-commit: cbdae01d8b517b81ed271981395fee8ebd08ba7d
-- 
2.34.1


* Re: [PATCH for-next] RDMA/rxe: Convert spinlock to memory barrier
  2022-10-06 16:39 [PATCH for-next] RDMA/rxe: Convert spinlock to memory barrier Bob Pearson
@ 2022-10-28 17:28 ` Jason Gunthorpe
  0 siblings, 0 replies; 2+ messages in thread
From: Jason Gunthorpe @ 2022-10-28 17:28 UTC
  To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma

On Thu, Oct 06, 2022 at 11:39:00AM -0500, Bob Pearson wrote:
> Currently the rxe driver takes a spinlock to safely pass a
> control variable from a verbs API to a tasklet. A release/acquire
> memory barrier pair can accomplish the same thing with less effort.

The only reason it seems like less effort is that the existing code
is already completely wrong. Every time I see one of these 'is dying'
things it is just a racy mess.

The code that sets it to true is rushing toward freeing the memory.

Meaning if you observe it to be true, you are almost certainly about
to UAF it.
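
Concretely, nothing stops an interleaving like this (illustrative,
not literal rxe code):

	CPU0 (destroy path)                CPU1 (tasklet)
	                                   smp_load_acquire(&cq->is_dying) /* false */
	smp_store_release(&cq->is_dying, true)
	...frees the cq...
	                                   cq->ibcq.comp_handler(...)      /* UAF */

And if CPU1's load runs after the free, the load itself already
touches freed memory.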

You can see how silly it is because the tasklet itself, while sitting
on the scheduling queue, is holding a reference to the struct rxe_cq -
so is_dying is totally pointless.

The proper way to use something like 'is_dying' is as part of the
tasklet shutdown sequence.

First you prevent new calls to tasklet_schedule(), then you flush and
kill the tasklet. is_dying is an appropriate way to prevent new calls,
when wrapped around tasklet_schedule() under a lock.
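
A rough sketch of that shape (hypothetical helper names, not the
existing rxe functions):

	static void rxe_cq_schedule(struct rxe_cq *cq)
	{
		unsigned long flags;

		spin_lock_irqsave(&cq->cq_lock, flags);
		if (!cq->is_dying)	/* refuse new work once shutdown begins */
			tasklet_schedule(&cq->comp_task);
		spin_unlock_irqrestore(&cq->cq_lock, flags);
	}

	static void rxe_cq_kill(struct rxe_cq *cq)
	{
		unsigned long flags;

		/* step 1: prevent any further tasklet_schedule() calls */
		spin_lock_irqsave(&cq->cq_lock, flags);
		cq->is_dying = true;
		spin_unlock_irqrestore(&cq->cq_lock, flags);

		/* step 2: wait for a scheduled or running instance to
		 * finish; afterwards the tasklet can never run again
		 * and the cq is safe to free
		 */
		tasklet_kill(&cq->comp_task);
	}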

Please send a patch that properly manages cleaning up the tasklets
for the cq, not this.

Jason