* [PATCH] RDMA/cm: Make the local_id_table xarray non-irq
@ 2020-11-04 21:40 Jason Gunthorpe
  2020-11-05  8:52 ` Leon Romanovsky
  2020-11-12 16:32 ` Jason Gunthorpe
  0 siblings, 2 replies; 5+ messages in thread
From: Jason Gunthorpe @ 2020-11-04 21:40 UTC (permalink / raw)
  To: linux-rdma; +Cc: Leon Romanovsky, Matthew Wilcox

The xarray is never mutated from an IRQ handler, only from work queues
under a spinlock_irq. Thus there is no reason for it to be an IRQ type
xarray.

This was copied over from the original IDR code, but the recent rework put
the xarray inside another spinlock_irq which will unbalance the unlocking.
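
For illustration only (a sketch, not part of the patch; it assumes
cm_req_handler() now reaches cm_finalize_id() while already holding
cm_id_priv->lock via spin_lock_irq(), which is what the rework did):

	spin_lock_irq(&cm_id_priv->lock);	/* IRQs now disabled */
	...
	/* old cm_finalize_id() body: */
	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
		     cm_id_priv, GFP_KERNEL);
	/*
	 * xa_store_irq() internally does xa_lock_irq()/xa_unlock_irq(),
	 * i.e. spin_lock_irq()/spin_unlock_irq() on the xarray's own lock.
	 * The inner spin_unlock_irq() re-enables interrupts while
	 * cm_id_priv->lock is still held.
	 */
	...
	spin_unlock_irq(&cm_id_priv->lock);	/* IRQs were already back on */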

Fixes: c206f8bad15d ("RDMA/cm: Make it clearer how concurrency works in cm_req_handler()")
Reported-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/infiniband/core/cm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 0201364974594f..167e436ae11ded 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -859,8 +859,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
 	atomic_set(&cm_id_priv->work_count, -1);
 	refcount_set(&cm_id_priv->refcount, 1);
 
-	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
-				  &cm.local_id_next, GFP_KERNEL);
+	ret = xa_alloc_cyclic(&cm.local_id_table, &id, NULL, xa_limit_32b,
+			      &cm.local_id_next, GFP_KERNEL);
 	if (ret < 0)
 		goto error;
 	cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand;
@@ -878,8 +878,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
  */
 static void cm_finalize_id(struct cm_id_private *cm_id_priv)
 {
-	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
-		     cm_id_priv, GFP_KERNEL);
+	xa_store(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
+		 cm_id_priv, GFP_ATOMIC);
 }
 
 struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
@@ -1169,7 +1169,7 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
 	spin_unlock(&cm.lock);
 	spin_unlock_irq(&cm_id_priv->lock);
 
-	xa_erase_irq(&cm.local_id_table, cm_local_id(cm_id->local_id));
+	xa_erase(&cm.local_id_table, cm_local_id(cm_id->local_id));
 	cm_deref_id(cm_id_priv);
 	wait_for_completion(&cm_id_priv->comp);
 	while ((work = cm_dequeue_work(cm_id_priv)) != NULL)
@@ -4482,7 +4482,7 @@ static int __init ib_cm_init(void)
 	cm.remote_id_table = RB_ROOT;
 	cm.remote_qp_table = RB_ROOT;
 	cm.remote_sidr_table = RB_ROOT;
-	xa_init_flags(&cm.local_id_table, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
+	xa_init_flags(&cm.local_id_table, XA_FLAGS_ALLOC);
 	get_random_bytes(&cm.random_id_operand, sizeof cm.random_id_operand);
 	INIT_LIST_HEAD(&cm.timewait_list);
 
-- 
2.28.0



* Re: [PATCH] RDMA/cm: Make the local_id_table xarray non-irq
  2020-11-04 21:40 [PATCH] RDMA/cm: Make the local_id_table xarray non-irq Jason Gunthorpe
@ 2020-11-05  8:52 ` Leon Romanovsky
  2020-11-05 15:15   ` Jason Gunthorpe
  2020-11-12 16:32 ` Jason Gunthorpe
  1 sibling, 1 reply; 5+ messages in thread
From: Leon Romanovsky @ 2020-11-05  8:52 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: linux-rdma, Matthew Wilcox

On Wed, Nov 04, 2020 at 05:40:59PM -0400, Jason Gunthorpe wrote:
> The xarray is never mutated from an IRQ handler, only from work queues
> under a spinlock_irq. Thus there is no reason for it to be an IRQ type
> xarray.
>
> This was copied over from the original IDR code, but the recent rework put
> the xarray inside another spinlock_irq which will unbalance the unlocking.
>
> Fixes: c206f8bad15d ("RDMA/cm: Make it clearer how concurrency works in cm_req_handler()")
> Reported-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/infiniband/core/cm.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
> index 0201364974594f..167e436ae11ded 100644
> --- a/drivers/infiniband/core/cm.c
> +++ b/drivers/infiniband/core/cm.c
> @@ -859,8 +859,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
>  	atomic_set(&cm_id_priv->work_count, -1);
>  	refcount_set(&cm_id_priv->refcount, 1);
>
> -	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
> -				  &cm.local_id_next, GFP_KERNEL);
> +	ret = xa_alloc_cyclic(&cm.local_id_table, &id, NULL, xa_limit_32b,
> +			      &cm.local_id_next, GFP_KERNEL);
>  	if (ret < 0)
>  		goto error;
>  	cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand;
> @@ -878,8 +878,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
>   */
>  static void cm_finalize_id(struct cm_id_private *cm_id_priv)
>  {
> -	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> -		     cm_id_priv, GFP_KERNEL);
> +	xa_store(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> +		 cm_id_priv, GFP_ATOMIC);
>  }

I see that in the ib_create_cm_id() function, we call cm_finalize_id();
won't it be a problem to do it without the irq lock?

Thanks


* Re: [PATCH] RDMA/cm: Make the local_id_table xarray non-irq
  2020-11-05  8:52 ` Leon Romanovsky
@ 2020-11-05 15:15   ` Jason Gunthorpe
  2020-11-10  9:07     ` Leon Romanovsky
  0 siblings, 1 reply; 5+ messages in thread
From: Jason Gunthorpe @ 2020-11-05 15:15 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma, Matthew Wilcox

On Thu, Nov 05, 2020 at 10:52:31AM +0200, Leon Romanovsky wrote:
> On Wed, Nov 04, 2020 at 05:40:59PM -0400, Jason Gunthorpe wrote:
> > The xarray is never mutated from an IRQ handler, only from work queues
> > under a spinlock_irq. Thus there is no reason for it to be an IRQ type
> > xarray.
> >
> > This was copied over from the original IDR code, but the recent rework put
> > the xarray inside another spinlock_irq which will unbalance the unlocking.
> >
> > Fixes: c206f8bad15d ("RDMA/cm: Make it clearer how concurrency works in cm_req_handler()")
> > Reported-by: Matthew Wilcox <willy@infradead.org>
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> >  drivers/infiniband/core/cm.c | 12 ++++++------
> >  1 file changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
> > index 0201364974594f..167e436ae11ded 100644
> > +++ b/drivers/infiniband/core/cm.c
> > @@ -859,8 +859,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
> >  	atomic_set(&cm_id_priv->work_count, -1);
> >  	refcount_set(&cm_id_priv->refcount, 1);
> >
> > -	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
> > -				  &cm.local_id_next, GFP_KERNEL);
> > +	ret = xa_alloc_cyclic(&cm.local_id_table, &id, NULL, xa_limit_32b,
> > +			      &cm.local_id_next, GFP_KERNEL);
> >  	if (ret < 0)
> >  		goto error;
> >  	cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand;
> > @@ -878,8 +878,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
> >   */
> >  static void cm_finalize_id(struct cm_id_private *cm_id_priv)
> >  {
> > -	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> > -		     cm_id_priv, GFP_KERNEL);
> > +	xa_store(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> > +		 cm_id_priv, GFP_ATOMIC);
> >  }
> 
> I see that in the ib_create_cm_id() function, we call cm_finalize_id();
> won't it be a problem to do it without the irq lock?

The _irq or _bh notations are only needed if some place acquires the
internal spinlock from a bh (timer, tasklet, etc) or irq.

Since all the places working with local_id_table are obviously in
contexts that can do GFP_KERNEL allocations, I conclude a normal
spinlock is fine.
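
As a generic illustration (sketch only; 'table', 'struct foo' and
add_entry() are made-up names, not the CM code), the distinction looks
like this:

	/*
	 * All writers run in process context (e.g. a workqueue): the plain
	 * helpers and the xarray's internal xa_lock are enough, and
	 * GFP_KERNEL is allowed.
	 */
	static DEFINE_XARRAY_ALLOC(table);

	static int add_entry(struct foo *foo)
	{
		u32 id;

		return xa_alloc(&table, &id, foo, xa_limit_32b, GFP_KERNEL);
	}

	/*
	 * Only if some path touched 'table' from an IRQ handler or bh would
	 * every user have to switch to the _irq/_bh variants (and atomic
	 * allocations where sleeping isn't possible), because the handler
	 * could otherwise deadlock against a holder of xa_lock on the
	 * same CPU.
	 */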

Jason


* Re: [PATCH] RDMA/cm: Make the local_id_table xarray non-irq
  2020-11-05 15:15   ` Jason Gunthorpe
@ 2020-11-10  9:07     ` Leon Romanovsky
  0 siblings, 0 replies; 5+ messages in thread
From: Leon Romanovsky @ 2020-11-10  9:07 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: linux-rdma, Matthew Wilcox

On Thu, Nov 05, 2020 at 11:15:22AM -0400, Jason Gunthorpe wrote:
> On Thu, Nov 05, 2020 at 10:52:31AM +0200, Leon Romanovsky wrote:
> > On Wed, Nov 04, 2020 at 05:40:59PM -0400, Jason Gunthorpe wrote:
> > > The xarray is never mutated from an IRQ handler, only from work queues
> > > under a spinlock_irq. Thus there is no reason for it to be an IRQ type
> > > xarray.
> > >
> > > This was copied over from the original IDR code, but the recent rework put
> > > the xarray inside another spinlock_irq which will unbalance the unlocking.
> > >
> > > Fixes: c206f8bad15d ("RDMA/cm: Make it clearer how concurrency works in cm_req_handler()")
> > > Reported-by: Matthew Wilcox <willy@infradead.org>
> > > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> > >  drivers/infiniband/core/cm.c | 12 ++++++------
> > >  1 file changed, 6 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
> > > index 0201364974594f..167e436ae11ded 100644
> > > +++ b/drivers/infiniband/core/cm.c
> > > @@ -859,8 +859,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
> > >  	atomic_set(&cm_id_priv->work_count, -1);
> > >  	refcount_set(&cm_id_priv->refcount, 1);
> > >
> > > -	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
> > > -				  &cm.local_id_next, GFP_KERNEL);
> > > +	ret = xa_alloc_cyclic(&cm.local_id_table, &id, NULL, xa_limit_32b,
> > > +			      &cm.local_id_next, GFP_KERNEL);
> > >  	if (ret < 0)
> > >  		goto error;
> > >  	cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand;
> > > @@ -878,8 +878,8 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
> > >   */
> > >  static void cm_finalize_id(struct cm_id_private *cm_id_priv)
> > >  {
> > > -	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> > > -		     cm_id_priv, GFP_KERNEL);
> > > +	xa_store(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> > > +		 cm_id_priv, GFP_ATOMIC);
> > >  }
> >
> > I see that in the ib_create_cm_id() function, we call cm_finalize_id();
> > won't it be a problem to do it without the irq lock?
>
> The _irq or _bh notations are only needed if some place acquires the
> internal spinlock from a bh (timer, tasklet, etc) or irq.
>
> Since all the places working with local_id_table are obviously in
> contexts that can do GFP_KERNEL allocations, I conclude a normal
> spinlock is fine.

I see, Thanks

>
> Jason


* Re: [PATCH] RDMA/cm: Make the local_id_table xarray non-irq
  2020-11-04 21:40 [PATCH] RDMA/cm: Make the local_id_table xarray non-irq Jason Gunthorpe
  2020-11-05  8:52 ` Leon Romanovsky
@ 2020-11-12 16:32 ` Jason Gunthorpe
  1 sibling, 0 replies; 5+ messages in thread
From: Jason Gunthorpe @ 2020-11-12 16:32 UTC (permalink / raw)
  To: linux-rdma; +Cc: Leon Romanovsky, Matthew Wilcox

On Wed, Nov 04, 2020 at 05:40:59PM -0400, Jason Gunthorpe wrote:
> The xarray is never mutated from an IRQ handler, only from work queues
> under a spinlock_irq. Thus there is no reason for it to be an IRQ type
> xarray.
> 
> This was copied over from the original IDR code, but the recent rework put
> the xarray inside another spinlock_irq which will unbalance the unlocking.
> 
> Fixes: c206f8bad15d ("RDMA/cm: Make it clearer how concurrency works in cm_req_handler()")
> Reported-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/infiniband/core/cm.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)

Applied to for-rc, thanks

Jason

