rcu.vger.kernel.org archive mirror
* [PATCH v3 -rcu] workqueue: Convert for_each_wq to use built-in list check
@ 2019-08-15 14:18 Joel Fernandes (Google)
  2019-08-15 14:57 ` Matthew Wilcox
  2019-08-16 16:45 ` Paul E. McKenney
  0 siblings, 2 replies; 4+ messages in thread
From: Joel Fernandes (Google) @ 2019-08-15 14:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Joel Fernandes (Google),
	Greg Kroah-Hartman, Jonathan Corbet, Josh Triplett,
	Lai Jiangshan, linux-doc, Mathieu Desnoyers, Paul E. McKenney,
	Rafael J. Wysocki, rcu, Steven Rostedt, Tejun Heo

list_for_each_entry_rcu() now has built-in support for checking whether it is
called from within an RCU read-side critical section or with the relevant lock
held. Use that support instead of checking explicitly in the caller.

Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
v1->v3: Changed lock_is_held() to lockdep_is_held()

 kernel/workqueue.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 601d61150b65..e882477ebf6e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -364,11 +364,6 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
 			 !lockdep_is_held(&wq_pool_mutex),		\
 			 "RCU or wq_pool_mutex should be held")
 
-#define assert_rcu_or_wq_mutex(wq)					\
-	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
-			 !lockdep_is_held(&wq->mutex),			\
-			 "RCU or wq->mutex should be held")
-
 #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)			\
 	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
 			 !lockdep_is_held(&wq->mutex) &&		\
@@ -425,9 +420,8 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
  * ignored.
  */
 #define for_each_pwq(pwq, wq)						\
-	list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node)		\
-		if (({ assert_rcu_or_wq_mutex(wq); false; })) { }	\
-		else
+	list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node,		\
+				 lockdep_is_held(&(wq->mutex)))
 
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
 
-- 
2.23.0.rc1.153.gdeed80330f-goog



* Re: [PATCH v3 -rcu] workqueue: Convert for_each_wq to use built-in list check
  2019-08-15 14:18 [PATCH v3 -rcu] workqueue: Convert for_each_wq to use built-in list check Joel Fernandes (Google)
@ 2019-08-15 14:57 ` Matthew Wilcox
  2019-08-15 15:24   ` Joel Fernandes
  2019-08-16 16:45 ` Paul E. McKenney
  1 sibling, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2019-08-15 14:57 UTC (permalink / raw)
  To: Joel Fernandes (Google)
  Cc: linux-kernel, Greg Kroah-Hartman, Jonathan Corbet, Josh Triplett,
	Lai Jiangshan, linux-doc, Mathieu Desnoyers, Paul E. McKenney,
	Rafael J. Wysocki, rcu, Steven Rostedt, Tejun Heo

On Thu, Aug 15, 2019 at 10:18:42AM -0400, Joel Fernandes (Google) wrote:
> list_for_each_entry_rcu() now has built-in support for checking whether it
> is called from within an RCU read-side critical section or with the relevant
> lock held. Use that support instead of checking explicitly in the caller.

...

>  #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)			\
>  	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
>  			 !lockdep_is_held(&wq->mutex) &&		\

Can't you also get rid of this macro?

It's used in one place:

static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
                                                  int node)
{
        assert_rcu_or_wq_mutex_or_pool_mutex(wq);

        /*
         * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
         * delayed item is pending.  The plan is to keep CPU -> NODE
         * mapping valid and stable across CPU on/offlines.  Once that
         * happens, this workaround can be removed.
         */
        if (unlikely(node == NUMA_NO_NODE))
                return wq->dfl_pwq;

        return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}

Shouldn't we delete that assert and use

+	return rcu_dereference_check(wq->numa_pwq_tbl[node],
+			lockdep_is_held(&wq->mutex) ||
+			lockdep_is_held(&wq_pool_mutex));



* Re: [PATCH v3 -rcu] workqueue: Convert for_each_wq to use built-in list check
  2019-08-15 14:57 ` Matthew Wilcox
@ 2019-08-15 15:24   ` Joel Fernandes
  0 siblings, 0 replies; 4+ messages in thread
From: Joel Fernandes @ 2019-08-15 15:24 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, Greg Kroah-Hartman, Jonathan Corbet, Josh Triplett,
	Lai Jiangshan, linux-doc, Mathieu Desnoyers, Paul E. McKenney,
	Rafael J. Wysocki, rcu, Steven Rostedt, Tejun Heo

On Thu, Aug 15, 2019 at 07:57:49AM -0700, Matthew Wilcox wrote:
> On Thu, Aug 15, 2019 at 10:18:42AM -0400, Joel Fernandes (Google) wrote:
> > list_for_each_entry_rcu() now has built-in support for checking whether it
> > is called from within an RCU read-side critical section or with the
> > relevant lock held. Use that support instead of checking explicitly in the
> > caller.
> 
> ...
> 
> >  #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)			\
> >  	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
> >  			 !lockdep_is_held(&wq->mutex) &&		\
> 
> Can't you also get rid of this macro?

Could be, but that should be a different patch. I am only cleaning up the RCU
list lockdep checking in this series, since the series is what introduces that
concept. Please feel free to send a patch for that.

Arguably, keeping the macro around could also be beneficial in the future.

> It's used in one place:
> 
> static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
>                                                   int node)
> {
>         assert_rcu_or_wq_mutex_or_pool_mutex(wq);
> 
>         /*
>          * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
>          * delayed item is pending.  The plan is to keep CPU -> NODE
>          * mapping valid and stable across CPU on/offlines.  Once that
>          * happens, this workaround can be removed.
>          */
>         if (unlikely(node == NUMA_NO_NODE))
>                 return wq->dfl_pwq;
> 
>         return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
> }
> 
> Shouldn't we delete that assert and use
> 
> +	return rcu_dereference_check(wq->numa_pwq_tbl[node],
> +			lockdep_is_held(&wq->mutex) ||
> +			lockdep_is_held(&wq_pool_mutex));

Makes sense. This API also does sparse checking. Hopefully no sparse issues
show up because of rcu_dereference_check(), but any such issues should be
fixed as well.

thanks,

 - Joel

> 


* Re: [PATCH v3 -rcu] workqueue: Convert for_each_wq to use built-in list check
  2019-08-15 14:18 [PATCH v3 -rcu] workqueue: Convert for_each_wq to use built-in list check Joel Fernandes (Google)
  2019-08-15 14:57 ` Matthew Wilcox
@ 2019-08-16 16:45 ` Paul E. McKenney
  1 sibling, 0 replies; 4+ messages in thread
From: Paul E. McKenney @ 2019-08-16 16:45 UTC (permalink / raw)
  To: Joel Fernandes (Google)
  Cc: linux-kernel, Greg Kroah-Hartman, Jonathan Corbet, Josh Triplett,
	Lai Jiangshan, linux-doc, Mathieu Desnoyers, Rafael J. Wysocki,
	rcu, Steven Rostedt, Tejun Heo

On Thu, Aug 15, 2019 at 10:18:42AM -0400, Joel Fernandes (Google) wrote:
> list_for_each_entry_rcu() now has built-in support for checking whether it
> is called from within an RCU read-side critical section or with the relevant
> lock held. Use that support instead of checking explicitly in the caller.
> 
> Acked-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>

Pulled into -rcu for testing and further review, thank you!

							Thanx, Paul

> ---
> v1->v3: Changed lock_is_held() to lockdep_is_held()
> 
>  kernel/workqueue.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 601d61150b65..e882477ebf6e 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -364,11 +364,6 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
>  			 !lockdep_is_held(&wq_pool_mutex),		\
>  			 "RCU or wq_pool_mutex should be held")
>  
> -#define assert_rcu_or_wq_mutex(wq)					\
> -	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
> -			 !lockdep_is_held(&wq->mutex),			\
> -			 "RCU or wq->mutex should be held")
> -
>  #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)			\
>  	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
>  			 !lockdep_is_held(&wq->mutex) &&		\
> @@ -425,9 +420,8 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
>   * ignored.
>   */
>  #define for_each_pwq(pwq, wq)						\
> -	list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node)		\
> -		if (({ assert_rcu_or_wq_mutex(wq); false; })) { }	\
> -		else
> +	list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node,		\
> +				 lockdep_is_held(&(wq->mutex)))
>  
>  #ifdef CONFIG_DEBUG_OBJECTS_WORK
>  
> -- 
> 2.23.0.rc1.153.gdeed80330f-goog
> 

