linux-wireless.vger.kernel.org archive mirror
* [RFC PATCH v2 0/3] wifi locking simplification
@ 2023-05-10 20:12 Johannes Berg
  2023-05-10 20:12 ` [RFC PATCH v2 1/3] workqueue: support pausing ordered workqueues Johannes Berg
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Johannes Berg @ 2023-05-10 20:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-wireless, Tejun Heo, Lai Jiangshan

The rationale etc. didn't change in v2, so see
https://lore.kernel.org/r/20230510160428.175409-1-johannes@sipsolutions.net

In v2 I dropped the workqueue locking patch and instead added
new wiphy_work and wiphy_delayed_work infrastructure.

I kept the pause/resume for the workqueue because now that I
had it, it seemed simpler than keeping track of "should it be
paused" in a separate bool variable just to avoid queueing
work when it shouldn't run.

This still just serves to illustrate things; I didn't really test it.

johannes



^ permalink raw reply	[flat|nested] 6+ messages in thread

* [RFC PATCH v2 1/3] workqueue: support pausing ordered workqueues
  2023-05-10 20:12 [RFC PATCH v2 0/3] wifi locking simplification Johannes Berg
@ 2023-05-10 20:12 ` Johannes Berg
  2023-05-10 20:13   ` Tejun Heo
  2023-05-10 20:12 ` [RFC PATCH v2 2/3] wifi: cfg80211: add a workqueue with special semantics Johannes Berg
  2023-05-10 20:12 ` [RFC PATCH v2 3/3] wifi: cfg80211: move scan done work to cfg80211 workqueue Johannes Berg
  2 siblings, 1 reply; 6+ messages in thread
From: Johannes Berg @ 2023-05-10 20:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-wireless, Tejun Heo, Lai Jiangshan, Johannes Berg

From: Johannes Berg <johannes.berg@intel.com>

Add some infrastructure to support pausing ordered
workqueues, so that no work items are executing, nor
can any start executing, while the workqueue is paused.

This can be used to simplify locking between work
structs and other code paths (e.g. userspace calls):
while the workqueue is paused, no work can be running,
so code paths that need to cancel works can avoid
locking issues more easily.
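
To illustrate the intended locking pattern, here's a rough,
untested sketch with made-up driver names (not part of this
patch):

#include <linux/workqueue.h>

/* hypothetical driver state, only for illustration */
struct example_dev {
        struct workqueue_struct *wq;    /* from alloc_ordered_workqueue() */
        struct work_struct some_work;
        int shared_state;
};

static void example_reconfig(struct example_dev *dev)
{
        /*
         * After this returns, no work item on dev->wq is running
         * and none can start until workqueue_resume().
         */
        workqueue_pause(dev->wq);

        /* safe without cancel_work_sync(): the work cannot be executing */
        cancel_work(&dev->some_work);
        dev->shared_state++;

        workqueue_resume(dev->wq);
}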

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
v2:
 - fix bug with new_max_active not being used
 - don't unify pause/resume
 - looked at unifying pause/freeze, but the conditions
   are very different so I didn't find a plausible way
---
 include/linux/workqueue.h |  4 +++
 kernel/workqueue.c        | 54 ++++++++++++++++++++++++++++++++++++---
 2 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index ac551b8ee7d9..7a76d27d325f 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -340,6 +340,7 @@ enum {
 	__WQ_ORDERED		= 1 << 17, /* internal: workqueue is ordered */
 	__WQ_LEGACY		= 1 << 18, /* internal: create*_workqueue() */
 	__WQ_ORDERED_EXPLICIT	= 1 << 19, /* internal: alloc_ordered_workqueue() */
+	__WQ_PAUSED		= 1 << 20, /* internal: workqueue_pause() */
 
 	WQ_MAX_ACTIVE		= 512,	  /* I like 512, better ideas? */
 	WQ_MAX_UNBOUND_PER_CPU	= 4,	  /* 4 * #cpus for unbound wq */
@@ -475,6 +476,9 @@ extern void show_all_workqueues(void);
 extern void show_one_workqueue(struct workqueue_struct *wq);
 extern void wq_worker_comm(char *buf, size_t size, struct task_struct *task);
 
+void workqueue_pause(struct workqueue_struct *wq);
+void workqueue_resume(struct workqueue_struct *wq);
+
 /**
  * queue_work - queue work on a workqueue
  * @wq: workqueue to use
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b8b541caed48..12e8b003568f 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3863,12 +3863,18 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
 	struct workqueue_struct *wq = pwq->wq;
 	bool freezable = wq->flags & WQ_FREEZABLE;
 	unsigned long flags;
+	int new_max_active;
 
-	/* for @wq->saved_max_active */
+	/* for @wq->saved_max_active and @wq->flags */
 	lockdep_assert_held(&wq->mutex);
 
+	if (wq->flags & __WQ_PAUSED)
+		new_max_active = 0;
+	else
+		new_max_active = wq->saved_max_active;
+
 	/* fast exit for non-freezable wqs */
-	if (!freezable && pwq->max_active == wq->saved_max_active)
+	if (!freezable && pwq->max_active == new_max_active)
 		return;
 
 	/* this function can be called during early boot w/ irq disabled */
@@ -3882,7 +3888,7 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
 	if (!freezable || !workqueue_freezing) {
 		bool kick = false;
 
-		pwq->max_active = wq->saved_max_active;
+		pwq->max_active = new_max_active;
 
 		while (!list_empty(&pwq->inactive_works) &&
 		       pwq->nr_active < pwq->max_active) {
@@ -4642,6 +4648,48 @@ void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
 }
 EXPORT_SYMBOL_GPL(workqueue_set_max_active);
 
+/**
+ * workqueue_pause - pause a workqueue
+ * @wq: workqueue to pause
+ *
+ * Pause (and flush) the given workqueue so that it is not executing
+ * any work structs and will not do so until workqueue_resume() is called.
+ */
+void workqueue_pause(struct workqueue_struct *wq)
+{
+	struct pool_workqueue *pwq;
+
+	mutex_lock(&wq->mutex);
+	wq->flags |= __WQ_PAUSED;
+
+	for_each_pwq(pwq, wq)
+		pwq_adjust_max_active(pwq);
+	mutex_unlock(&wq->mutex);
+
+	flush_workqueue(wq);
+}
+EXPORT_SYMBOL_GPL(workqueue_pause);
+
+/**
+ * workqueue_resume - resume a paused workqueue
+ * @wq: workqueue to resume
+ *
+ * Resume the given (previously paused) workqueue so that it
+ * executes work structs again.
+ */
+void workqueue_resume(struct workqueue_struct *wq)
+{
+	struct pool_workqueue *pwq;
+
+	mutex_lock(&wq->mutex);
+	wq->flags &= ~__WQ_PAUSED;
+
+	for_each_pwq(pwq, wq)
+		pwq_adjust_max_active(pwq);
+	mutex_unlock(&wq->mutex);
+}
+EXPORT_SYMBOL_GPL(workqueue_resume);
+
 /**
  * current_work - retrieve %current task's work struct
  *
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [RFC PATCH v2 2/3] wifi: cfg80211: add a workqueue with special semantics
  2023-05-10 20:12 [RFC PATCH v2 0/3] wifi locking simplification Johannes Berg
  2023-05-10 20:12 ` [RFC PATCH v2 1/3] workqueue: support pausing ordered workqueues Johannes Berg
@ 2023-05-10 20:12 ` Johannes Berg
  2023-05-11 21:50   ` Johannes Berg
  2023-05-10 20:12 ` [RFC PATCH v2 3/3] wifi: cfg80211: move scan done work to cfg80211 workqueue Johannes Berg
  2 siblings, 1 reply; 6+ messages in thread
From: Johannes Berg @ 2023-05-10 20:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-wireless, Tejun Heo, Lai Jiangshan, Johannes Berg

From: Johannes Berg <johannes.berg@intel.com>

The special semantics are that the workqueue is paused during
wiphy_lock(), so nothing can run, or even start running, on it
while the wiphy mutex is held.
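
To make the intended driver-side usage concrete, a rough,
untested sketch with made-up names (not part of this patch):

#include <net/cfg80211.h>

/* hypothetical driver state, only for illustration */
struct example_priv {
        struct wiphy *wiphy;
        struct wiphy_work reconnect_work;
};

static void example_reconnect(struct wiphy *wiphy, struct wiphy_work *work)
{
        /*
         * Use container_of(work, struct example_priv, reconnect_work)
         * to get back to the driver data; this runs with wiphy->mtx
         * held and never concurrently with a wiphy_lock() section.
         */
}

static void example_start(struct example_priv *priv)
{
        wiphy_work_init(&priv->reconnect_work, example_reconnect);

        /* can be called from any context */
        wiphy_work_queue(priv->wiphy, &priv->reconnect_work);
}

static void example_stop(struct example_priv *priv)
{
        wiphy_lock(priv->wiphy);
        /* no _sync needed: the work cannot be running while we hold the lock */
        wiphy_work_cancel(priv->wiphy, &priv->reconnect_work);
        wiphy_unlock(priv->wiphy);
}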

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
v2:
 - dropped the previous workqueue locking patch
 - therefore, rework this to have a specific wiphy
   work struct (and delayed), which is maybe better
   anyway since it's (more) type-safe
---
 include/net/cfg80211.h | 109 +++++++++++++++++++++++++++-----
 net/wireless/core.c    | 138 +++++++++++++++++++++++++++++++++++++++++
 net/wireless/core.h    |   6 ++
 net/wireless/nl80211.c |   8 ++-
 4 files changed, 245 insertions(+), 16 deletions(-)

diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index f115b2550309..c9318ddf495e 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -5142,6 +5142,7 @@ struct wiphy_iftype_akm_suites {
 /**
  * struct wiphy - wireless hardware description
  * @mtx: mutex for the data (structures) of this device
+ * @mtx_fully_held: the mutex was held with workqueue pause/flush
  * @reg_notifier: the driver's regulatory notification callback,
  *	note that if your driver uses wiphy_apply_custom_regulatory()
  *	the reg_notifier's request can be passed as NULL
@@ -5351,6 +5352,9 @@ struct wiphy_iftype_akm_suites {
  */
 struct wiphy {
 	struct mutex mtx;
+#ifdef CONFIG_LOCKDEP
+	bool mtx_fully_held;
+#endif
 
 	/* assign these fields before you register the wiphy */
 
@@ -5669,31 +5673,108 @@ struct cfg80211_cqm_config;
  * wiphy_lock - lock the wiphy
  * @wiphy: the wiphy to lock
  *
- * This is mostly exposed so it can be done around registering and
- * unregistering netdevs that aren't created through cfg80211 calls,
- * since that requires locking in cfg80211 when the notifiers is
- * called, but that cannot differentiate which way it's called.
+ * This is needed around registering and unregistering netdevs that
+ * aren't created through cfg80211 calls, since that requires locking
+ * in cfg80211 when the notifier is called, but that cannot
+ * differentiate which way it's called.
+ *
+ * It can also be used by drivers for their own purposes.
  *
  * When cfg80211 ops are called, the wiphy is already locked.
+ *
+ * Note that this makes sure that no workers that have been queued
+ * with wiphy_work_queue() are running.
  */
-static inline void wiphy_lock(struct wiphy *wiphy)
-	__acquires(&wiphy->mtx)
-{
-	mutex_lock(&wiphy->mtx);
-	__acquire(&wiphy->mtx);
-}
+void wiphy_lock(struct wiphy *wiphy) __acquires(&wiphy->mtx);
 
 /**
  * wiphy_unlock - unlock the wiphy again
  * @wiphy: the wiphy to unlock
  */
-static inline void wiphy_unlock(struct wiphy *wiphy)
-	__releases(&wiphy->mtx)
+void wiphy_unlock(struct wiphy *wiphy) __releases(&wiphy->mtx);
+
+struct wiphy_work;
+typedef void (*wiphy_work_func_t)(struct wiphy *, struct wiphy_work *);
+
+struct wiphy_work {
+	struct list_head entry;
+	wiphy_work_func_t func;
+};
+
+static inline void wiphy_work_init(struct wiphy_work *work,
+				   wiphy_work_func_t func)
 {
-	__release(&wiphy->mtx);
-	mutex_unlock(&wiphy->mtx);
+	INIT_LIST_HEAD(&work->entry);
+	work->func = func;
 }
 
+/**
+ * wiphy_work_queue - queue work for the wiphy
+ * @wiphy: the wiphy to queue for
+ * @work: the work item
+ *
+ * This is useful for work that must be done asynchronously, and work
+ * queued here has the special property that the wiphy mutex will be
+ * held as if wiphy_lock() was called, and that it cannot be running
+ * after wiphy_lock() was called. Therefore, wiphy_work_cancel() need
+ * not wait for the work (cancel_work() rather than cancel_work_sync()),
+ * but it requires being in a section protected by wiphy_lock().
+ */
+void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work);
+
+/**
+ * wiphy_work_cancel - cancel previously queued work
+ * @wiphy: the wiphy, for debug purposes
+ * @work: the work to cancel
+ *
+ * Cancel the work *without* waiting for it; this assumes it is
+ * called under the wiphy mutex acquired by wiphy_lock().
+ */
+void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work);
+
+struct wiphy_delayed_work {
+	struct wiphy_work work;
+	struct wiphy *wiphy;
+	struct timer_list timer;
+};
+
+void wiphy_delayed_work_timer(struct timer_list *t);
+
+static inline void wiphy_delayed_work_init(struct wiphy_delayed_work *dwork,
+					   wiphy_work_func_t func)
+{
+	timer_setup(&dwork->timer, wiphy_delayed_work_timer, 0);
+	wiphy_work_init(&dwork->work, func);
+}
+
+/**
+ * wiphy_delayed_work_queue - queue delayed work for the wiphy
+ * @wiphy: the wiphy to queue for
+ * @dwork: the delayable worker
+ * @delay: number of jiffies to wait before queueing
+ *
+ * This is useful for work that must be done asynchronously, and work
+ * queued here has the special property that the wiphy mutex will be
+ * held as if wiphy_lock() was called, and that it cannot be running
+ * after wiphy_lock() was called. Therefore, wiphy_delayed_work_cancel()
+ * need not wait for the work (cancel_work() rather than
+ * cancel_work_sync()), but it requires being in a section protected by
+ * wiphy_lock().
+ */
+void wiphy_delayed_work_queue(struct wiphy *wiphy,
+			      struct wiphy_delayed_work *dwork,
+			      unsigned long delay);
+
+/**
+ * wiphy_delayed_work_cancel - cancel previously queued delayed work
+ * @wiphy: the wiphy, for debug purposes
+ * @dwork: the delayed work to cancel
+ *
+ * Cancel the work *without* waiting for it; this assumes it is
+ * called under the wiphy mutex acquired by wiphy_lock().
+ */
+void wiphy_delayed_work_cancel(struct wiphy *wiphy,
+			       struct wiphy_delayed_work *dwork);
+
 /**
  * struct wireless_dev - wireless device state
  *
diff --git a/net/wireless/core.c b/net/wireless/core.c
index 5b0c4d5b80cf..357fd5425bff 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -408,6 +408,28 @@ static void cfg80211_propagate_cac_done_wk(struct work_struct *work)
 	rtnl_unlock();
 }
 
+static void cfg80211_wiphy_work(struct work_struct *work)
+{
+	struct cfg80211_registered_device *rdev;
+
+	rdev = container_of(work, struct cfg80211_registered_device, wiphy_work);
+
+	spin_lock_irq(&rdev->wiphy_work_lock);
+	while (!list_empty(&rdev->wiphy_work_list)) {
+		struct wiphy_work *wk;
+
+		wk = list_first_entry(&rdev->wiphy_work_list,
+				      struct wiphy_work, entry);
+		list_del_init(&wk->entry);
+		spin_unlock_irq(&rdev->wiphy_work_lock);
+
+		mutex_lock(&rdev->wiphy.mtx);
+		wk->func(&rdev->wiphy, wk);
+		mutex_unlock(&rdev->wiphy.mtx);
+	}
+	spin_unlock_irq(&rdev->wiphy_work_lock);
+}
+
 /* exported functions */
 
 struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
@@ -533,6 +555,15 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
 		return NULL;
 	}
 
+	rdev->wq = alloc_workqueue("%s", 0, 0, dev_name(&rdev->wiphy.dev));
+	if (!rdev->wq) {
+		wiphy_free(&rdev->wiphy);
+		return NULL;
+	}
+
+	INIT_WORK(&rdev->wiphy_work, cfg80211_wiphy_work);
+	INIT_LIST_HEAD(&rdev->wiphy_work_list);
+	spin_lock_init(&rdev->wiphy_work_lock);
 	INIT_WORK(&rdev->rfkill_block, cfg80211_rfkill_block_work);
 	INIT_WORK(&rdev->conn_work, cfg80211_conn_work);
 	INIT_WORK(&rdev->event_work, cfg80211_event_work);
@@ -1068,6 +1099,13 @@ void wiphy_unregister(struct wiphy *wiphy)
 	wiphy_unlock(&rdev->wiphy);
 	rtnl_unlock();
 
+	/*
+	 * Flush again, even though wiphy_lock() did so above: something
+	 * might still have queued work while the code above was running,
+	 * e.g. via debugfs.
+	 */
+	flush_workqueue(rdev->wq);
+
 	flush_work(&rdev->scan_done_wk);
 	cancel_work_sync(&rdev->conn_work);
 	flush_work(&rdev->event_work);
@@ -1093,6 +1131,10 @@ void cfg80211_dev_free(struct cfg80211_registered_device *rdev)
 {
 	struct cfg80211_internal_bss *scan, *tmp;
 	struct cfg80211_beacon_registration *reg, *treg;
+
+	if (rdev->wq) /* might be NULL in error cases */
+		destroy_workqueue(rdev->wq);
+
 	rfkill_destroy(rdev->wiphy.rfkill);
 	list_for_each_entry_safe(reg, treg, &rdev->beacon_registrations, list) {
 		list_del(&reg->list);
@@ -1564,6 +1606,102 @@ static struct pernet_operations cfg80211_pernet_ops = {
 	.exit = cfg80211_pernet_exit,
 };
 
+void wiphy_lock(struct wiphy *wiphy)
+{
+	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+
+	workqueue_pause(rdev->wq);
+
+	mutex_lock(&wiphy->mtx);
+
+#ifdef CONFIG_LOCKDEP
+	wiphy->mtx_fully_held = true;
+#endif
+}
+EXPORT_SYMBOL(wiphy_lock);
+
+void wiphy_unlock(struct wiphy *wiphy)
+{
+	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+
+#ifdef CONFIG_LOCKDEP
+	WARN_ON_ONCE(!wiphy->mtx_fully_held);
+#endif
+
+	workqueue_resume(rdev->wq);
+
+#ifdef CONFIG_LOCKDEP
+	wiphy->mtx_fully_held = false;
+#endif
+
+	mutex_unlock(&wiphy->mtx);
+}
+EXPORT_SYMBOL(wiphy_unlock);
+
+void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work)
+{
+	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+	unsigned long flags;
+
+	spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+	if (list_empty(&work->entry))
+		list_add_tail(&work->entry, &rdev->wiphy_work_list);
+	spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
+
+	queue_work(rdev->wq, &rdev->wiphy_work);
+}
+EXPORT_SYMBOL_GPL(wiphy_work_queue);
+
+void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work)
+{
+	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+	unsigned long flags;
+
+#ifdef CONFIG_LOCKDEP
+	lockdep_assert_held(&wiphy->mtx);
+	WARN_ON_ONCE(!wiphy->mtx_fully_held);
+#endif
+	spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+	if (!list_empty(&work->entry))
+		list_del_init(&work->entry);
+	spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
+}
+EXPORT_SYMBOL_GPL(wiphy_work_cancel);
+
+void wiphy_delayed_work_timer(struct timer_list *t)
+{
+	struct wiphy_delayed_work *dwork = from_timer(dwork, t, timer);
+
+	wiphy_work_queue(dwork->wiphy, &dwork->work);
+}
+EXPORT_SYMBOL(wiphy_delayed_work_timer);
+
+void wiphy_delayed_work_queue(struct wiphy *wiphy,
+			      struct wiphy_delayed_work *dwork,
+			      unsigned long delay)
+{
+	if (!delay) {
+		wiphy_work_queue(wiphy, &dwork->work);
+		return;
+	}
+
+	dwork->wiphy = wiphy;
+	mod_timer(&dwork->timer, jiffies + delay);
+}
+EXPORT_SYMBOL_GPL(wiphy_delayed_work_queue);
+
+void wiphy_delayed_work_cancel(struct wiphy *wiphy,
+			       struct wiphy_delayed_work *dwork)
+{
+#ifdef CONFIG_LOCKDEP
+	lockdep_assert_held(&wiphy->mtx);
+	WARN_ON_ONCE(!wiphy->mtx_fully_held);
+#endif
+	del_timer_sync(&dwork->timer);
+	wiphy_work_cancel(wiphy, &dwork->work);
+}
+EXPORT_SYMBOL_GPL(wiphy_delayed_work_cancel);
+
 static int __init cfg80211_init(void)
 {
 	int err;
diff --git a/net/wireless/core.h b/net/wireless/core.h
index 7c61752f6d83..adb45fa38acb 100644
--- a/net/wireless/core.h
+++ b/net/wireless/core.h
@@ -108,6 +108,12 @@ struct cfg80211_registered_device {
 	/* lock for all wdev lists */
 	spinlock_t mgmt_registrations_lock;
 
+	struct workqueue_struct *wq;
+	struct work_struct wiphy_work;
+	struct list_head wiphy_work_list;
+	/* protects the list above */
+	spinlock_t wiphy_work_lock;
+
 	/* must be last because of the way we do wiphy_priv(),
 	 * and it should at least be aligned to NETDEV_ALIGN */
 	struct wiphy wiphy __aligned(NETDEV_ALIGN);
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index 112b4bb009c8..1fb4978f7649 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -1002,7 +1002,9 @@ static int nl80211_prepare_wdev_dump(struct netlink_callback *cb,
 			return PTR_ERR(*wdev);
 		}
 		*rdev = wiphy_to_rdev((*wdev)->wiphy);
-		mutex_lock(&(*rdev)->wiphy.mtx);
+		wiphy_lock(&(*rdev)->wiphy);
+		/* the conditional locking is too hard for sparse */
+		__release(&(*rdev)->wiphy.mtx);
 		rtnl_unlock();
 		/* 0 is the first index - add 1 to parse only once */
 		cb->args[0] = (*rdev)->wiphy_idx + 1;
@@ -1032,7 +1034,9 @@ static int nl80211_prepare_wdev_dump(struct netlink_callback *cb,
 			rtnl_unlock();
 			return -ENODEV;
 		}
-		mutex_lock(&(*rdev)->wiphy.mtx);
+		wiphy_lock(&(*rdev)->wiphy);
+		/* the conditional locking is too hard for sparse */
+		__release(&(*rdev)->wiphy.mtx);
 		rtnl_unlock();
 	}
 
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [RFC PATCH v2 3/3] wifi: cfg80211: move scan done work to cfg80211 workqueue
  2023-05-10 20:12 [RFC PATCH v2 0/3] wifi locking simplification Johannes Berg
  2023-05-10 20:12 ` [RFC PATCH v2 1/3] workqueue: support pausing ordered workqueues Johannes Berg
  2023-05-10 20:12 ` [RFC PATCH v2 2/3] wifi: cfg80211: add a workqueue with special semantics Johannes Berg
@ 2023-05-10 20:12 ` Johannes Berg
  2 siblings, 0 replies; 6+ messages in thread
From: Johannes Berg @ 2023-05-10 20:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-wireless, Tejun Heo, Lai Jiangshan, Johannes Berg

From: Johannes Berg <johannes.berg@intel.com>

Now that we have the cfg80211 workqueue, move the scan_done work
to it as an example.
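
The pattern for converting other work items would be roughly the
following (made-up names, just to show the shape of the change):

#include <net/cfg80211.h>

/* hypothetical module state, only for illustration */
struct foo {
        struct wiphy *wiphy;
        struct wiphy_work update_work;  /* was: struct work_struct */
};

/*
 * Was: static void foo_update(struct work_struct *work), which had to
 * take wiphy_lock() itself; now the wiphy mutex is already held.
 */
static void foo_update(struct wiphy *wiphy, struct wiphy_work *work)
{
        /* ... actual update ... */
}

static void foo_trigger(struct foo *f)
{
        /* was: queue_work(cfg80211_wq, &f->update_work) */
        wiphy_work_queue(f->wiphy, &f->update_work);
}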

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
v2:
 - use the new version of the cfg80211 patch
---
 net/wireless/core.c |  3 +--
 net/wireless/core.h |  4 ++--
 net/wireless/scan.c | 14 ++++----------
 3 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/net/wireless/core.c b/net/wireless/core.c
index 357fd5425bff..3671a50796c2 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -517,7 +517,7 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
 	spin_lock_init(&rdev->bss_lock);
 	INIT_LIST_HEAD(&rdev->bss_list);
 	INIT_LIST_HEAD(&rdev->sched_scan_req_list);
-	INIT_WORK(&rdev->scan_done_wk, __cfg80211_scan_done);
+	wiphy_work_init(&rdev->scan_done_wk, __cfg80211_scan_done);
 	INIT_DELAYED_WORK(&rdev->dfs_update_channels_wk,
 			  cfg80211_dfs_channels_update_work);
 #ifdef CONFIG_CFG80211_WEXT
@@ -1106,7 +1106,6 @@ void wiphy_unregister(struct wiphy *wiphy)
 	 */
 	flush_workqueue(rdev->wq);
 
-	flush_work(&rdev->scan_done_wk);
 	cancel_work_sync(&rdev->conn_work);
 	flush_work(&rdev->event_work);
 	cancel_delayed_work_sync(&rdev->dfs_update_channels_wk);
diff --git a/net/wireless/core.h b/net/wireless/core.h
index adb45fa38acb..cbd864224856 100644
--- a/net/wireless/core.h
+++ b/net/wireless/core.h
@@ -75,7 +75,7 @@ struct cfg80211_registered_device {
 	struct sk_buff *scan_msg;
 	struct list_head sched_scan_req_list;
 	time64_t suspend_at;
-	struct work_struct scan_done_wk;
+	struct wiphy_work scan_done_wk;
 
 	struct genl_info *cur_cmd_info;
 
@@ -441,7 +441,7 @@ bool cfg80211_valid_key_idx(struct cfg80211_registered_device *rdev,
 int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev,
 				   struct key_params *params, int key_idx,
 				   bool pairwise, const u8 *mac_addr);
-void __cfg80211_scan_done(struct work_struct *wk);
+void __cfg80211_scan_done(struct wiphy *wiphy, struct wiphy_work *wk);
 void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev,
 			   bool send_message);
 void cfg80211_add_sched_scan_req(struct cfg80211_registered_device *rdev,
diff --git a/net/wireless/scan.c b/net/wireless/scan.c
index 790bc31cf82e..752eae5e0a00 100644
--- a/net/wireless/scan.c
+++ b/net/wireless/scan.c
@@ -1000,16 +1000,9 @@ void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev,
 		nl80211_send_scan_msg(rdev, msg);
 }
 
-void __cfg80211_scan_done(struct work_struct *wk)
+void __cfg80211_scan_done(struct wiphy *wiphy, struct wiphy_work *wk)
 {
-	struct cfg80211_registered_device *rdev;
-
-	rdev = container_of(wk, struct cfg80211_registered_device,
-			    scan_done_wk);
-
-	wiphy_lock(&rdev->wiphy);
-	___cfg80211_scan_done(rdev, true);
-	wiphy_unlock(&rdev->wiphy);
+	___cfg80211_scan_done(wiphy_to_rdev(wiphy), true);
 }
 
 void cfg80211_scan_done(struct cfg80211_scan_request *request,
@@ -1035,7 +1028,8 @@ void cfg80211_scan_done(struct cfg80211_scan_request *request,
 	}
 
 	request->notified = true;
-	queue_work(cfg80211_wq, &wiphy_to_rdev(request->wiphy)->scan_done_wk);
+	wiphy_work_queue(request->wiphy,
+			 &wiphy_to_rdev(request->wiphy)->scan_done_wk);
 }
 EXPORT_SYMBOL(cfg80211_scan_done);
 
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [RFC PATCH v2 1/3] workqueue: support pausing ordered workqueues
  2023-05-10 20:12 ` [RFC PATCH v2 1/3] workqueue: support pausing ordered workqueues Johannes Berg
@ 2023-05-10 20:13   ` Tejun Heo
  0 siblings, 0 replies; 6+ messages in thread
From: Tejun Heo @ 2023-05-10 20:13 UTC (permalink / raw)
  To: Johannes Berg; +Cc: linux-kernel, linux-wireless, Lai Jiangshan, Johannes Berg

On Wed, May 10, 2023 at 10:12:03PM +0200, Johannes Berg wrote:
> From: Johannes Berg <johannes.berg@intel.com>
> 
> Add some infrastructure to support pausing ordered
> workqueues, so that no work items are executing, nor
> can any start executing, while the workqueue is paused.
> 
> This can be used to simplify locking between work
> structs and other code paths (e.g. userspace calls):
> while the workqueue is paused, no work can be running,
> so code paths that need to cancel works can avoid
> locking issues more easily.
> 
> Signed-off-by: Johannes Berg <johannes.berg@intel.com>

I'd be happy to apply this once the rest of the patchset is ready.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC PATCH v2 2/3] wifi: cfg80211: add a workqueue with special semantics
  2023-05-10 20:12 ` [RFC PATCH v2 2/3] wifi: cfg80211: add a workqueue with special semantics Johannes Berg
@ 2023-05-11 21:50   ` Johannes Berg
  0 siblings, 0 replies; 6+ messages in thread
From: Johannes Berg @ 2023-05-11 21:50 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-wireless, Tejun Heo, Lai Jiangshan

Now that I look at this more ...

> +static void cfg80211_wiphy_work(struct work_struct *work)
> +{
> +	struct cfg80211_registered_device *rdev;
> +
> +	rdev = container_of(work, struct cfg80211_registered_device, wiphy_work);
> +
> +	spin_lock_irq(&rdev->wiphy_work_lock);
> +	while (!list_empty(&rdev->wiphy_work_list)) {
> +		struct wiphy_work *wk;
> +
> +		wk = list_first_entry(&rdev->wiphy_work_list,
> +				      struct wiphy_work, entry);
> +		list_del_init(&wk->entry);
> +		spin_unlock_irq(&rdev->wiphy_work_lock);
> +
> +		mutex_lock(&rdev->wiphy.mtx);

If I just change the locking here to take the wiphy.mtx before looking
at the list (which basically doesn't matter), then I don't even need
workqueue_pause() and all that, nor do I even need a separate workqueue;
plain schedule_work() will be good enough ... It just needs a _bit_ more
work when cancelling, and here we should reschedule the work if the list
isn't empty after the first round, but overall that ends up far simpler.
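
A rough sketch of what that reworked handler could look like
(untested, just to illustrate; the queueing side would then use
schedule_work() instead of queue_work(rdev->wq, ...)):

static void cfg80211_wiphy_work(struct work_struct *work)
{
        struct cfg80211_registered_device *rdev;
        struct wiphy_work *wk;

        rdev = container_of(work, struct cfg80211_registered_device,
                            wiphy_work);

        /*
         * Taking the wiphy mutex first means a wiphy_lock() holder
         * fully excludes this handler, so no pause/resume is needed.
         */
        mutex_lock(&rdev->wiphy.mtx);

        spin_lock_irq(&rdev->wiphy_work_lock);
        wk = list_first_entry_or_null(&rdev->wiphy_work_list,
                                      struct wiphy_work, entry);
        if (wk) {
                list_del_init(&wk->entry);
                /* more to do? run again instead of looping here */
                if (!list_empty(&rdev->wiphy_work_list))
                        schedule_work(work);
        }
        spin_unlock_irq(&rdev->wiphy_work_lock);

        if (wk)
                wk->func(&rdev->wiphy, wk);

        mutex_unlock(&rdev->wiphy.mtx);
}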

So I think I'll drop the workqueue pause/resume the next time around,
FWIW.

johannes


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2023-05-11 21:50 UTC | newest]

Thread overview: 6+ messages
2023-05-10 20:12 [RFC PATCH v2 0/3] wifi locking simplification Johannes Berg
2023-05-10 20:12 ` [RFC PATCH v2 1/3] workqueue: support pausing ordered workqueues Johannes Berg
2023-05-10 20:13   ` Tejun Heo
2023-05-10 20:12 ` [RFC PATCH v2 2/3] wifi: cfg80211: add a workqueue with special semantics Johannes Berg
2023-05-11 21:50   ` Johannes Berg
2023-05-10 20:12 ` [RFC PATCH v2 3/3] wifi: cfg80211: move scan done work to cfg80211 workqueue Johannes Berg
