From: Lyude Paul <lyude@redhat.com>
To: nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org
Cc: "Daniel Vetter" <daniel@ffwll.ch>, "Tejun Heo" <tj@kernel.org>,
	"Ville Syrjälä" <ville.syrjala@linux.intel.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Suren Baghdasaryan" <surenb@google.com>,
	"Petr Mladek" <pmladek@suse.com>,
	"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	"Steven Rostedt (VMware)" <rostedt@goodmis.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Ben Dooks" <ben.dooks@codethink.co.uk>,
	"Liang Chen" <cl@rock-chips.com>
Subject: [RFC v4 01/12] kthread: Add kthread_queue_flush_work()
Date: Fri,  8 May 2020 16:46:51 -0400
Message-ID: <20200508204751.155488-2-lyude@redhat.com>
In-Reply-To: <20200508204751.155488-1-lyude@redhat.com>

Currently, it's only possible to flush a kthread_work from contexts
where it's possible to block. This is rather painful when trying to
implement new types of delayed work on top of kthread_work, since it
means any new delayed work implementation would need to drop its
spinlocks before it could actually call kthread_flush_work().

In the window between dropping those locks and calling
kthread_flush_work(), the work might already have executed once and
been re-queued. In that case, when the caller finally does execute
kthread_flush_work(), it would accidentally wait on someone else's
queued work instead of its own.
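
For example, here's a minimal sketch of the race with the current API
(obj, its lock, and its work are hypothetical, purely for
illustration):

  /* Sketch only: obj, its lock, and its work are hypothetical */
  spin_lock(&obj->lock);
  /* ... decide that obj->work needs to be flushed ... */
  spin_unlock(&obj->lock);

  /*
   * obj->work may execute and get re-queued by another thread in this
   * window...
   */

  /* ...so this can end up blocking on the re-queued execution. */
  kthread_flush_work(&obj->work);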

For DRM vblank works it's preferable to just return immediately during
such races instead of blocking on the re-queue. Additionally, we want
to be able to use kthread_flush_work structs in our own contexts so
that we can block until a vblank work's target vblank has passed,
_and_ said work has executed once since. And finally, we also want to
be able to finish flushing a work early if it's been cancelled at any
point (e.g. before or after it's actually been queued on the
kthread_worker).

So, let's make all of these things possible by exposing struct
kthread_flush_work and splitting kthread_flush_work() into two
functions: kthread_queue_flush_work(), which handles possibly queuing
the kthread_flush_work on the work's respective kthread_worker, and
kthread_flush_work(), which performs a simple synchronous flush just
like before. We also add a DEFINE_KTHREAD_FLUSH_WORK() macro, which
simply initializes a kthread_flush_work struct inline (I can't see
anyone needing a kthread_flush_work that outlives the scope of a
single function, and supporting that seems like overkill anyway).
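
As an illustration, here's a rough sketch of how a caller in atomic
context might use the split API (again, obj and its lock are
hypothetical):

  /* Sketch only: obj and its lock are hypothetical */
  unsigned long flags;
  bool queued;
  DEFINE_KTHREAD_FLUSH_WORK(fwork);

  /* In atomic context: queue fwork directly behind obj->work */
  spin_lock_irqsave(&obj->lock, flags);
  queued = kthread_queue_flush_work(&obj->work, &fwork);
  spin_unlock_irqrestore(&obj->lock, flags);

  /*
   * Back in sleepable context: wait for the execution of obj->work
   * that fwork was queued behind. Later re-queues of obj->work don't
   * delay this completion.
   */
  if (queued)
          wait_for_completion(&fwork.done);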

Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Signed-off-by: Lyude Paul <lyude@redhat.com>
---
 include/linux/kthread.h | 16 ++++++++
 kernel/kthread.c        | 87 ++++++++++++++++++++++++++---------------
 2 files changed, 71 insertions(+), 32 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 8bbcaad7ef0f..0006540ce7f9 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -105,6 +105,11 @@ struct kthread_delayed_work {
 	struct timer_list timer;
 };
 
+struct kthread_flush_work {
+	struct kthread_work	work;
+	struct completion	done;
+};
+
 #define KTHREAD_WORKER_INIT(worker)	{				\
 	.lock = __RAW_SPIN_LOCK_UNLOCKED((worker).lock),		\
 	.work_list = LIST_HEAD_INIT((worker).work_list),		\
@@ -122,6 +127,11 @@ struct kthread_delayed_work {
 				     TIMER_IRQSAFE),			\
 	}
 
+#define KTHREAD_FLUSH_WORK_INIT(fwork) { \
+	KTHREAD_WORK_INIT((fwork).work, __kthread_flush_work_fn), \
+	COMPLETION_INITIALIZER_ONSTACK((fwork).done), \
+	}
+
 #define DEFINE_KTHREAD_WORKER(worker)					\
 	struct kthread_worker worker = KTHREAD_WORKER_INIT(worker)
 
@@ -132,6 +142,9 @@ struct kthread_delayed_work {
 	struct kthread_delayed_work dwork =				\
 		KTHREAD_DELAYED_WORK_INIT(dwork, fn)
 
+#define DEFINE_KTHREAD_FLUSH_WORK(fwork) \
+	struct kthread_flush_work fwork = KTHREAD_FLUSH_WORK_INIT(fwork)
+
 /*
  * kthread_worker.lock needs its own lockdep class key when defined on
  * stack with lockdep enabled.  Use the following macros in such cases.
@@ -190,6 +203,9 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 			      struct kthread_delayed_work *dwork,
 			      unsigned long delay);
 
+bool kthread_queue_flush_work(struct kthread_work *work,
+			      struct kthread_flush_work *fwork);
+void __kthread_flush_work_fn(struct kthread_work *work);
 void kthread_flush_work(struct kthread_work *work);
 void kthread_flush_worker(struct kthread_worker *worker);
 
diff --git a/kernel/kthread.c b/kernel/kthread.c
index bfbfa481be3a..c1f8ec9d5836 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -943,52 +943,78 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
 }
 EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
 
-struct kthread_flush_work {
-	struct kthread_work	work;
-	struct completion	done;
-};
-
-static void kthread_flush_work_fn(struct kthread_work *work)
-{
-	struct kthread_flush_work *fwork =
-		container_of(work, struct kthread_flush_work, work);
-	complete(&fwork->done);
-}
-
 /**
- * kthread_flush_work - flush a kthread_work
- * @work: work to flush
+ * kthread_queue_flush_work - queue a kthread_flush_work behind a queued
+ * kthread_work, for synchronizing with it later
+ * @work: The &kthread_work to synchronize with later
+ * @fwork: The &kthread_flush_work to queue
+ *
+ * When working with &kthread_work structs in contexts where sleeping isn't
+ * possible, it may be desirable to synchronize with a &kthread_work that's
+ * currently queued, but only after we've entered a context where it's safe to
+ * sleep again, and while making sure we don't block on any later re-queues of
+ * the work.
+ *
+ * If @work is queued or executing when kthread_queue_flush_work() is called,
+ * @fwork will be scheduled for execution immediately after @work. The caller
+ * can then later synchronize on @fwork.done, which will complete once @work
+ * has executed once or been cancelled since kthread_queue_flush_work() was
+ * called.
+ *
+ * Returns: %true if @fwork was queued, and the caller needs to call
+ * wait_for_completion() on @fwork.done to finish synchronizing; %false
+ * otherwise.
  *
- * If @work is queued or executing, wait for it to finish execution.
  */
-void kthread_flush_work(struct kthread_work *work)
+bool kthread_queue_flush_work(struct kthread_work *work,
+			      struct kthread_flush_work *fwork)
 {
-	struct kthread_flush_work fwork = {
-		KTHREAD_WORK_INIT(fwork.work, kthread_flush_work_fn),
-		COMPLETION_INITIALIZER_ONSTACK(fwork.done),
-	};
 	struct kthread_worker *worker;
-	bool noop = false;
+	unsigned long flags;
+	bool queued = true;
 
 	worker = work->worker;
 	if (!worker)
-		return;
+		return false;
 
-	raw_spin_lock_irq(&worker->lock);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
 	if (!list_empty(&work->node))
-		kthread_insert_work(worker, &fwork.work, work->node.next);
+		kthread_insert_work(worker, &fwork->work, work->node.next);
 	else if (worker->current_work == work)
-		kthread_insert_work(worker, &fwork.work,
+		kthread_insert_work(worker, &fwork->work,
 				    worker->work_list.next);
 	else
-		noop = true;
+		queued = false;
 
-	raw_spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
+	return queued;
+}
+EXPORT_SYMBOL_GPL(kthread_queue_flush_work);
+
+void __kthread_flush_work_fn(struct kthread_work *work)
+{
+	struct kthread_flush_work *fwork =
+		container_of(work, struct kthread_flush_work, work);
+	complete(&fwork->done);
+}
+EXPORT_SYMBOL_GPL(__kthread_flush_work_fn);
+
+/**
+ * kthread_flush_work - flush a kthread_work
+ * @work: work to flush
+ *
+ * If @work is queued or executing, wait for it to finish execution.
+ */
+void kthread_flush_work(struct kthread_work *work)
+{
+	bool queued;
+	DEFINE_KTHREAD_FLUSH_WORK(fwork);
 
-	if (!noop)
+	queued = kthread_queue_flush_work(work, &fwork);
+	if (queued)
 		wait_for_completion(&fwork.done);
 }
 EXPORT_SYMBOL_GPL(kthread_flush_work);
@@ -1170,10 +1196,7 @@ EXPORT_SYMBOL_GPL(kthread_cancel_delayed_work_sync);
  */
 void kthread_flush_worker(struct kthread_worker *worker)
 {
-	struct kthread_flush_work fwork = {
-		KTHREAD_WORK_INIT(fwork.work, kthread_flush_work_fn),
-		COMPLETION_INITIALIZER_ONSTACK(fwork.done),
-	};
+	DEFINE_KTHREAD_FLUSH_WORK(fwork);
 
 	kthread_queue_work(worker, &fwork.work);
 	wait_for_completion(&fwork.done);
-- 
2.25.4


