Subject: [to-be-updated] kthread-fix-kthread_mod_delayed_work-vs-kthread_cancel_delayed_work_sync-race.patch removed from -mm tree
From: akpm
Date: 2021-05-27 23:10 UTC
To: bp, davidchao, jenhaochen, jkosina, josh, liumartin, mhocko,
	mingo, mm-commits, nathan, ndesaulniers, oleg, paulmck, peterz,
	pmladek, rostedt, stable, tglx, tj, vbabka


The patch titled
     Subject: kthread: fix kthread_mod_delayed_work vs kthread_cancel_delayed_work_sync race
has been removed from the -mm tree.  Its filename was
     kthread-fix-kthread_mod_delayed_work-vs-kthread_cancel_delayed_work_sync-race.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Martin Liu <liumartin@google.com>
Subject: kthread: fix kthread_mod_delayed_work vs kthread_cancel_delayed_work_sync race

We encountered a system hang while running tests.  The call stack is as
follows:

	schedule+0x80/0x100
	schedule_timeout+0x48/0x138
	wait_for_common+0xa4/0x134
	wait_for_completion+0x1c/0x2c
	kthread_flush_work+0x114/0x1cc
	kthread_cancel_work_sync.llvm.16514401384283632983+0xe8/0x144
	kthread_cancel_delayed_work_sync+0x18/0x2c
	xxxx_pm_notify+0xb0/0xd8
	blocking_notifier_call_chain_robust+0x80/0x194
	pm_notifier_call_chain_robust+0x28/0x4c
	suspend_prepare+0x40/0x260
	enter_state+0x80/0x3f4
	pm_suspend+0x60/0xdc
	state_store+0x108/0x144
	kobj_attr_store+0x38/0x88
	sysfs_kf_write+0x64/0xc0
	kernfs_fop_write_iter+0x108/0x1d0
	vfs_write+0x2f4/0x368
	ksys_write+0x7c/0xec

When we started investigating, we found a race between
kthread_mod_delayed_work() and kthread_cancel_delayed_work_sync().  The
result of the race can be reproduced simply as a kthread_mod_delayed_work()
call followed by a kthread_flush_work() call (the two racing call sites are
sketched below).
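
For reference, here is a minimal sketch (not part of this patch) of two
call sites whose concurrent execution matches the race described here.
All demo_* names and the 100 ms delay are made up for illustration, and
this is not a deterministic reproducer: the hang only happens when the
narrow window shown in the diagram below is hit.

	#include <linux/err.h>
	#include <linux/init.h>
	#include <linux/jiffies.h>
	#include <linux/kthread.h>

	static struct kthread_worker *demo_worker;
	static struct kthread_delayed_work demo_dwork;

	static void demo_work_fn(struct kthread_work *work)
	{
		/* work body */
	}

	/* Context A: re-arms the delayed work, e.g. from a driver event. */
	static void demo_rearm(void)
	{
		kthread_mod_delayed_work(demo_worker, &demo_dwork,
					 msecs_to_jiffies(100));
	}

	/*
	 * Context B: tears the work down, e.g. from a PM notifier as in
	 * the call stack above; kthread_cancel_delayed_work_sync() ends
	 * up in kthread_flush_work() and waits for the flush completion.
	 */
	static void demo_teardown(void)
	{
		kthread_cancel_delayed_work_sync(&demo_dwork);
	}

	static int __init demo_init(void)
	{
		demo_worker = kthread_create_worker(0, "demo_worker");
		if (IS_ERR(demo_worker))
			return PTR_ERR(demo_worker);
		kthread_init_delayed_work(&demo_dwork, demo_work_fn);
		return 0;
	}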

The problem is that __kthread_cancel_work() releases the worker spinlock,
which opens a race window in which kthread_cancel_delayed_work_sync() can
change the canceling count (used to prevent the dwork from being requeued)
before it calls kthread_flush_work().  However, kthread_mod_delayed_work()
does not re-check the canceling count after __kthread_cancel_work()
returns and inserts the dwork into the worker anyway.  The subsequent
kthread_flush_work() then inserts the flush work at the tail of the dwork,
which still sits on the worker's delayed_work_list.  Therefore the flush
work will never be moved to the worker's work_list and executed, and
kthread_cancel_delayed_work_sync() will never complete; it waits forever.
The code sequence diagram is as follows:

Thread A                Thread B
kthread_mod_delayed_work
  spin_lock
   __kthread_cancel_work
    canceling = 1
    spin_unlock
                        kthread_cancel_delayed_work_sync
                          spin_lock
                            kthread_cancel_work
                            canceling = 2
                          spin_unlock
    del_timer_sync
    spin_lock
    canceling = 1       // canceling count was updated by Thread B in between
  queue_delayed_work    // dwork is put on the worker's delayed_work_list
                        // without checking the canceling count
  spin_unlock
                          kthread_flush_work
                            spin_lock
                            Insert flush work   // at the tail of the dwork,
                                                // which is on the worker's
                                                // delayed_work_list
                            spin_unlock
                            wait_for_completion // Thread B is stuck here as
                                                // the flush work will never
                                                // be executed

The canceling count can change in __kthread_cancel_work() because the
spinlock is released and re-acquired in between, so check the count again
before queueing the delayed work to avoid the race.

Link: https://lkml.kernel.org/r/20210513065458.941403-1-liumartin@google.com
Fixes: 37be45d49dec2 ("kthread: allow to cancel kthread work")
Signed-off-by: Martin Liu <liumartin@google.com>
Tested-by: David Chao <davidchao@google.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: <jenhaochen@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 kernel/kthread.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- a/kernel/kthread.c~kthread-fix-kthread_mod_delayed_work-vs-kthread_cancel_delayed_work_sync-race
+++ a/kernel/kthread.c
@@ -1181,6 +1181,19 @@ bool kthread_mod_delayed_work(struct kth
 		goto out;
 
 	ret = __kthread_cancel_work(work, true, &flags);
+
+	/*
+	 * Canceling could run in parallel from kthread_cancel_delayed_work_sync
+	 * and change work's canceling count while the spinlock is released and
+	 * regained in __kthread_cancel_work, so we need to check the count again.
+	 * Otherwise, we might incorrectly queue the dwork and leave the
+	 * kthread_cancel_delayed_work_sync caller waiting for the flush forever.
+	 */
+	if (work->canceling) {
+		ret = false;
+		goto out;
+	}
+
 fast_queue:
 	__kthread_queue_delayed_work(worker, dwork, delay);
 out:
_

Patches currently in -mm which might be from liumartin@google.com are


