All of lore.kernel.org
* [PATCH 0/4] workqueue and lockdep stuffs..
@ 2017-08-23 11:58 Peter Zijlstra
  2017-08-23 11:58 ` [PATCH 1/4] workqueue: Use TASK_IDLE Peter Zijlstra
                   ` (4 more replies)
  0 siblings, 5 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-23 11:58 UTC (permalink / raw)
  To: mingo
  Cc: byungchul.park, tj, boqun.feng, david, johannes, oleg,
	linux-kernel, peterz

Hi all,

These are the patches I propose to deal with the current workqueue vs lockdep
fallout.

TJ, if you like 1/4, please pick it up yourself; it's completely independent and
not really that important. I'll carry the rest to make sure tip/locking stays
functional.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 1/4] workqueue: Use TASK_IDLE
  2017-08-23 11:58 [PATCH 0/4] workqueue and lockdep stuffs Peter Zijlstra
@ 2017-08-23 11:58 ` Peter Zijlstra
  2017-08-23 13:31   ` Tejun Heo
  2017-08-23 11:58 ` [PATCH 2/4] lockdep/selftests: Add mixed read-write ABBA tests Peter Zijlstra
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-23 11:58 UTC (permalink / raw)
  To: mingo
  Cc: byungchul.park, tj, boqun.feng, david, johannes, oleg,
	linux-kernel, peterz

[-- Attachment #1: peterz-wq-task_idle.patch --]
[-- Type: text/plain, Size: 860 bytes --]

Workqueues don't use signals; they (ab)use TASK_INTERRUPTIBLE to avoid
increasing the loadavg numbers. We've 'recently' introduced TASK_IDLE
for this case:

  80ed87c8a9ca ("sched/wait: Introduce TASK_NOLOAD and TASK_IDLE")

use it.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2249,7 +2249,7 @@ static int worker_thread(void *__worker)
 	 * event.
 	 */
 	worker_enter_idle(worker);
-	__set_current_state(TASK_INTERRUPTIBLE);
+	__set_current_state(TASK_IDLE);
 	spin_unlock_irq(&pool->lock);
 	schedule();
 	goto woke_up;
@@ -2291,7 +2291,7 @@ static int rescuer_thread(void *__rescue
 	 */
 	rescuer->task->flags |= PF_WQ_WORKER;
 repeat:
-	set_current_state(TASK_INTERRUPTIBLE);
+	set_current_state(TASK_IDLE);
 
 	/*
 	 * By the time the rescuer is requested to stop, the workqueue


* [PATCH 2/4] lockdep/selftests: Add mixed read-write ABBA tests
  2017-08-23 11:58 [PATCH 0/4] workqueue and lockdep stuffs Peter Zijlstra
  2017-08-23 11:58 ` [PATCH 1/4] workqueue: Use TASK_IDLE Peter Zijlstra
@ 2017-08-23 11:58 ` Peter Zijlstra
  2017-08-23 11:58 ` [PATCH 3/4] workqueue/lockdep: Fix flush_work() annotation Peter Zijlstra
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-23 11:58 UTC (permalink / raw)
  To: mingo
  Cc: byungchul.park, tj, boqun.feng, david, johannes, oleg,
	linux-kernel, peterz

[-- Attachment #1: peterz-locking-selftest.patch --]
[-- Type: text/plain, Size: 3119 bytes --]

Currently lockdep has limited support for recursive readers; add a few
mixed read-write ABBA selftests to show the extent of these
limitations.

[    0.000000] ----------------------------------------------------------------------------
[    0.000000]                                  | spin |wlock |rlock |mutex | wsem | rsem |
[    0.000000]   --------------------------------------------------------------------------

[    0.000000]   mixed read-lock/lock-write ABBA:             |FAILED|             |  ok  |
[    0.000000]    mixed read-lock/lock-read ABBA:             |  ok  |             |  ok  |
[    0.000000]  mixed write-lock/lock-write ABBA:             |  ok  |             |  ok  |

This clearly illustrates the case where lockdep fails to find a
deadlock.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -363,6 +363,103 @@ static void rsem_AA3(void)
 }
 
 /*
+ * read_lock(A)
+ * spin_lock(B)
+ *		spin_lock(B)
+ *		write_lock(A)
+ */
+static void rlock_ABBA1(void)
+{
+	RL(X1);
+	L(Y1);
+	U(Y1);
+	RU(X1);
+
+	L(Y1);
+	WL(X1);
+	WU(X1);
+	U(Y1); // should fail
+}
+
+static void rwsem_ABBA1(void)
+{
+	RSL(X1);
+	ML(Y1);
+	MU(Y1);
+	RSU(X1);
+
+	ML(Y1);
+	WSL(X1);
+	WSU(X1);
+	MU(Y1); // should fail
+}
+
+/*
+ * read_lock(A)
+ * spin_lock(B)
+ *		spin_lock(B)
+ *		read_lock(A)
+ */
+static void rlock_ABBA2(void)
+{
+	RL(X1);
+	L(Y1);
+	U(Y1);
+	RU(X1);
+
+	L(Y1);
+	RL(X1);
+	RU(X1);
+	U(Y1); // should NOT fail
+}
+
+static void rwsem_ABBA2(void)
+{
+	RSL(X1);
+	ML(Y1);
+	MU(Y1);
+	RSU(X1);
+
+	ML(Y1);
+	RSL(X1);
+	RSU(X1);
+	MU(Y1); // should fail
+}
+
+
+/*
+ * write_lock(A)
+ * spin_lock(B)
+ *		spin_lock(B)
+ *		write_lock(A)
+ */
+static void rlock_ABBA3(void)
+{
+	WL(X1);
+	L(Y1);
+	U(Y1);
+	WU(X1);
+
+	L(Y1);
+	WL(X1);
+	WU(X1);
+	U(Y1); // should fail
+}
+
+static void rwsem_ABBA3(void)
+{
+	WSL(X1);
+	ML(Y1);
+	MU(Y1);
+	WSU(X1);
+
+	ML(Y1);
+	WSL(X1);
+	WSU(X1);
+	MU(Y1); // should fail
+}
+
+/*
  * ABBA deadlock:
  */
 
@@ -1056,8 +1153,6 @@ static void dotest(void (*testcase_fn)(v
 	if (debug_locks != expected) {
 		unexpected_testcase_failures++;
 		pr_cont("FAILED|");
-
-		dump_stack();
 	} else {
 		testcase_successes++;
 		pr_cont("  ok  |");
@@ -1933,6 +2028,24 @@ void locking_selftest(void)
 	dotest(rsem_AA3, FAILURE, LOCKTYPE_RWSEM);
 	pr_cont("\n");
 
+	print_testname("mixed read-lock/lock-write ABBA");
+	pr_cont("             |");
+	dotest(rlock_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA1, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("mixed read-lock/lock-read ABBA");
+	pr_cont("             |");
+	dotest(rlock_ABBA2, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA2, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("mixed write-lock/lock-write ABBA");
+	pr_cont("             |");
+	dotest(rlock_ABBA3, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA3, FAILURE, LOCKTYPE_RWSEM);
+
 	printk("  --------------------------------------------------------------------------\n");
 
 	/*


* [PATCH 3/4] workqueue/lockdep: Fix flush_work() annotation
  2017-08-23 11:58 [PATCH 0/4] workqueue and lockdep stuffs Peter Zijlstra
  2017-08-23 11:58 ` [PATCH 1/4] workqueue: Use TASK_IDLE Peter Zijlstra
  2017-08-23 11:58 ` [PATCH 2/4] lockdep/selftests: Add mixed read-write ABBA tests Peter Zijlstra
@ 2017-08-23 11:58 ` Peter Zijlstra
  2017-08-23 11:58 ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Peter Zijlstra
  2017-08-23 13:32 ` [PATCH 0/4] workqueue and lockdep stuffs Tejun Heo
  4 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-23 11:58 UTC (permalink / raw)
  To: mingo
  Cc: byungchul.park, tj, boqun.feng, david, johannes, oleg,
	linux-kernel, peterz

[-- Attachment #1: peterz-wq-flush-lockdep.patch --]
[-- Type: text/plain, Size: 2475 bytes --]

The flush_work() annotation as introduced by commit:

  e159489baa71 ("workqueue: relax lockdep annotation on flush_work()")

hits on the lockdep problem with recursive read locks.

The situation as described is:

Work W1:                Work W2:        Task:

ARR(Q)                  ARR(Q)		flush_workqueue(Q)
A(W1)                   A(W2)             A(Q)
  flush_work(W2)			  R(Q)
    A(W2)
    R(W2)
    if (special)
      A(Q)
    else
      ARR(Q)
    R(Q)

where: A - acquire, ARR - acquire-read-recursive, R - release.

Where under 'special' conditions we want to trigger a lock recursion
deadlock, but otherwise allow the flush_work(). The allowing is done
by using recursive read locks (ARR), but lockdep is broken for
recursive stuff.

However, there appears to be no need to acquire the lock if we're not
'special', so if we remove the 'else' clause things become much
simpler and no longer need the recursion thing at all.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
---
 kernel/workqueue.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2091,7 +2091,7 @@ __acquires(&pool->lock)
 
 	spin_unlock_irq(&pool->lock);
 
-	lock_map_acquire_read(&pwq->wq->lockdep_map);
+	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
 	crossrelease_hist_start(XHLOCK_PROC);
 	trace_workqueue_execute_start(work);
@@ -2817,16 +2817,18 @@ static bool start_flush_work(struct work
 	spin_unlock_irq(&pool->lock);
 
 	/*
-	 * If @max_active is 1 or rescuer is in use, flushing another work
-	 * item on the same workqueue may lead to deadlock.  Make sure the
-	 * flusher is not running on the same workqueue by verifying write
-	 * access.
+	 * Force a lock recursion deadlock when using flush_work() inside a
+	 * single-threaded or rescuer equipped workqueue.
+	 *
+	 * For single threaded workqueues the deadlock happens when the work
+	 * is after the work issuing the flush_work(). For rescuer equipped
+	 * workqueues the deadlock happens when the rescuer stalls, blocking
+	 * forward progress.
 	 */
-	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)
+	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
 		lock_map_acquire(&pwq->wq->lockdep_map);
-	else
-		lock_map_acquire_read(&pwq->wq->lockdep_map);
-	lock_map_release(&pwq->wq->lockdep_map);
+		lock_map_release(&pwq->wq->lockdep_map);
+	}
 
 	return true;
 already_gone:


* [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-23 11:58 [PATCH 0/4] workqueue and lockdep stuffs Peter Zijlstra
                   ` (2 preceding siblings ...)
  2017-08-23 11:58 ` [PATCH 3/4] workqueue/lockdep: Fix flush_work() annotation Peter Zijlstra
@ 2017-08-23 11:58 ` Peter Zijlstra
  2017-08-24  2:18   ` Byungchul Park
  2017-08-29  6:46   ` Byungchul Park
  2017-08-23 13:32 ` [PATCH 0/4] workqueue and lockdep stuffs Tejun Heo
  4 siblings, 2 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-23 11:58 UTC (permalink / raw)
  To: mingo
  Cc: byungchul.park, tj, boqun.feng, david, johannes, oleg,
	linux-kernel, peterz

[-- Attachment #1: peterz-lockdep-cross-fix.patch --]
[-- Type: text/plain, Size: 7685 bytes --]

The new completion/crossrelease annotations interact unfavourably with
the extant flush_work()/flush_workqueue() annotations.

The problem is that when a single work class does:

  wait_for_completion(&C)

and

  complete(&C)

in different executions, we'll build dependencies like:

  lock_map_acquire(W)
  complete_acquire(C)

and

  lock_map_acquire(W)
  complete_release(C)

which results in the dependency chain: W->C->W, which lockdep thinks
spells deadlock, even though there is no deadlock potential since
works are run concurrently.

One possibility would be to change the work 'lock' to recursive-read,
but that would mean hitting a lockdep limitation on recursive locks.
Also, unconditionally switching to recursive-read here would fail to
detect the actual deadlock on single-threaded workqueues, which do
have a problem with this.

For now, forcefully disregard these locks for crossrelease.


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/irqflags.h |    4 +--
 include/linux/lockdep.h  |    8 +++---
 kernel/locking/lockdep.c |   60 +++++++++++++++++++++++++++++------------------
 kernel/workqueue.c       |   23 +++++++++++++++++-
 4 files changed, 66 insertions(+), 29 deletions(-)

--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -26,7 +26,7 @@
 # define trace_hardirq_enter()			\
 do {						\
 	current->hardirq_context++;		\
-	crossrelease_hist_start(XHLOCK_HARD);	\
+	crossrelease_hist_start(XHLOCK_HARD, 0);\
 } while (0)
 # define trace_hardirq_exit()			\
 do {						\
@@ -36,7 +36,7 @@ do {						\
 # define lockdep_softirq_enter()		\
 do {						\
 	current->softirq_context++;		\
-	crossrelease_hist_start(XHLOCK_SOFT);	\
+	crossrelease_hist_start(XHLOCK_SOFT, 0);\
 } while (0)
 # define lockdep_softirq_exit()			\
 do {						\
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -578,11 +578,11 @@ extern void lock_commit_crosslock(struct
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
 	{ .name = (_name), .key = (void *)(_key), .cross = 0, }
 
-extern void crossrelease_hist_start(enum xhlock_context_t c);
+extern void crossrelease_hist_start(enum xhlock_context_t c, bool force);
 extern void crossrelease_hist_end(enum xhlock_context_t c);
 extern void lockdep_init_task(struct task_struct *task);
 extern void lockdep_free_task(struct task_struct *task);
-#else
+#else /* !CROSSRELEASE */
 #define lockdep_init_map_crosslock(m, n, k, s) do {} while (0)
 /*
  * To initialize a lockdep_map statically use this macro.
@@ -591,11 +591,11 @@ extern void lockdep_free_task(struct tas
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
 	{ .name = (_name), .key = (void *)(_key), }
 
-static inline void crossrelease_hist_start(enum xhlock_context_t c) {}
+static inline void crossrelease_hist_start(enum xhlock_context_t c, bool force) {}
 static inline void crossrelease_hist_end(enum xhlock_context_t c) {}
 static inline void lockdep_init_task(struct task_struct *task) {}
 static inline void lockdep_free_task(struct task_struct *task) {}
-#endif
+#endif /* CROSSRELEASE */
 
 #ifdef CONFIG_LOCK_STAT
 
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4629,7 +4629,7 @@ asmlinkage __visible void lockdep_sys_ex
 	 * the index to point to the last entry, which is already invalid.
 	 */
 	crossrelease_hist_end(XHLOCK_PROC);
-	crossrelease_hist_start(XHLOCK_PROC);
+	crossrelease_hist_start(XHLOCK_PROC, false);
 }
 
 void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
@@ -4725,25 +4725,25 @@ static inline void invalidate_xhlock(str
 /*
  * Lock history stacks; we have 3 nested lock history stacks:
  *
- *   Hard IRQ
- *   Soft IRQ
- *   History / Task
- *
- * The thing is that once we complete a (Hard/Soft) IRQ the future task locks
- * should not depend on any of the locks observed while running the IRQ.
- *
- * So what we do is rewind the history buffer and erase all our knowledge of
- * that temporal event.
- */
-
-/*
- * We need this to annotate lock history boundaries. Take for instance
- * workqueues; each work is independent of the last. The completion of a future
- * work does not depend on the completion of a past work (in general).
- * Therefore we must not carry that (lock) dependency across works.
+ *   HARD(IRQ)
+ *   SOFT(IRQ)
+ *   PROC(ess)
+ *
+ * The thing is that once we complete a HARD/SOFT IRQ the future task locks
+ * should not depend on any of the locks observed while running the IRQ.  So
+ * what we do is rewind the history buffer and erase all our knowledge of that
+ * temporal event.
+ *
+ * The PROCess one is special though; it is used to annotate independence
+ * inside a task.
+ *
+ * Take for instance workqueues; each work is independent of the last. The
+ * completion of a future work does not depend on the completion of a past work
+ * (in general). Therefore we must not carry that (lock) dependency across
+ * works.
  *
  * This is true for many things; pretty much all kthreads fall into this
- * pattern, where they have an 'idle' state and future completions do not
+ * pattern, where they have an invariant state and future completions do not
  * depend on past completions. Its just that since they all have the 'same'
  * form -- the kthread does the same over and over -- it doesn't typically
  * matter.
@@ -4751,15 +4751,31 @@ static inline void invalidate_xhlock(str
  * The same is true for system-calls, once a system call is completed (we've
  * returned to userspace) the next system call does not depend on the lock
  * history of the previous system call.
+ *
+ * The key property for independence, this invariant state, is that it must be
+ * a point where we hold no locks and have no history. Because if we were to
+ * hold locks, the restore at _end() would not necessarily recover its history
+ * entry. Similarly, independence per definition means it does not depend on
+ * prior state.
  */
-void crossrelease_hist_start(enum xhlock_context_t c)
+void crossrelease_hist_start(enum xhlock_context_t c, bool force)
 {
 	struct task_struct *cur = current;
 
-	if (cur->xhlocks) {
-		cur->xhlock_idx_hist[c] = cur->xhlock_idx;
-		cur->hist_id_save[c] = cur->hist_id;
+	if (!cur->xhlocks)
+		return;
+
+	/*
+	 * We call this at an invariant point, no current state, no history.
+	 */
+	if (c == XHLOCK_PROC) {
+		/* verified the former, ensure the latter */
+		WARN_ON_ONCE(!force && cur->lockdep_depth);
+		invalidate_xhlock(&xhlock(cur->xhlock_idx));
 	}
+
+	cur->xhlock_idx_hist[c] = cur->xhlock_idx;
+	cur->hist_id_save[c]    = cur->hist_id;
 }
 
 void crossrelease_hist_end(enum xhlock_context_t c)
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2093,7 +2093,28 @@ __acquires(&pool->lock)
 
 	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
-	crossrelease_hist_start(XHLOCK_PROC);
+	/*
+	 * Strictly speaking we should do start(PROC) without holding any
+	 * locks, that is, before these two lock_map_acquire()'s.
+	 *
+	 * However, that would result in:
+	 *
+	 *   A(W1)
+	 *   WFC(C)
+	 *		A(W1)
+	 *		C(C)
+	 *
+	 * Which would create W1->C->W1 dependencies, even though there is no
+	 * actual deadlock possible. There are two solutions: using a
+	 * read-recursive acquire on the work(queue) 'locks', but this will then
+	 * hit the lockdep limitation on recursive locks, or simply discarding
+	 * these locks.
+	 *
+	 * AFAICT there is no possible deadlock scenario between the
+	 * flush_work() and complete() primitives (except for single-threaded
+	 * workqueues), so hiding them isn't a problem.
+	 */
+	crossrelease_hist_start(XHLOCK_PROC, true);
 	trace_workqueue_execute_start(work);
 	worker->current_func(work);
 	/*


* Re: [PATCH 1/4] workqueue: Use TASK_IDLE
  2017-08-23 11:58 ` [PATCH 1/4] workqueue: Use TASK_IDLE Peter Zijlstra
@ 2017-08-23 13:31   ` Tejun Heo
  0 siblings, 0 replies; 61+ messages in thread
From: Tejun Heo @ 2017-08-23 13:31 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, byungchul.park, boqun.feng, david, johannes, oleg, linux-kernel

On Wed, Aug 23, 2017 at 01:58:44PM +0200, Peter Zijlstra wrote:
> Workqueues don't use signals; they (ab)use TASK_INTERRUPTIBLE to avoid
> increasing the loadavg numbers. We've 'recently' introduced TASK_IDLE
> for this case:
> 
>   80ed87c8a9ca ("sched/wait: Introduce TASK_NOLOAD and TASK_IDLE")
> 
> use it.

Applied to wq/for-4.14.

Thanks.

-- 
tejun


* Re: [PATCH 0/4] workqueue and lockdep stuffs..
  2017-08-23 11:58 [PATCH 0/4] workqueue and lockdep stuffs Peter Zijlstra
                   ` (3 preceding siblings ...)
  2017-08-23 11:58 ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Peter Zijlstra
@ 2017-08-23 13:32 ` Tejun Heo
  2017-08-23 13:45   ` Peter Zijlstra
  4 siblings, 1 reply; 61+ messages in thread
From: Tejun Heo @ 2017-08-23 13:32 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, byungchul.park, boqun.feng, david, johannes, oleg, linux-kernel

On Wed, Aug 23, 2017 at 01:58:43PM +0200, Peter Zijlstra wrote:
> Hi all,
> 
> These are the patches I propose to deal with the current workqueue vs lockdep
> fallout.
> 
> TJ, if you like 1/4, please pick it up yourself; it's completely independent and
> not really that important. I'll carry the rest to make sure tip/locking stays
> functional.

Took the first one.  Please feel free to add my ack on the workqueue
part and route through tip.

Thanks.

-- 
tejun


* Re: [PATCH 0/4] workqueue and lockdep stuffs..
  2017-08-23 13:32 ` [PATCH 0/4] workqueue and lockdep stuffs Tejun Heo
@ 2017-08-23 13:45   ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-23 13:45 UTC (permalink / raw)
  To: Tejun Heo
  Cc: mingo, byungchul.park, boqun.feng, david, johannes, oleg, linux-kernel

On Wed, Aug 23, 2017 at 06:32:25AM -0700, Tejun Heo wrote:
> On Wed, Aug 23, 2017 at 01:58:43PM +0200, Peter Zijlstra wrote:
> > Hi all,
> > 
> > These are the patches I propose to deal with the current workqueue vs lockdep
> > fallout.
> > 
> > TJ, if you like 1/4, please pick it up yourself; it's completely independent and
> > not really that important. I'll carry the rest to make sure tip/locking stays
> > functional.
> 
> Took the first one.  Please feel free to add my ack on the workqueue
> part and route through tip.

Thanks TJ!


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-23 11:58 ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Peter Zijlstra
@ 2017-08-24  2:18   ` Byungchul Park
  2017-08-24 14:02     ` Peter Zijlstra
  2017-08-29  6:46   ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-24  2:18 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel

On Wed, Aug 23, 2017 at 01:58:47PM +0200, Peter Zijlstra wrote:
> The new completion/crossrelease annotations interact unfavourably with
> the extant flush_work()/flush_workqueue() annotations.
> 
> The problem is that when a single work class does:
> 
>   wait_for_completion(&C)
> 
> and
> 
>   complete(&C)
> 
> in different executions, we'll build dependencies like:
> 
>   lock_map_acquire(W)
>   complete_acquire(C)
> 
> and
> 
>   lock_map_acquire(W)
>   complete_release(C)
> 
> which results in the dependency chain: W->C->W, which lockdep thinks
> spells deadlock, even though there is no deadlock potential since
> works are run concurrently.
> 
> One possibility would be to change the work 'lock' to recursive-read,

I'm not sure if this solves the issue perfectly, but anyway it should be
a recursive version after fixing lockdep, regardless of the issue.

> but that would mean hitting a lockdep limitation on recursive locks.

For now, a work-around might be needed.

> Also, unconditionally switching to recursive-read here would fail to
> detect the actual deadlock on single-threaded workqueues, which do

Do you mean it's true even in the case where lockdep has been fixed properly?
Could you explain why if so? IMHO, I don't think so.

> @@ -4751,15 +4751,31 @@ static inline void invalidate_xhlock(str
>   * The same is true for system-calls, once a system call is completed (we've
>   * returned to userspace) the next system call does not depend on the lock
>   * history of the previous system call.
> + *
> + * The key property for independence, this invariant state, is that it must be
> + * a point where we hold no locks and have no history. Because if we were to
> + * hold locks, the restore at _end() would not necessarily recover its history
> + * entry. Similarly, independence per-definition means it does not depend on
> + * prior state.
>   */
> -void crossrelease_hist_start(enum xhlock_context_t c)
> +void crossrelease_hist_start(enum xhlock_context_t c, bool force)
>  {
>  	struct task_struct *cur = current;
>  
> -	if (cur->xhlocks) {
> -		cur->xhlock_idx_hist[c] = cur->xhlock_idx;
> -		cur->hist_id_save[c] = cur->hist_id;
> +	if (!cur->xhlocks)
> +		return;
> +
> +	/*
> +	 * We call this at an invariant point, no current state, no history.
> +	 */

This very work-around code _must_ be removed after fixing the read-recursive
thing in lockdep. I think it would be better to add a tag(comment)
saying it.

> +	if (c == XHLOCK_PROC) {
> +		/* verified the former, ensure the latter */
> +		WARN_ON_ONCE(!force && cur->lockdep_depth);
> +		invalidate_xhlock(&xhlock(cur->xhlock_idx));
>  	}
> +
> +	cur->xhlock_idx_hist[c] = cur->xhlock_idx;
> +	cur->hist_id_save[c]    = cur->hist_id;
>  }
>  
>  void crossrelease_hist_end(enum xhlock_context_t c)


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-24  2:18   ` Byungchul Park
@ 2017-08-24 14:02     ` Peter Zijlstra
  2017-08-25  1:11       ` Byungchul Park
  2017-08-25  4:39       ` Byungchul Park
  0 siblings, 2 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-24 14:02 UTC (permalink / raw)
  To: Byungchul Park; +Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel

On Thu, Aug 24, 2017 at 11:18:40AM +0900, Byungchul Park wrote:
> On Wed, Aug 23, 2017 at 01:58:47PM +0200, Peter Zijlstra wrote:

> > Also, unconditionally switching to recursive-read here would fail to
> > detect the actual deadlock on single-threaded workqueues, which do
> 
> Do you mean it's true even in case having fixed lockdep properly?
> Could you explain why if so? IMHO, I don't think so.

I'm saying that if lockdep is fixed it should be:

	if (wq->saved_max_active == 1 || wq->rescuer) {
		lock_map_acquire(wq->lockdep_map);
		lock_map_acquire(lockdep_map);
	} else {
		lock_map_acquire_read(wq->lockdep_map);
		lock_map_acquire_read(lockdep_map);
	}

or something like that, because for a single-threaded workqueue, the
following _IS_ a deadlock:

	work-n:
		wait_for_completion(C);

	work-n+1:
		complete(C);

And that is the only case we now fail to catch.

> > +void crossrelease_hist_start(enum xhlock_context_t c, bool force)
> >  {
> >  	struct task_struct *cur = current;
> >  
> > -	if (cur->xhlocks) {
> > -		cur->xhlock_idx_hist[c] = cur->xhlock_idx;
> > -		cur->hist_id_save[c] = cur->hist_id;
> > +	if (!cur->xhlocks)
> > +		return;
> > +
> > +	/*
> > +	 * We call this at an invariant point, no current state, no history.
> > +	 */
> 
> This very work-around code _must_ be removed after fixing read-recursive
> thing in lockdep. I think it would be better to add a tag(comment)
> saying it.
> 
> > +	if (c == XHLOCK_PROC) {
> > +		/* verified the former, ensure the latter */
> > +		WARN_ON_ONCE(!force && cur->lockdep_depth);
> > +		invalidate_xhlock(&xhlock(cur->xhlock_idx));
> >  	}

No, this is not a work around, this is fundamentally so. It's not going
away. The only thing that should go away is the .force argument.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-24 14:02     ` Peter Zijlstra
@ 2017-08-25  1:11       ` Byungchul Park
  2017-08-29  8:59         ` Peter Zijlstra
  2017-08-25  4:39       ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-25  1:11 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Thu, Aug 24, 2017 at 04:02:40PM +0200, Peter Zijlstra wrote:
> On Thu, Aug 24, 2017 at 11:18:40AM +0900, Byungchul Park wrote:
> > On Wed, Aug 23, 2017 at 01:58:47PM +0200, Peter Zijlstra wrote:
> 
> > > Also, unconditionally switching to recursive-read here would fail to
> > > detect the actual deadlock on single-threaded workqueues, which do
> > 
> > Do you mean it's true even in case having fixed lockdep properly?
> > Could you explain why if so? IMHO, I don't think so.
> 
> I'm saying that if lockdep is fixed it should be:
> 
> 	if (wq->saved_max_active == 1 || wq->rescuer) {
> 		lock_map_acquire(wq->lockdep_map);
> 		lock_map_acquire(lockdep_map);
> 	} else {
> 		lock_map_acquire_read(wq->lockdep_map);
> 		lock_map_acquire_read(lockdep_map);
> 	}
> 
> or something like that, because for a single-threaded workqueue, the
> following _IS_ a deadlock:
> 
> 	work-n:
> 		wait_for_completion(C);
> 
> 	work-n+1:
> 		complete(C);
> 
> And that is the only case we now fail to catch.

Thank you for explanation.

> > > +void crossrelease_hist_start(enum xhlock_context_t c, bool force)
> > >  {
> > >  	struct task_struct *cur = current;
> > >  
> > > -	if (cur->xhlocks) {
> > > -		cur->xhlock_idx_hist[c] = cur->xhlock_idx;
> > > -		cur->hist_id_save[c] = cur->hist_id;
> > > +	if (!cur->xhlocks)
> > > +		return;
> > > +
> > > +	/*
> > > +	 * We call this at an invariant point, no current state, no history.
> > > +	 */
> > 
> > This very work-around code _must_ be removed after fixing read-recursive
> > thing in lockdep. I think it would be better to add a tag(comment)
> > saying it.
> > 
> > > +	if (c == XHLOCK_PROC) {
> > > +		/* verified the former, ensure the latter */
> > > +		WARN_ON_ONCE(!force && cur->lockdep_depth);
> > > +		invalidate_xhlock(&xhlock(cur->xhlock_idx));
> > >  	}
> 
> No, this is not a work around, this is fundamentally so. It's not going
> away. The only thing that should go away is the .force argument.

I meant, this seems to stem from your misunderstanding of
crossrelease_hist_{start, end}().

A user of force == 1 should not exist or doesn't have to exist. I am sure you
haven't read my replies. Please read the following at least:

https://lkml.org/lkml/2017/8/24/126


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-24 14:02     ` Peter Zijlstra
  2017-08-25  1:11       ` Byungchul Park
@ 2017-08-25  4:39       ` Byungchul Park
  1 sibling, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-08-25  4:39 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Thu, Aug 24, 2017 at 04:02:40PM +0200, Peter Zijlstra wrote:
> > > +	if (c == XHLOCK_PROC) {

I found this now. Are you trying to invalidate it w/o checking force?
No, we _should not_ do this. It's worse than work-around code.

No reason to do this here. Please communicate with me more or understand
how this code works before applying it.

> > > +		/* verified the former, ensure the latter */
> > > +		WARN_ON_ONCE(!force && cur->lockdep_depth);
> > > +		invalidate_xhlock(&xhlock(cur->xhlock_idx));
> > >  	}


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-23 11:58 ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Peter Zijlstra
  2017-08-24  2:18   ` Byungchul Park
@ 2017-08-29  6:46   ` Byungchul Park
  2017-08-29  9:01     ` Peter Zijlstra
  1 sibling, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-29  6:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 23, 2017 at 01:58:47PM +0200, Peter Zijlstra wrote:
> The new completion/crossrelease annotations interact unfavourably with
> the extant flush_work()/flush_workqueue() annotations.
> 
> The problem is that when a single work class does:
> 
>   wait_for_completion(&C)
> 
> and
> 
>   complete(&C)
> 
> in different executions, we'll build dependencies like:
> 
>   lock_map_acquire(W)
>   complete_acquire(C)
> 
> and
> 
>   lock_map_acquire(W)
>   complete_release(C)
> 
> which results in the dependency chain: W->C->W, which lockdep thinks
> spells deadlock, even though there is no deadlock potential since
> works are run concurrently.
> 
> One possibility would be to change the work 'lock' to recursive-read,
> but that would mean hitting a lockdep limitation on recursive locks.
> Also, unconditionally switching to recursive-read here would fail to
> detect the actual deadlock on single-threaded workqueues, which do
> have a problem with this.
> 
> For now, forcefully disregard these locks for crossrelease.

Eventually, you pushed this patch to the tip tree without any comment.

I don't really understand you.

How can a maintainer choose such a work-around method and avoid
the problem rather than fix the root cause? I am very disappointed.

But, I have nothing to do against your will.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-25  1:11       ` Byungchul Park
@ 2017-08-29  8:59         ` Peter Zijlstra
  2017-08-29 14:23           ` [tip:locking/core] locking/lockdep: Untangle xhlock history save/restore from task independence tip-bot for Peter Zijlstra
                             ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29  8:59 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Fri, Aug 25, 2017 at 10:11:14AM +0900, Byungchul Park wrote:
> I meant that this seems to stem from your misunderstanding of
> crossrelease_hist_{start, end}().

I have, several times now, explained why PROC is special.

You seem to still think it can be used like the soft/hard-irq ones; this
is fundamentally not so.

Does something like so help?

---
Subject: lockdep: Untangle xhlock history save/restore from task independence

Where XHLOCK_{SOFT,HARD} are save/restore points in the xhlocks[] to
ensure the temporal IRQ events don't interact with task state, the
XHLOCK_PROC is a fundamentally different beast that just happens to share
the interface.

The purpose of XHLOCK_PROC is to annotate independent execution inside
one task. For example workqueues, each work should appear to run in its
own 'pristine' 'task'.

Remove XHLOCK_PROC in favour of its own interface to avoid confusion.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/irqflags.h |  4 +--
 include/linux/lockdep.h  |  7 +++--
 kernel/locking/lockdep.c | 79 +++++++++++++++++++++++-------------------------
 kernel/workqueue.c       |  9 +++---
 4 files changed, 48 insertions(+), 51 deletions(-)

diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index 9bc050bc81b2..5fdd93bb9300 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -26,7 +26,7 @@
 # define trace_hardirq_enter()			\
 do {						\
 	current->hardirq_context++;		\
-	crossrelease_hist_start(XHLOCK_HARD, 0);\
+	crossrelease_hist_start(XHLOCK_HARD);	\
 } while (0)
 # define trace_hardirq_exit()			\
 do {						\
@@ -36,7 +36,7 @@ do {						\
 # define lockdep_softirq_enter()		\
 do {						\
 	current->softirq_context++;		\
-	crossrelease_hist_start(XHLOCK_SOFT, 0);\
+	crossrelease_hist_start(XHLOCK_SOFT);	\
 } while (0)
 # define lockdep_softirq_exit()			\
 do {						\
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 78bb7133abed..bfa8e0b0d6f1 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -551,7 +551,6 @@ struct pin_cookie { };
 enum xhlock_context_t {
 	XHLOCK_HARD,
 	XHLOCK_SOFT,
-	XHLOCK_PROC,
 	XHLOCK_CTX_NR,
 };
 
@@ -580,8 +579,9 @@ extern void lock_commit_crosslock(struct lockdep_map *lock);
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
 	{ .name = (_name), .key = (void *)(_key), .cross = 0, }
 
-extern void crossrelease_hist_start(enum xhlock_context_t c, bool force);
+extern void crossrelease_hist_start(enum xhlock_context_t c);
 extern void crossrelease_hist_end(enum xhlock_context_t c);
+extern void lockdep_invariant_state(bool force);
 extern void lockdep_init_task(struct task_struct *task);
 extern void lockdep_free_task(struct task_struct *task);
 #else /* !CROSSRELEASE */
@@ -593,8 +593,9 @@ extern void lockdep_free_task(struct task_struct *task);
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
 	{ .name = (_name), .key = (void *)(_key), }
 
-static inline void crossrelease_hist_start(enum xhlock_context_t c, bool force) {}
+static inline void crossrelease_hist_start(enum xhlock_context_t c) {}
 static inline void crossrelease_hist_end(enum xhlock_context_t c) {}
+static inline void lockdep_invariant_state(bool force) {}
 static inline void lockdep_init_task(struct task_struct *task) {}
 static inline void lockdep_free_task(struct task_struct *task) {}
 #endif /* CROSSRELEASE */
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index f73ca595b81e..44c8d0d17170 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4623,13 +4623,8 @@ asmlinkage __visible void lockdep_sys_exit(void)
 	/*
 	 * The lock history for each syscall should be independent. So wipe the
 	 * slate clean on return to userspace.
-	 *
-	 * crossrelease_hist_end() works well here even when getting here
-	 * without starting (i.e. just after forking), because it rolls back
-	 * the index to point to the last entry, which is already invalid.
 	 */
-	crossrelease_hist_end(XHLOCK_PROC);
-	crossrelease_hist_start(XHLOCK_PROC, false);
+	lockdep_invariant_state(false);
 }
 
 void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
@@ -4723,19 +4718,47 @@ static inline void invalidate_xhlock(struct hist_lock *xhlock)
 }
 
 /*
- * Lock history stacks; we have 3 nested lock history stacks:
+ * Lock history stacks; we have 2 nested lock history stacks:
  *
  *   HARD(IRQ)
  *   SOFT(IRQ)
- *   PROC(ess)
  *
  * The thing is that once we complete a HARD/SOFT IRQ the future task locks
  * should not depend on any of the locks observed while running the IRQ.  So
  * what we do is rewind the history buffer and erase all our knowledge of that
  * temporal event.
- *
- * The PROCess one is special though; it is used to annotate independence
- * inside a task.
+ */
+
+void crossrelease_hist_start(enum xhlock_context_t c)
+{
+	struct task_struct *cur = current;
+
+	if (!cur->xhlocks)
+		return;
+
+	cur->xhlock_idx_hist[c] = cur->xhlock_idx;
+	cur->hist_id_save[c]    = cur->hist_id;
+}
+
+void crossrelease_hist_end(enum xhlock_context_t c)
+{
+	struct task_struct *cur = current;
+
+	if (cur->xhlocks) {
+		unsigned int idx = cur->xhlock_idx_hist[c];
+		struct hist_lock *h = &xhlock(idx);
+
+		cur->xhlock_idx = idx;
+
+		/* Check if the ring was overwritten. */
+		if (h->hist_id != cur->hist_id_save[c])
+			invalidate_xhlock(h);
+	}
+}
+
+/*
+ * lockdep_invariant_state() is used to annotate independence inside a task, to
+ * make one task look like multiple independent 'tasks'.
  *
  * Take for instance workqueues; each work is independent of the last. The
  * completion of a future work does not depend on the completion of a past work
@@ -4758,40 +4781,14 @@ static inline void invalidate_xhlock(struct hist_lock *xhlock)
  * entry. Similarly, independence per-definition means it does not depend on
  * prior state.
  */
-void crossrelease_hist_start(enum xhlock_context_t c, bool force)
+void lockdep_invariant_state(bool force)
 {
-	struct task_struct *cur = current;
-
-	if (!cur->xhlocks)
-		return;
-
 	/*
 	 * We call this at an invariant point, no current state, no history.
+	 * Verify the former, enforce the latter.
 	 */
-	if (c == XHLOCK_PROC) {
-		/* verified the former, ensure the latter */
-		WARN_ON_ONCE(!force && cur->lockdep_depth);
-		invalidate_xhlock(&xhlock(cur->xhlock_idx));
-	}
-
-	cur->xhlock_idx_hist[c] = cur->xhlock_idx;
-	cur->hist_id_save[c]    = cur->hist_id;
-}
-
-void crossrelease_hist_end(enum xhlock_context_t c)
-{
-	struct task_struct *cur = current;
-
-	if (cur->xhlocks) {
-		unsigned int idx = cur->xhlock_idx_hist[c];
-		struct hist_lock *h = &xhlock(idx);
-
-		cur->xhlock_idx = idx;
-
-		/* Check if the ring was overwritten. */
-		if (h->hist_id != cur->hist_id_save[c])
-			invalidate_xhlock(h);
-	}
+	WARN_ON_ONCE(!force && current->lockdep_depth);
+	invalidate_xhlock(&xhlock(current->xhlock_idx));
 }
 
 static int cross_lock(struct lockdep_map *lock)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c0331891dec1..ab3c0dc8c7ed 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2094,8 +2094,8 @@ __acquires(&pool->lock)
 	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
 	/*
-	 * Strictly speaking we should do start(PROC) without holding any
-	 * locks, that is, before these two lock_map_acquire()'s.
+	 * Strictly speaking we should mark the invariant state without holding
+	 * any locks, that is, before these two lock_map_acquire()'s.
 	 *
 	 * However, that would result in:
 	 *
@@ -2107,14 +2107,14 @@ __acquires(&pool->lock)
 	 * Which would create W1->C->W1 dependencies, even though there is no
 	 * actual deadlock possible. There are two solutions, using a
 	 * read-recursive acquire on the work(queue) 'locks', but this will then
-	 * hit the lockdep limitation on recursive locks, or simly discard
+	 * hit the lockdep limitation on recursive locks, or simply discard
 	 * these locks.
 	 *
 	 * AFAICT there is no possible deadlock scenario between the
 	 * flush_work() and complete() primitives (except for single-threaded
 	 * workqueues), so hiding them isn't a problem.
 	 */
-	crossrelease_hist_start(XHLOCK_PROC, true);
+	lockdep_invariant_state(true);
 	trace_workqueue_execute_start(work);
 	worker->current_func(work);
 	/*
@@ -2122,7 +2122,6 @@ __acquires(&pool->lock)
 	 * point will only record its address.
 	 */
 	trace_workqueue_execute_end(work);
-	crossrelease_hist_end(XHLOCK_PROC);
 	lock_map_release(&lockdep_map);
 	lock_map_release(&pwq->wq->lockdep_map);
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-29  6:46   ` Byungchul Park
@ 2017-08-29  9:01     ` Peter Zijlstra
  2017-08-29 16:12       ` Byungchul Park
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29  9:01 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Tue, Aug 29, 2017 at 03:46:38PM +0900, Byungchul Park wrote:

> How can a maintainer choose such a work-around and avoid the problem
> rather than fix the root cause? I am very disappointed.

Time.. we need this sorted before we push the whole lot to Linus in the
next window. Fixing the recursive-read thing is far more work (although
Boqun did post some patches for that, which I still have to look at).

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [tip:locking/core] locking/lockdep: Untangle xhlock history save/restore from task independence
  2017-08-29  8:59         ` Peter Zijlstra
@ 2017-08-29 14:23           ` tip-bot for Peter Zijlstra
  2017-08-29 16:02           ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Byungchul Park
  2017-08-30  2:09           ` Byungchul Park
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot for Peter Zijlstra @ 2017-08-29 14:23 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: byungchul.park, mingo, tglx, linux-kernel, hpa, peterz, torvalds

Commit-ID:  f52be5708076b75a045ac52c6fef3fffb8300525
Gitweb:     http://git.kernel.org/tip/f52be5708076b75a045ac52c6fef3fffb8300525
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Tue, 29 Aug 2017 10:59:39 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 29 Aug 2017 15:14:38 +0200

locking/lockdep: Untangle xhlock history save/restore from task independence

Where XHLOCK_{SOFT,HARD} are save/restore points in the xhlocks[] to
ensure the temporal IRQ events don't interact with task state, the
XHLOCK_PROC is a fundamentally different beast that just happens to share
the interface.

The purpose of XHLOCK_PROC is to annotate independent execution inside
one task. For example workqueues, each work should appear to run in its
own 'pristine' 'task'.

Remove XHLOCK_PROC in favour of its own interface to avoid confusion.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: david@fromorbit.com
Cc: johannes@sipsolutions.net
Cc: kernel-team@lge.com
Cc: oleg@redhat.com
Cc: tj@kernel.org
Link: http://lkml.kernel.org/r/20170829085939.ggmb6xiohw67micb@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/irqflags.h |  4 +--
 include/linux/lockdep.h  |  7 +++--
 kernel/locking/lockdep.c | 79 +++++++++++++++++++++++-------------------------
 kernel/workqueue.c       |  9 +++---
 4 files changed, 48 insertions(+), 51 deletions(-)

diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index 9bc050b..5fdd93b 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -26,7 +26,7 @@
 # define trace_hardirq_enter()			\
 do {						\
 	current->hardirq_context++;		\
-	crossrelease_hist_start(XHLOCK_HARD, 0);\
+	crossrelease_hist_start(XHLOCK_HARD);	\
 } while (0)
 # define trace_hardirq_exit()			\
 do {						\
@@ -36,7 +36,7 @@ do {						\
 # define lockdep_softirq_enter()		\
 do {						\
 	current->softirq_context++;		\
-	crossrelease_hist_start(XHLOCK_SOFT, 0);\
+	crossrelease_hist_start(XHLOCK_SOFT);	\
 } while (0)
 # define lockdep_softirq_exit()			\
 do {						\
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 78bb713..bfa8e0b 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -551,7 +551,6 @@ struct pin_cookie { };
 enum xhlock_context_t {
 	XHLOCK_HARD,
 	XHLOCK_SOFT,
-	XHLOCK_PROC,
 	XHLOCK_CTX_NR,
 };
 
@@ -580,8 +579,9 @@ extern void lock_commit_crosslock(struct lockdep_map *lock);
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
 	{ .name = (_name), .key = (void *)(_key), .cross = 0, }
 
-extern void crossrelease_hist_start(enum xhlock_context_t c, bool force);
+extern void crossrelease_hist_start(enum xhlock_context_t c);
 extern void crossrelease_hist_end(enum xhlock_context_t c);
+extern void lockdep_invariant_state(bool force);
 extern void lockdep_init_task(struct task_struct *task);
 extern void lockdep_free_task(struct task_struct *task);
 #else /* !CROSSRELEASE */
@@ -593,8 +593,9 @@ extern void lockdep_free_task(struct task_struct *task);
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
 	{ .name = (_name), .key = (void *)(_key), }
 
-static inline void crossrelease_hist_start(enum xhlock_context_t c, bool force) {}
+static inline void crossrelease_hist_start(enum xhlock_context_t c) {}
 static inline void crossrelease_hist_end(enum xhlock_context_t c) {}
+static inline void lockdep_invariant_state(bool force) {}
 static inline void lockdep_init_task(struct task_struct *task) {}
 static inline void lockdep_free_task(struct task_struct *task) {}
 #endif /* CROSSRELEASE */
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index f73ca59..44c8d0d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4623,13 +4623,8 @@ asmlinkage __visible void lockdep_sys_exit(void)
 	/*
 	 * The lock history for each syscall should be independent. So wipe the
 	 * slate clean on return to userspace.
-	 *
-	 * crossrelease_hist_end() works well here even when getting here
-	 * without starting (i.e. just after forking), because it rolls back
-	 * the index to point to the last entry, which is already invalid.
 	 */
-	crossrelease_hist_end(XHLOCK_PROC);
-	crossrelease_hist_start(XHLOCK_PROC, false);
+	lockdep_invariant_state(false);
 }
 
 void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
@@ -4723,19 +4718,47 @@ static inline void invalidate_xhlock(struct hist_lock *xhlock)
 }
 
 /*
- * Lock history stacks; we have 3 nested lock history stacks:
+ * Lock history stacks; we have 2 nested lock history stacks:
  *
  *   HARD(IRQ)
  *   SOFT(IRQ)
- *   PROC(ess)
  *
  * The thing is that once we complete a HARD/SOFT IRQ the future task locks
  * should not depend on any of the locks observed while running the IRQ.  So
  * what we do is rewind the history buffer and erase all our knowledge of that
  * temporal event.
- *
- * The PROCess one is special though; it is used to annotate independence
- * inside a task.
+ */
+
+void crossrelease_hist_start(enum xhlock_context_t c)
+{
+	struct task_struct *cur = current;
+
+	if (!cur->xhlocks)
+		return;
+
+	cur->xhlock_idx_hist[c] = cur->xhlock_idx;
+	cur->hist_id_save[c]    = cur->hist_id;
+}
+
+void crossrelease_hist_end(enum xhlock_context_t c)
+{
+	struct task_struct *cur = current;
+
+	if (cur->xhlocks) {
+		unsigned int idx = cur->xhlock_idx_hist[c];
+		struct hist_lock *h = &xhlock(idx);
+
+		cur->xhlock_idx = idx;
+
+		/* Check if the ring was overwritten. */
+		if (h->hist_id != cur->hist_id_save[c])
+			invalidate_xhlock(h);
+	}
+}
+
+/*
+ * lockdep_invariant_state() is used to annotate independence inside a task, to
+ * make one task look like multiple independent 'tasks'.
  *
  * Take for instance workqueues; each work is independent of the last. The
  * completion of a future work does not depend on the completion of a past work
@@ -4758,40 +4781,14 @@ static inline void invalidate_xhlock(struct hist_lock *xhlock)
  * entry. Similarly, independence per-definition means it does not depend on
  * prior state.
  */
-void crossrelease_hist_start(enum xhlock_context_t c, bool force)
+void lockdep_invariant_state(bool force)
 {
-	struct task_struct *cur = current;
-
-	if (!cur->xhlocks)
-		return;
-
 	/*
 	 * We call this at an invariant point, no current state, no history.
+	 * Verify the former, enforce the latter.
 	 */
-	if (c == XHLOCK_PROC) {
-		/* verified the former, ensure the latter */
-		WARN_ON_ONCE(!force && cur->lockdep_depth);
-		invalidate_xhlock(&xhlock(cur->xhlock_idx));
-	}
-
-	cur->xhlock_idx_hist[c] = cur->xhlock_idx;
-	cur->hist_id_save[c]    = cur->hist_id;
-}
-
-void crossrelease_hist_end(enum xhlock_context_t c)
-{
-	struct task_struct *cur = current;
-
-	if (cur->xhlocks) {
-		unsigned int idx = cur->xhlock_idx_hist[c];
-		struct hist_lock *h = &xhlock(idx);
-
-		cur->xhlock_idx = idx;
-
-		/* Check if the ring was overwritten. */
-		if (h->hist_id != cur->hist_id_save[c])
-			invalidate_xhlock(h);
-	}
+	WARN_ON_ONCE(!force && current->lockdep_depth);
+	invalidate_xhlock(&xhlock(current->xhlock_idx));
 }
 
 static int cross_lock(struct lockdep_map *lock)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c033189..ab3c0dc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2094,8 +2094,8 @@ __acquires(&pool->lock)
 	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
 	/*
-	 * Strictly speaking we should do start(PROC) without holding any
-	 * locks, that is, before these two lock_map_acquire()'s.
+	 * Strictly speaking we should mark the invariant state without holding
+	 * any locks, that is, before these two lock_map_acquire()'s.
 	 *
 	 * However, that would result in:
 	 *
@@ -2107,14 +2107,14 @@ __acquires(&pool->lock)
 	 * Which would create W1->C->W1 dependencies, even though there is no
 	 * actual deadlock possible. There are two solutions, using a
 	 * read-recursive acquire on the work(queue) 'locks', but this will then
-	 * hit the lockdep limitation on recursive locks, or simly discard
+	 * hit the lockdep limitation on recursive locks, or simply discard
 	 * these locks.
 	 *
 	 * AFAICT there is no possible deadlock scenario between the
 	 * flush_work() and complete() primitives (except for single-threaded
 	 * workqueues), so hiding them isn't a problem.
 	 */
-	crossrelease_hist_start(XHLOCK_PROC, true);
+	lockdep_invariant_state(true);
 	trace_workqueue_execute_start(work);
 	worker->current_func(work);
 	/*
@@ -2122,7 +2122,6 @@ __acquires(&pool->lock)
 	 * point will only record its address.
 	 */
 	trace_workqueue_execute_end(work);
-	crossrelease_hist_end(XHLOCK_PROC);
 	lock_map_release(&lockdep_map);
 	lock_map_release(&pwq->wq->lockdep_map);
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-29  8:59         ` Peter Zijlstra
  2017-08-29 14:23           ` [tip:locking/core] locking/lockdep: Untangle xhlock history save/restore from task independence tip-bot for Peter Zijlstra
@ 2017-08-29 16:02           ` Byungchul Park
  2017-08-29 18:47             ` Peter Zijlstra
  2017-08-30  2:09           ` Byungchul Park
  2 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-29 16:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	johannes, oleg, linux-kernel, kernel-team

On Tue, Aug 29, 2017 at 5:59 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, Aug 25, 2017 at 10:11:14AM +0900, Byungchul Park wrote:
>> I meant that this seems to stem from your misunderstanding of
>> crossrelease_hist_{start, end}().
>
> I have, several times now, explained why PROC is special.

Rather, I have explained why it's not, more times than you have, and you
have not read my explanation. Anyway, I am seriously curious about
why. Of course, I remember you said "PROC is special", but not _why_.
I really want to know _why_ PROC (= each work) should be handled
differently from the others. Please show me an example, other than the
wq case, where you avoided the problem rather than fixing it.

> You seem to still think it can be used like the soft/hard-irq ones; this
> is fundamentally not so.

I wonder why, seriously.

>
> Does something like so help?
>
> ---
> Subject: lockdep: Untangle xhlock history save/restore from task independence
>
> Where XHLOCK_{SOFT,HARD} are save/restore points in the xhlocks[] to
> ensure the temporal IRQ events don't interact with task state, the
> XHLOCK_PROC is a fundamentally different beast that just happens to share
> the interface.
>
> The purpose of XHLOCK_PROC is to annotate independent execution inside
> one task. For example workqueues, each work should appear to run in its
> own 'pristine' 'task'.
>
> Remove XHLOCK_PROC in favour of its own interface to avoid confusion.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  include/linux/irqflags.h |  4 +--
>  include/linux/lockdep.h  |  7 +++--
>  kernel/locking/lockdep.c | 79 +++++++++++++++++++++++-------------------------
>  kernel/workqueue.c       |  9 +++---
>  4 files changed, 48 insertions(+), 51 deletions(-)
>
> diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
> index 9bc050bc81b2..5fdd93bb9300 100644
> --- a/include/linux/irqflags.h
> +++ b/include/linux/irqflags.h
> @@ -26,7 +26,7 @@
>  # define trace_hardirq_enter()                 \
>  do {                                           \
>         current->hardirq_context++;             \
> -       crossrelease_hist_start(XHLOCK_HARD, 0);\
> +       crossrelease_hist_start(XHLOCK_HARD);   \
>  } while (0)
>  # define trace_hardirq_exit()                  \
>  do {                                           \
> @@ -36,7 +36,7 @@ do {                                          \
>  # define lockdep_softirq_enter()               \
>  do {                                           \
>         current->softirq_context++;             \
> -       crossrelease_hist_start(XHLOCK_SOFT, 0);\
> +       crossrelease_hist_start(XHLOCK_SOFT);   \
>  } while (0)
>  # define lockdep_softirq_exit()                        \
>  do {                                           \
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index 78bb7133abed..bfa8e0b0d6f1 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -551,7 +551,6 @@ struct pin_cookie { };
>  enum xhlock_context_t {
>         XHLOCK_HARD,
>         XHLOCK_SOFT,
> -       XHLOCK_PROC,
>         XHLOCK_CTX_NR,
>  };
>
> @@ -580,8 +579,9 @@ extern void lock_commit_crosslock(struct lockdep_map *lock);
>  #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
>         { .name = (_name), .key = (void *)(_key), .cross = 0, }
>
> -extern void crossrelease_hist_start(enum xhlock_context_t c, bool force);
> +extern void crossrelease_hist_start(enum xhlock_context_t c);
>  extern void crossrelease_hist_end(enum xhlock_context_t c);
> +extern void lockdep_invariant_state(bool force);
>  extern void lockdep_init_task(struct task_struct *task);
>  extern void lockdep_free_task(struct task_struct *task);
>  #else /* !CROSSRELEASE */
> @@ -593,8 +593,9 @@ extern void lockdep_free_task(struct task_struct *task);
>  #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
>         { .name = (_name), .key = (void *)(_key), }
>
> -static inline void crossrelease_hist_start(enum xhlock_context_t c, bool force) {}
> +static inline void crossrelease_hist_start(enum xhlock_context_t c) {}
>  static inline void crossrelease_hist_end(enum xhlock_context_t c) {}
> +static inline void lockdep_invariant_state(bool force) {}
>  static inline void lockdep_init_task(struct task_struct *task) {}
>  static inline void lockdep_free_task(struct task_struct *task) {}
>  #endif /* CROSSRELEASE */
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index f73ca595b81e..44c8d0d17170 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -4623,13 +4623,8 @@ asmlinkage __visible void lockdep_sys_exit(void)
>         /*
>          * The lock history for each syscall should be independent. So wipe the
>          * slate clean on return to userspace.
> -        *
> -        * crossrelease_hist_end() works well here even when getting here
> -        * without starting (i.e. just after forking), because it rolls back
> -        * the index to point to the last entry, which is already invalid.
>          */
> -       crossrelease_hist_end(XHLOCK_PROC);
> -       crossrelease_hist_start(XHLOCK_PROC, false);
> +       lockdep_invariant_state(false);
>  }
>
>  void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
> @@ -4723,19 +4718,47 @@ static inline void invalidate_xhlock(struct hist_lock *xhlock)
>  }
>
>  /*
> - * Lock history stacks; we have 3 nested lock history stacks:
> + * Lock history stacks; we have 2 nested lock history stacks:
>   *
>   *   HARD(IRQ)
>   *   SOFT(IRQ)
> - *   PROC(ess)
>   *
>   * The thing is that once we complete a HARD/SOFT IRQ the future task locks
>   * should not depend on any of the locks observed while running the IRQ.  So
>   * what we do is rewind the history buffer and erase all our knowledge of that
>   * temporal event.
> - *
> - * The PROCess one is special though; it is used to annotate independence
> - * inside a task.
> + */
> +
> +void crossrelease_hist_start(enum xhlock_context_t c)
> +{
> +       struct task_struct *cur = current;
> +
> +       if (!cur->xhlocks)
> +               return;
> +
> +       cur->xhlock_idx_hist[c] = cur->xhlock_idx;
> +       cur->hist_id_save[c]    = cur->hist_id;
> +}
> +
> +void crossrelease_hist_end(enum xhlock_context_t c)
> +{
> +       struct task_struct *cur = current;
> +
> +       if (cur->xhlocks) {
> +               unsigned int idx = cur->xhlock_idx_hist[c];
> +               struct hist_lock *h = &xhlock(idx);
> +
> +               cur->xhlock_idx = idx;
> +
> +               /* Check if the ring was overwritten. */
> +               if (h->hist_id != cur->hist_id_save[c])
> +                       invalidate_xhlock(h);
> +       }
> +}
> +
> +/*
> + * lockdep_invariant_state() is used to annotate independence inside a task, to
> + * make one task look like multiple independent 'tasks'.
>   *
>   * Take for instance workqueues; each work is independent of the last. The
>   * completion of a future work does not depend on the completion of a past work
> @@ -4758,40 +4781,14 @@ static inline void invalidate_xhlock(struct hist_lock *xhlock)
>   * entry. Similarly, independence per-definition means it does not depend on
>   * prior state.
>   */
> -void crossrelease_hist_start(enum xhlock_context_t c, bool force)
> +void lockdep_invariant_state(bool force)
>  {
> -       struct task_struct *cur = current;
> -
> -       if (!cur->xhlocks)
> -               return;
> -
>         /*
>          * We call this at an invariant point, no current state, no history.
> +        * Verify the former, enforce the latter.
>          */
> -       if (c == XHLOCK_PROC) {
> -               /* verified the former, ensure the latter */
> -               WARN_ON_ONCE(!force && cur->lockdep_depth);
> -               invalidate_xhlock(&xhlock(cur->xhlock_idx));
> -       }
> -
> -       cur->xhlock_idx_hist[c] = cur->xhlock_idx;
> -       cur->hist_id_save[c]    = cur->hist_id;
> -}
> -
> -void crossrelease_hist_end(enum xhlock_context_t c)
> -{
> -       struct task_struct *cur = current;
> -
> -       if (cur->xhlocks) {
> -               unsigned int idx = cur->xhlock_idx_hist[c];
> -               struct hist_lock *h = &xhlock(idx);
> -
> -               cur->xhlock_idx = idx;
> -
> -               /* Check if the ring was overwritten. */
> -               if (h->hist_id != cur->hist_id_save[c])
> -                       invalidate_xhlock(h);
> -       }
> +       WARN_ON_ONCE(!force && current->lockdep_depth);
> +       invalidate_xhlock(&xhlock(current->xhlock_idx));
>  }
>
>  static int cross_lock(struct lockdep_map *lock)
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index c0331891dec1..ab3c0dc8c7ed 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2094,8 +2094,8 @@ __acquires(&pool->lock)
>         lock_map_acquire(&pwq->wq->lockdep_map);
>         lock_map_acquire(&lockdep_map);
>         /*
> -        * Strictly speaking we should do start(PROC) without holding any
> -        * locks, that is, before these two lock_map_acquire()'s.
> +        * Strictly speaking we should mark the invariant state without holding
> +        * any locks, that is, before these two lock_map_acquire()'s.
>          *
>          * However, that would result in:
>          *
> @@ -2107,14 +2107,14 @@ __acquires(&pool->lock)
>          * Which would create W1->C->W1 dependencies, even though there is no
>          * actual deadlock possible. There are two solutions, using a
>          * read-recursive acquire on the work(queue) 'locks', but this will then
> -        * hit the lockdep limitation on recursive locks, or simly discard
> +        * hit the lockdep limitation on recursive locks, or simply discard
>          * these locks.
>          *
>          * AFAICT there is no possible deadlock scenario between the
>          * flush_work() and complete() primitives (except for single-threaded
>          * workqueues), so hiding them isn't a problem.
>          */
> -       crossrelease_hist_start(XHLOCK_PROC, true);
> +       lockdep_invariant_state(true);
>         trace_workqueue_execute_start(work);
>         worker->current_func(work);
>         /*
> @@ -2122,7 +2122,6 @@ __acquires(&pool->lock)
>          * point will only record its address.
>          */
>         trace_workqueue_execute_end(work);
> -       crossrelease_hist_end(XHLOCK_PROC);
>         lock_map_release(&lockdep_map);
>         lock_map_release(&pwq->wq->lockdep_map);
>



-- 
Thanks,
Byungchul

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-29  9:01     ` Peter Zijlstra
@ 2017-08-29 16:12       ` Byungchul Park
  0 siblings, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-08-29 16:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	johannes, oleg, linux-kernel, kernel-team

On Tue, Aug 29, 2017 at 6:01 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, Aug 29, 2017 at 03:46:38PM +0900, Byungchul Park wrote:
>
>> How can a maintainer choose such a work-around and avoid the problem
>> rather than fix the root cause? I am very disappointed.
>
> Time.. we need this sorted before we push the whole lot to Linus in the
> next window. Fixing the recursive-read thing is far more work (although
> Boqun did post some patches for that, which I still have to look at).

As I said, it's not a question of whether read-acquire should be used or not;
of course I thought so at first. But now I see what the problems are, as I
already answered TJ. The work of making read-acquire behave correctly is
worthwhile in itself. But the problem we should focus on now is that the manual
acquire(work, wq) should be replaced with something else or removed.

-- 
Thanks,
Byungchul


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-29 16:02           ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Byungchul Park
@ 2017-08-29 18:47             ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29 18:47 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 01:02:39AM +0900, Byungchul Park wrote:
> On Tue, Aug 29, 2017 at 5:59 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Fri, Aug 25, 2017 at 10:11:14AM +0900, Byungchul Park wrote:
> >> I meant, this seems to be led from your mis-understanding of
> >> crossrelease_hist_{start, end}().
> >
> > I have, several times now, explained why PROC is special.
> 
I have explained why it's not, more times than you did, and you
have not read my explanation. Anyway, I am seriously curious about
why. Of course, I remember you said "PROC is special", but not _why_.
I really want to know _why_ PROC (= each work) should be handled
differently from the others.

It is a question of need. We don't need more.

At points where we hold no locks and know we don't depend on prior
state, we can simply throw away history to get what we want, independent
execution.

We don't lose anything by it, and it's simpler. It is not much different
from that work_id thing you had previously.

And it's fairly fundamental: at every site where we know prior state is
irrelevant, we must not hold any locks, because at that point you
explicitly throw away dependencies (like we do for the wq thing now).
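
To make the "throw away history" idea concrete, here is a toy model in plain Python (an illustration only, not kernel code: the class, method names, and list-based history are invented simplifications of the crossrelease machinery):

```python
class TaskHistory:
    """Toy model (not kernel code) of a per-task crossrelease history."""
    def __init__(self):
        self.xhlocks = []              # past acquisitions, oldest first

    def acquire(self, lock):
        self.xhlocks.append(lock)      # releasing does not erase history

    def invariant_state(self):
        # Analogous to declaring an invariant point: prior state is
        # irrelevant, so the history is discarded.
        self.xhlocks.clear()

    def complete(self, completion):
        # A crossrelease commit: every lock still in the history gains
        # a dependency completion -> lock.
        return [(completion, lock) for lock in self.xhlocks]

h = TaskHistory()
h.acquire("A")                         # taken during thread setup
h.invariant_state()                    # work entry: wipe prior state
h.acquire("B")                         # taken inside the work item
deps = h.complete("C")
assert deps == [("C", "B")]            # C -> B recorded; C -> A discarded
```

Under this model an invariant point is only safe when no locks are held at that moment, which is exactly the constraint stated in the paragraph above.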

> Please show me an example except wq case
> where you just tried to avoid problems than fix them.

wq isn't broken; the annotations there are fine. The interaction between
the existing annotations and crossrelease is unfortunate, but that
doesn't mean they're no good.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-29  8:59         ` Peter Zijlstra
  2017-08-29 14:23           ` [tip:locking/core] locking/lockdep: Untangle xhlock history save/restore from task independence tip-bot for Peter Zijlstra
  2017-08-29 16:02           ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Byungchul Park
@ 2017-08-30  2:09           ` Byungchul Park
  2017-08-30  7:41             ` Byungchul Park
  2 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-30  2:09 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Tue, Aug 29, 2017 at 10:59:39AM +0200, Peter Zijlstra wrote:
> Subject: lockdep: Untangle xhlock history save/restore from task independence
> 
> Where XHLOCK_{SOFT,HARD} are save/restore points in the xhlocks[] to
> ensure the temporal IRQ events don't interact with task state, the
> XHLOCK_PROC is a fundament different beast that just happens to share
> the interface.
> 
> The purpose of XHLOCK_PROC is to annotate independent execution inside
> one task. For example workqueues, each work should appear to run in its
> own 'pristine' 'task'.
> 
> Remove XHLOCK_PROC in favour of its own interface to avoid confusion.

Much better to me than the patch you did previously, but see below.

> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index c0331891dec1..ab3c0dc8c7ed 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2107,14 +2107,14 @@ __acquires(&pool->lock)
>  	 * Which would create W1->C->W1 dependencies, even though there is no
>  	 * actual deadlock possible. There are two solutions, using a
>  	 * read-recursive acquire on the work(queue) 'locks', but this will then
> -	 * hit the lockdep limitation on recursive locks, or simly discard
> +	 * hit the lockdep limitation on recursive locks, or simply discard
>  	 * these locks.
>  	 *
>  	 * AFAICT there is no possible deadlock scenario between the
>  	 * flush_work() and complete() primitives (except for single-threaded
>  	 * workqueues), so hiding them isn't a problem.
>  	 */
> -	crossrelease_hist_start(XHLOCK_PROC, true);
> +	lockdep_invariant_state(true);

This is what I am always curious about. It would be ok if you agreed to
removing this work-around after fixing the acquire things in wq. But you
keep saying this is essential.

You should focus on what the dependencies actually are, rather than
separating contexts unnecessarily. Of course, we have to do it for each
work, _BUT_ not between the outside of a work and each work, since there
certainly might be dependencies between them.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  2:09           ` Byungchul Park
@ 2017-08-30  7:41             ` Byungchul Park
  2017-08-30  8:53               ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-30  7:41 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 11:09:53AM +0900, Byungchul Park wrote:
> On Tue, Aug 29, 2017 at 10:59:39AM +0200, Peter Zijlstra wrote:
> > Subject: lockdep: Untangle xhlock history save/restore from task independence
> > 
> > Where XHLOCK_{SOFT,HARD} are save/restore points in the xhlocks[] to
> > ensure the temporal IRQ events don't interact with task state, the
> > XHLOCK_PROC is a fundament different beast that just happens to share
> > the interface.
> > 
> > The purpose of XHLOCK_PROC is to annotate independent execution inside
> > one task. For example workqueues, each work should appear to run in its
> > own 'pristine' 'task'.
> > 
> > Remove XHLOCK_PROC in favour of its own interface to avoid confusion.
> 
> Much better to me than the patch you did previously, but see below.
> 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index c0331891dec1..ab3c0dc8c7ed 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -2107,14 +2107,14 @@ __acquires(&pool->lock)
> >  	 * Which would create W1->C->W1 dependencies, even though there is no
> >  	 * actual deadlock possible. There are two solutions, using a
> >  	 * read-recursive acquire on the work(queue) 'locks', but this will then
> > -	 * hit the lockdep limitation on recursive locks, or simly discard
> > +	 * hit the lockdep limitation on recursive locks, or simply discard
> >  	 * these locks.
> >  	 *
> >  	 * AFAICT there is no possible deadlock scenario between the
> >  	 * flush_work() and complete() primitives (except for single-threaded
> >  	 * workqueues), so hiding them isn't a problem.
> >  	 */
> > -	crossrelease_hist_start(XHLOCK_PROC, true);
> > +	lockdep_invariant_state(true);
> 
> This is what I am always curious about. It would be ok if you agree with
> removing this work-around after fixing acquire things in wq. But, you
> keep to say this is essencial.
> 
> You should focus on what dependencies actually are, than saparating
> contexts unnecessarily. Of course, we have to do it for each work, _BUT_
> not between outside of work and each work since there might be
> dependencies between them certainly.

You have never answered it. I'm curious about your answer. If you can't,
I think you have to revert all your patches. All of them are wrong.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  7:41             ` Byungchul Park
@ 2017-08-30  8:53               ` Peter Zijlstra
  2017-08-30  9:01                 ` Byungchul Park
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-30  8:53 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 04:41:17PM +0900, Byungchul Park wrote:
> On Wed, Aug 30, 2017 at 11:09:53AM +0900, Byungchul Park wrote:

> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > ---
> > > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > > index c0331891dec1..ab3c0dc8c7ed 100644
> > > --- a/kernel/workqueue.c
> > > +++ b/kernel/workqueue.c
> > > @@ -2107,14 +2107,14 @@ __acquires(&pool->lock)
> > >  	 * Which would create W1->C->W1 dependencies, even though there is no
> > >  	 * actual deadlock possible. There are two solutions, using a
> > >  	 * read-recursive acquire on the work(queue) 'locks', but this will then
> > > -	 * hit the lockdep limitation on recursive locks, or simly discard
> > > +	 * hit the lockdep limitation on recursive locks, or simply discard
> > >  	 * these locks.
> > >  	 *
> > >  	 * AFAICT there is no possible deadlock scenario between the
> > >  	 * flush_work() and complete() primitives (except for single-threaded
> > >  	 * workqueues), so hiding them isn't a problem.
> > >  	 */
> > > -	crossrelease_hist_start(XHLOCK_PROC, true);
> > > +	lockdep_invariant_state(true);
> > 
> > This is what I am always curious about. It would be ok if you agree with
> > removing this work-around after fixing acquire things in wq. But, you
> > keep to say this is essencial.
> > 
> > You should focus on what dependencies actually are, than saparating
> > contexts unnecessarily. Of course, we have to do it for each work, _BUT_
> > not between outside of work and each work since there might be
> > dependencies between them certainly.
> 
> You have never answered it. I'm curious about your answer. If you can't,
> I think you have to revert all your patches. All yours are wrong.

Because I don't understand what you're on about. And my patches actually
work.


* RE: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  8:53               ` Peter Zijlstra
@ 2017-08-30  9:01                 ` Byungchul Park
  2017-08-30  9:12                   ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-30  9:01 UTC (permalink / raw)
  To: 'Peter Zijlstra'
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@infradead.org]
> Sent: Wednesday, August 30, 2017 5:54 PM
> To: Byungchul Park
> Cc: mingo@kernel.org; tj@kernel.org; boqun.feng@gmail.com;
> david@fromorbit.com; johannes@sipsolutions.net; oleg@redhat.com; linux-
> kernel@vger.kernel.org; kernel-team@lge.com
> Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
> 
> On Wed, Aug 30, 2017 at 04:41:17PM +0900, Byungchul Park wrote:
> > On Wed, Aug 30, 2017 at 11:09:53AM +0900, Byungchul Park wrote:
> 
> > > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > > ---
> > > > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > > > index c0331891dec1..ab3c0dc8c7ed 100644
> > > > --- a/kernel/workqueue.c
> > > > +++ b/kernel/workqueue.c
> > > > @@ -2107,14 +2107,14 @@ __acquires(&pool->lock)
> > > >  	 * Which would create W1->C->W1 dependencies, even though there is no
> > > >  	 * actual deadlock possible. There are two solutions, using a
> > > >  	 * read-recursive acquire on the work(queue) 'locks', but this will then
> > > > -	 * hit the lockdep limitation on recursive locks, or simly discard
> > > > +	 * hit the lockdep limitation on recursive locks, or simply discard
> > > >  	 * these locks.
> > > >  	 *
> > > >  	 * AFAICT there is no possible deadlock scenario between the
> > > >  	 * flush_work() and complete() primitives (except for single-threaded
> > > >  	 * workqueues), so hiding them isn't a problem.
> > > >  	 */
> > > > -	crossrelease_hist_start(XHLOCK_PROC, true);
> > > > +	lockdep_invariant_state(true);
> > >
> > > This is what I am always curious about. It would be ok if you agree with
> > > removing this work-around after fixing acquire things in wq. But, you
> > > keep to say this is essencial.
> > >
> > > You should focus on what dependencies actually are, than saparating
> > > contexts unnecessarily. Of course, we have to do it for each work, _BUT_
> > > not between outside of work and each work since there might be
> > > dependencies between them certainly.
> >
> > You have never answered it. I'm curious about your answer. If you can't,
> > I think you have to revert all your patches. All yours are wrong.
> 
> Because I don't understand what you're on about. And my patches actually
> work.

My point is that we inevitably lose valuable dependencies with yours. That's
why I've endlessly asked you 'do you have any reason for trying those patches?'
a ton of times. And you have never answered it.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  9:01                 ` Byungchul Park
@ 2017-08-30  9:12                   ` Peter Zijlstra
  2017-08-30  9:14                     ` Peter Zijlstra
  2017-08-30  9:24                     ` Byungchul Park
  0 siblings, 2 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-30  9:12 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 06:01:59PM +0900, Byungchul Park wrote:
> My point is that we inevitably lose valuable dependencies by yours. That's
> why I've endlessly asked you 'do you have any reason you try those patches?'
> a ton of times. And you have never answered it.

The only dependencies that are lost are those between the first work and
the setup of the workqueue thread.

And there obviously _should_ not be any dependencies between those. A
work should not depend on the setup of the thread.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  9:12                   ` Peter Zijlstra
@ 2017-08-30  9:14                     ` Peter Zijlstra
  2017-08-30  9:35                       ` Byungchul Park
  2017-08-30  9:24                     ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-30  9:14 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 11:12:23AM +0200, Peter Zijlstra wrote:
> On Wed, Aug 30, 2017 at 06:01:59PM +0900, Byungchul Park wrote:
> > My point is that we inevitably lose valuable dependencies by yours. That's
> > why I've endlessly asked you 'do you have any reason you try those patches?'
> > a ton of times. And you have never answered it.
> 
> The only dependencies that are lost are those between the first work and
> the setup of the workqueue thread.
> 
> And there obviously _should_ not be any dependencies between those. A
> work should not depend on the setup of the thread.

Furthermore, the save/restore can't preserve those dependencies. The
moment a work exhausts xhlocks[], they are gone. So by assuming the first
work _will_ exhaust the history, there is effectively nothing lost.
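
That exhaustion argument can be sketched with a toy ring buffer in plain Python (the buffer size and all names here are invented for illustration; the real xhlocks[] bookkeeping is more involved):

```python
SIZE = 4                                 # small ring for illustration

class XhlockRing:
    """Toy model of the xhlocks[] per-task history ring buffer."""
    def __init__(self):
        self.buf = [None] * SIZE
        self.idx = 0

    def record(self, lock):
        self.buf[self.idx % SIZE] = lock # newest entry overwrites oldest
        self.idx += 1

    def live(self):
        # Entries that have not yet been overwritten, oldest first.
        start = max(0, self.idx - SIZE)
        return [self.buf[i % SIZE] for i in range(start, self.idx)]

ring = XhlockRing()
ring.record("setup_lock")                # recorded before the work item ran
for i in range(SIZE):                    # the work records SIZE acquisitions
    ring.record(f"work_lock_{i}")        # ...and wraps the ring

# The pre-work entry has been overwritten; no save/restore of the ring
# position can bring it back.
assert "setup_lock" not in ring.live()
```

Once a single work item wraps the ring like this, anything recorded before the work started is unrecoverable regardless of where a save point was taken.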


* RE: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  9:12                   ` Peter Zijlstra
  2017-08-30  9:14                     ` Peter Zijlstra
@ 2017-08-30  9:24                     ` Byungchul Park
  2017-08-30 11:25                       ` Byungchul Park
  2017-08-31  8:07                       ` Peter Zijlstra
  1 sibling, 2 replies; 61+ messages in thread
From: Byungchul Park @ 2017-08-30  9:24 UTC (permalink / raw)
  To: 'Peter Zijlstra'
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@infradead.org]
> Sent: Wednesday, August 30, 2017 6:12 PM
> To: Byungchul Park
> Cc: mingo@kernel.org; tj@kernel.org; boqun.feng@gmail.com;
> david@fromorbit.com; johannes@sipsolutions.net; oleg@redhat.com; linux-
> kernel@vger.kernel.org; kernel-team@lge.com
> Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
> 
> On Wed, Aug 30, 2017 at 06:01:59PM +0900, Byungchul Park wrote:
> > My point is that we inevitably lose valuable dependencies by yours. That's
> > why I've endlessly asked you 'do you have any reason you try those patches?'
> > a ton of times. And you have never answered it.
> 
> The only dependencies that are lost are those between the first work and
> the setup of the workqueue thread.
> 
> And there obviously _should_ not be any dependencies between those. A

100% right. Since there obviously should not be any, it would be better
to check for them. So I've endlessly asked you 'do you have any reason for
removing the opportunity for that check?'. Overhead? A logical problem? Or
do you want to believe the workqueue setup code will be perfect forever? I
mean, is it a problem if we check them?

> work should not depend on the setup of the thread.

100% right.


* RE: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  9:14                     ` Peter Zijlstra
@ 2017-08-30  9:35                       ` Byungchul Park
  0 siblings, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-08-30  9:35 UTC (permalink / raw)
  To: 'Peter Zijlstra'
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@infradead.org]
> Sent: Wednesday, August 30, 2017 6:14 PM
> To: Byungchul Park
> Cc: mingo@kernel.org; tj@kernel.org; boqun.feng@gmail.com;
> david@fromorbit.com; johannes@sipsolutions.net; oleg@redhat.com; linux-
> kernel@vger.kernel.org; kernel-team@lge.com
> Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
> 
> On Wed, Aug 30, 2017 at 11:12:23AM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 30, 2017 at 06:01:59PM +0900, Byungchul Park wrote:
> > > My point is that we inevitably lose valuable dependencies by yours. That's
> > > why I've endlessly asked you 'do you have any reason you try those patches?'
> > > a ton of times. And you have never answered it.
> >
> > The only dependencies that are lost are those between the first work and
> > the setup of the workqueue thread.
> >
> > And there obviously _should_ not be any dependencies between those. A
> > work should not depend on the setup of the thread.
> 
> Furthermore, the save/restore can't preserve those dependencies. The
> moment a work exhausts xhlocks[] they are gone. So by assuming the first

They are gone only _once_, when they have been overwritten, and are
recovered the next time around with the original code. But you made them
unrecoverable even the next time, and we lose all the valuable
dependencies unconditionally.

> work _will_ exhaust the history there is effectively nothing lost.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  9:24                     ` Byungchul Park
@ 2017-08-30 11:25                       ` Byungchul Park
  2017-08-30 12:49                         ` Byungchul Park
                                           ` (2 more replies)
  2017-08-31  8:07                       ` Peter Zijlstra
  1 sibling, 3 replies; 61+ messages in thread
From: Byungchul Park @ 2017-08-30 11:25 UTC (permalink / raw)
  To: 'Peter Zijlstra'
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 06:24:39PM +0900, Byungchul Park wrote:
> > -----Original Message-----
> > From: Peter Zijlstra [mailto:peterz@infradead.org]
> > Sent: Wednesday, August 30, 2017 6:12 PM
> > To: Byungchul Park
> > Cc: mingo@kernel.org; tj@kernel.org; boqun.feng@gmail.com;
> > david@fromorbit.com; johannes@sipsolutions.net; oleg@redhat.com; linux-
> > kernel@vger.kernel.org; kernel-team@lge.com
> > Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
> > 
> > On Wed, Aug 30, 2017 at 06:01:59PM +0900, Byungchul Park wrote:
> > > My point is that we inevitably lose valuable dependencies by yours. That's
> > > why I've endlessly asked you 'do you have any reason you try those patches?'
> > > a ton of times. And you have never answered it.
> > 
> > The only dependencies that are lost are those between the first work and
> > the setup of the workqueue thread.
> > 
> > And there obviously _should_ not be any dependencies between those. A
> 
> 100% right. Since there obviously should not be any, it would be better
> to check them. So I've endlessly asked you 'do you have any reason removing
> the opportunity for that check?'. Overhead? Logical problem? Or want to
> believe workqueue setup code perfect forever? I mean, is it a problem if we
> check them?
> 
> > work should not depend on the setup of the thread.
> 
> 100% right.

For example - I'm giving you the same example repeatedly:

context X                 context Y
---------                 ---------
                          wait_for_completion(C)
acquire(A)
process_one_work()
   acquire(B)
   work->fn()
      complete(C)

Please check C->A and C->B.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30 11:25                       ` Byungchul Park
@ 2017-08-30 12:49                         ` Byungchul Park
  2017-08-31  7:26                         ` Byungchul Park
  2017-08-31  8:04                         ` Peter Zijlstra
  2 siblings, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-08-30 12:49 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Peter Zijlstra, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 8:25 PM, Byungchul Park <byungchul.park@lge.com> wrote:
> On Wed, Aug 30, 2017 at 06:24:39PM +0900, Byungchul Park wrote:
>> > -----Original Message-----
>> > From: Peter Zijlstra [mailto:peterz@infradead.org]
>> > Sent: Wednesday, August 30, 2017 6:12 PM
>> > To: Byungchul Park
>> > Cc: mingo@kernel.org; tj@kernel.org; boqun.feng@gmail.com;
>> > david@fromorbit.com; johannes@sipsolutions.net; oleg@redhat.com; linux-
>> > kernel@vger.kernel.org; kernel-team@lge.com
>> > Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
>> >
>> > On Wed, Aug 30, 2017 at 06:01:59PM +0900, Byungchul Park wrote:
>> > > My point is that we inevitably lose valuable dependencies by yours. That's
>> > > why I've endlessly asked you 'do you have any reason you try those patches?'
>> > > a ton of times. And you have never answered it.
>> >
>> > The only dependencies that are lost are those between the first work and
>> > the setup of the workqueue thread.
>> >
>> > And there obviously _should_ not be any dependencies between those. A
>>
>> 100% right. Since there obviously should not be any, it would be better
>> to check them. So I've endlessly asked you 'do you have any reason removing
>> the opportunity for that check?'. Overhead? Logical problem? Or want to
>> believe workqueue setup code perfect forever? I mean, is it a problem if we
>> check them?
>>
>> > work should not depend on the setup of the thread.
>>
>> 100% right.
>
> For example - I'm giving you the same example repeatedly:
>
> context X                 context Y
> ---------                 ---------
>                           wait_for_completion(C)
> acquire(A)
> process_one_work()
>    acquire(B)
>    work->fn()
>       complete(C)
>
> Please check C->A and C->B.

s/check/let lockdep check/


-- 
Thanks,
Byungchul


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30 11:25                       ` Byungchul Park
  2017-08-30 12:49                         ` Byungchul Park
@ 2017-08-31  7:26                         ` Byungchul Park
  2017-08-31  8:04                         ` Peter Zijlstra
  2 siblings, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-08-31  7:26 UTC (permalink / raw)
  To: 'Peter Zijlstra'
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 08:25:46PM +0900, Byungchul Park wrote:
> On Wed, Aug 30, 2017 at 06:24:39PM +0900, Byungchul Park wrote:
> > > -----Original Message-----
> > > From: Peter Zijlstra [mailto:peterz@infradead.org]
> > > Sent: Wednesday, August 30, 2017 6:12 PM
> > > To: Byungchul Park
> > > Cc: mingo@kernel.org; tj@kernel.org; boqun.feng@gmail.com;
> > > david@fromorbit.com; johannes@sipsolutions.net; oleg@redhat.com; linux-
> > > kernel@vger.kernel.org; kernel-team@lge.com
> > > Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
> > > 
> > > On Wed, Aug 30, 2017 at 06:01:59PM +0900, Byungchul Park wrote:
> > > > My point is that we inevitably lose valuable dependencies by yours. That's
> > > > why I've endlessly asked you 'do you have any reason you try those patches?'
> > > > a ton of times. And you have never answered it.
> > > 
> > > The only dependencies that are lost are those between the first work and
> > > the setup of the workqueue thread.
> > > 
> > > And there obviously _should_ not be any dependencies between those. A
> > 
> > 100% right. Since there obviously should not be any, it would be better
> > to check them. So I've endlessly asked you 'do you have any reason removing
> > the opportunity for that check?'. Overhead? Logical problem? Or want to
> > believe workqueue setup code perfect forever? I mean, is it a problem if we
> > check them?
> > 
> > > work should not depend on the setup of the thread.
> > 
> > 100% right.
> 
> For example - I'm giving you the same example repeatedly:
> 
> context X                 context Y
> ---------                 ---------
>                           wait_for_completion(C)
> acquire(A)
> process_one_work()
>    acquire(B)
>    work->fn()
>       complete(C)
> 
> Please let lockdep check C->A and C->B.

You always stop answering whenever I ask for your opinion on this
example. I'm really curious. Could you let me know your opinion about
this example?


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30 11:25                       ` Byungchul Park
  2017-08-30 12:49                         ` Byungchul Park
  2017-08-31  7:26                         ` Byungchul Park
@ 2017-08-31  8:04                         ` Peter Zijlstra
  2017-08-31  8:15                           ` Byungchul Park
  2 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-31  8:04 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 08:25:46PM +0900, Byungchul Park wrote:

> For example - I'm giving you the same example repeatedly:
> 
> context X                 context Y
> ---------                 ---------
>                           wait_for_completion(C)
> acquire(A)
> process_one_work()
>    acquire(B)
>    work->fn()
>       complete(C)
> 
> Please check C->A and C->B.

Is there a caller of process_one_work() that holds a lock?


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-30  9:24                     ` Byungchul Park
  2017-08-30 11:25                       ` Byungchul Park
@ 2017-08-31  8:07                       ` Peter Zijlstra
  1 sibling, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-31  8:07 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Wed, Aug 30, 2017 at 06:24:39PM +0900, Byungchul Park wrote:
> > And there obviously _should_ not be any dependencies between those. A
> 
> 100% right. Since there obviously should not be any, it would be better
> to check them. So I've endlessly asked you 'do you have any reason removing
> the opportunity for that check?'. Overhead? Logical problem? Or want to
> believe workqueue setup code perfect forever? I mean, is it a problem if we
> check them?

Nothing is perfect, and cross-release less than others. Covering that
case for the first few works really isn't worth it.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-31  8:04                         ` Peter Zijlstra
@ 2017-08-31  8:15                           ` Byungchul Park
  2017-08-31  8:34                             ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-08-31  8:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Thu, Aug 31, 2017 at 10:04:42AM +0200, Peter Zijlstra wrote:
> On Wed, Aug 30, 2017 at 08:25:46PM +0900, Byungchul Park wrote:
> 
> > For example - I'm giving you the same example repeatedly:
> > 
> > context X                 context Y
> > ---------                 ---------
> >                           wait_for_completion(C)
> > acquire(A)
> > process_one_work()
> >    acquire(B)
> >    work->fn()
> >       complete(C)
> > 
> > Please check C->A and C->B.
> 
> Is there a caller of process_one_work() that holds a lock?

It's not important. Ok, check the following, instead:

context X                 context Y
---------                 ---------
                          wait_for_completion(C)
acquire(A)
release(A)
process_one_work()
   acquire(B)
   release(B)
   work->fn()
      complete(C)

We don't need to lose C->A and C->B dependencies unnecessarily.
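
A toy replay of this example in plain Python (an illustration only, under the assumption that crossrelease builds dependencies from the acquisition *history*, so locks already released before complete(C) still count; the function and event names are invented):

```python
def deps_for_complete(events, invalidate_at_work_start):
    """Replay acquire/release/work-start events and return the
    dependencies a complete(C) would record at the end.
    Crossrelease dependencies come from the acquisition history,
    so releasing a lock does not remove it."""
    history = []
    for ev, arg in events:
        if ev == "acquire":
            history.append(arg)
        elif ev == "release":
            pass                        # release does not erase history
        elif ev == "work_start" and invalidate_at_work_start:
            history.clear()             # history invalidated at work entry
    return {("C", lock) for lock in history}

events = [("acquire", "A"), ("release", "A"),
          ("work_start", None),
          ("acquire", "B"), ("release", "B")]

# Without invalidation both dependencies are recorded...
assert deps_for_complete(events, False) == {("C", "A"), ("C", "B")}
# ...with invalidation at work entry, C -> A is lost.
assert deps_for_complete(events, True) == {("C", "B")}
```

This is the crux of the disagreement: invalidating the history at work entry keeps C->B but drops C->A, and the question is whether C->A is worth preserving.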


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-31  8:15                           ` Byungchul Park
@ 2017-08-31  8:34                             ` Peter Zijlstra
  2017-09-01  2:05                               ` Byungchul Park
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-31  8:34 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Thu, Aug 31, 2017 at 05:15:01PM +0900, Byungchul Park wrote:
> It's not important. Ok, check the following, instead:
> 
> context X                 context Y
> ---------                 ---------
>                           wait_for_completion(C)
> acquire(A)
> release(A)
> process_one_work()
>    acquire(B)
>    release(B)
>    work->fn()
>       complete(C)
> 
> We don't need to lose C->A and C->B dependencies unnecessarily.

I really can't be arsed about them. It's really only the first few works
that will retain that dependency anyway, even if you were to retain
them.

All of that is contained in kernel/kthread and kernel/workqueue and can
be audited if needed. It's a very limited amount of code.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-08-31  8:34                             ` Peter Zijlstra
@ 2017-09-01  2:05                               ` Byungchul Park
  2017-09-01  9:47                                 ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-09-01  2:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Thu, Aug 31, 2017 at 10:34:53AM +0200, Peter Zijlstra wrote:
> On Thu, Aug 31, 2017 at 05:15:01PM +0900, Byungchul Park wrote:
> > It's not important. Ok, check the following, instead:
> > 
> > context X                 context Y
> > ---------                 ---------
> >                           wait_for_completion(C)
> > acquire(A)
> > release(A)
> > process_one_work()
> >    acquire(B)
> >    release(B)
> >    work->fn()
> >       complete(C)
> > 
> > We don't need to lose C->A and C->B dependencies unnecessarily.
> 
> I really can't be arsed about them. Its really only the first few works
> that will retain that dependency anyway, even if you were to retain
> them.

Wrong.

Every 'work' doing complete() on a different class of completion
variable suffers from losing valuable dependencies, every time, not
just the first few.

Remember, we are talking about dependencies wrt the cross-lock, not between
_holding_ locks. If you invalidate the xhlocks whenever work->fn() runs, we
cannot build dependencies like C->A and C->B every time. Right?

> All of that is contained in kernel/kthread and kernel/workqueue and can
> be audited if needed. Its a very limited amount of code.

I mean, doing it automatically w/o additional overhead is better than
manually auditing that limited amount of code every time the kernel
code changes. Do as you please.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-01  2:05                               ` Byungchul Park
@ 2017-09-01  9:47                                 ` Peter Zijlstra
  2017-09-01 10:16                                   ` Byungchul Park
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-01  9:47 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Fri, Sep 01, 2017 at 11:05:12AM +0900, Byungchul Park wrote:
> On Thu, Aug 31, 2017 at 10:34:53AM +0200, Peter Zijlstra wrote:
> > On Thu, Aug 31, 2017 at 05:15:01PM +0900, Byungchul Park wrote:
> > > It's not important. Ok, check the following, instead:
> > > 
> > > context X                 context Y
> > > ---------                 ---------
> > >                           wait_for_completion(C)
> > > acquire(A)
> > > release(A)
> > > process_one_work()
> > >    acquire(B)
> > >    release(B)
> > >    work->fn()
> > >       complete(C)
> > > 
> > > We don't need to lose C->A and C->B dependencies unnecessarily.
> > 
> > I really can't be arsed about them. Its really only the first few works
> > that will retain that dependency anyway, even if you were to retain
> > them.
> 
> Wrong.
> 
> Every 'work' doing complete() for different classes of completion
> variable suffers from losing valuable dependencies, every time, not
> first few ones.

The moment you overrun the history array it's gone. So yes, only the
first few works will ever see them.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-01  9:47                                 ` Peter Zijlstra
@ 2017-09-01 10:16                                   ` Byungchul Park
  2017-09-01 12:09                                     ` 박병철/선임연구원/SW Platform(연)AOT팀(byungchul.park@lge.com)
  2017-09-01 12:38                                     ` Peter Zijlstra
  0 siblings, 2 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-01 10:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Fri, Sep 01, 2017 at 11:47:47AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 01, 2017 at 11:05:12AM +0900, Byungchul Park wrote:
> > On Thu, Aug 31, 2017 at 10:34:53AM +0200, Peter Zijlstra wrote:
> > > On Thu, Aug 31, 2017 at 05:15:01PM +0900, Byungchul Park wrote:
> > > > It's not important. Ok, check the following, instead:
> > > > 
> > > > context X                 context Y
> > > > ---------                 ---------
> > > >                           wait_for_completion(C)
> > > > acquire(A)
> > > > release(A)
> > > > process_one_work()
> > > >    acquire(B)
> > > >    release(B)
> > > >    work->fn()
> > > >       complete(C)
> > > > 
> > > > We don't need to lose C->A and C->B dependencies unnecessarily.
> > > 
> > > I really can't be arsed about them. Its really only the first few works
> > > that will retain that dependency anyway, even if you were to retain
> > > them.
> > 
> > Wrong.
> > 
> > Every 'work' doing complete() for different classes of completion
> > variable suffers from losing valuable dependencies, every time, not
> > first few ones.
> 
> The moment you overrun the history array its gone. So yes, only the

It would be gone _only_ at the time the history overruns, and then it
will be built again. So, you are wrong.

Let me show you an example: (I hope you also show examples.)

context X                  context Y
---------                  ---------
                           wait_for_completion(D)
while (true)
   acquire(A)
   release(A)
   process_one_work()
      acquire(B)
      release(B)
      work->fn()
         complete(C)
   acquire(D)
   release(D)

When an overrun happens in a 'work', 'A' and 'B' will be gone _only_
at that time, and then 'D', 'A' and 'B' will be queued into the xhlock
array *again* from the next loop on, and they can be used to generate useful
dependencies again.

You are confused. The acquisitions we are focusing on are not
_stacked_ like hlocks, but _accumulated_ continuously onto the ring
buffer, i.e. the xhlock array.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-01 10:16                                   ` Byungchul Park
@ 2017-09-01 12:09                                     ` 박병철/선임연구원/SW Platform(연)AOT팀(byungchul.park@lge.com)
  2017-09-01 12:38                                     ` Peter Zijlstra
  1 sibling, 0 replies; 61+ messages in thread
From: 박병철/선임연구원/SW Platform(연)AOT팀(byungchul.park@lge.com) @ 2017-09-01 12:09 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

> -----Original Message-----
> From: Byungchul Park [mailto:byungchul.park@lge.com]
> Sent: Friday, September 01, 2017 7:16 PM
> To: Peter Zijlstra
> Cc: mingo@kernel.org; tj@kernel.org; boqun.feng@gmail.com;
> david@fromorbit.com; johannes@sipsolutions.net; oleg@redhat.com; linux-
> kernel@vger.kernel.org; kernel-team@lge.com
> Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
> 
> On Fri, Sep 01, 2017 at 11:47:47AM +0200, Peter Zijlstra wrote:
> > On Fri, Sep 01, 2017 at 11:05:12AM +0900, Byungchul Park wrote:
> > > On Thu, Aug 31, 2017 at 10:34:53AM +0200, Peter Zijlstra wrote:
> > > > On Thu, Aug 31, 2017 at 05:15:01PM +0900, Byungchul Park wrote:
> > > > > It's not important. Ok, check the following, instead:
> > > > >
> > > > > context X                 context Y
> > > > > ---------                 ---------
> > > > >                           wait_for_completion(C)
> > > > > acquire(A)
> > > > > release(A)
> > > > > process_one_work()
> > > > >    acquire(B)
> > > > >    release(B)
> > > > >    work->fn()
> > > > >       complete(C)
> > > > >
> > > > > We don't need to lose C->A and C->B dependencies unnecessarily.
> > > >
> > > > I really can't be arsed about them. Its really only the first few
> works
> > > > that will retain that dependency anyway, even if you were to retain
> > > > them.
> > >
> > > Wrong.
> > >
> > > Every 'work' doing complete() for different classes of completion
> > > variable suffers from losing valuable dependencies, every time, not
> > > first few ones.
> >
> > The moment you overrun the history array its gone. So yes, only the
> 
> It would be gone _only_ at the time the history overrun, and then it
> will be built again. So, you are wrong.
> 
> Let me show you an example: (I hope you also show examples.)
> 
> context X                  context Y
> ---------                  ---------
>                            wait_for_completion(D)
> while (true)
>    acquire(A)
>    release(A)
>    process_one_work()
>       acquire(B)
>       release(B)
>       work->fn()
>          complete(C)
>    acquire(D)
>    release(D)
> 
> When happening an overrun in a 'work', 'A' and 'B' will be gone _only_
> at the time, and then 'D', 'A' and 'B' will be queued into the xhlock
> *again* from the next loop on, and they can be used to generate useful
> dependencies again.
> 
> You are being confused now. Acquisitions we are focusing now are not
> _stacked_ like hlocks, but _accumulated_ continuously onto the ring
> buffer e.i. xhlock array.

Agree?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-01 10:16                                   ` Byungchul Park
  2017-09-01 12:09                                     ` 박병철/선임연구원/SW Platform(연)AOT팀(byungchul.park@lge.com)
@ 2017-09-01 12:38                                     ` Peter Zijlstra
  2017-09-01 13:51                                       ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-01 12:38 UTC (permalink / raw)
  To: Byungchul Park
  Cc: mingo, tj, boqun.feng, david, johannes, oleg, linux-kernel, kernel-team

On Fri, Sep 01, 2017 at 07:16:29PM +0900, Byungchul Park wrote:

> It would be gone _only_ at the time the history overrun, and then it
> will be built again. So, you are wrong.

How will it ever be built again? You only ever spawn the worker thread
_ONCE_, then it runs lots and lots of works.

We _could_ go fix it, but I really don't see it being worth the time and
effort; it's a few isolated locks inside the kthread/workqueue code.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-01 12:38                                     ` Peter Zijlstra
@ 2017-09-01 13:51                                       ` Byungchul Park
  2017-09-01 16:38                                         ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-09-01 13:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Fri, Sep 1, 2017 at 9:38 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, Sep 01, 2017 at 07:16:29PM +0900, Byungchul Park wrote:
>
>> It would be gone _only_ at the time the history overrun, and then it
>> will be built again. So, you are wrong.

s/it will be built again/the acquisition will be added into the xhlock
array again/

Is that easier to understand now?

> How will it ever be build again? You only ever spawn the worker thread
> _ONCE_, then it runs lots and lots of works.
>
> We _could_ go fix it, but I really don't see it being worth the time and

We don't need to spend time and effort fixing it. Just *revert* all your
wrong patches.

> effort, its a few isolated locks inside the kthread/workqueue code.

Please point out where I am wrong and what you want to say, *with* my
latest example. Doing it with my example would be very helpful for
understanding you.

-- 
Thanks,
Byungchul

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-01 13:51                                       ` Byungchul Park
@ 2017-09-01 16:38                                         ` Peter Zijlstra
  2017-09-04  1:30                                           ` Byungchul Park
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-01 16:38 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Fri, Sep 01, 2017 at 10:51:48PM +0900, Byungchul Park wrote:
> On Fri, Sep 1, 2017 at 9:38 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Fri, Sep 01, 2017 at 07:16:29PM +0900, Byungchul Park wrote:
> >
> >> It would be gone _only_ at the time the history overrun, and then it
> >> will be built again. So, you are wrong.
> 
> s/it will be built again/the acquisition will be added into the xhlock
> array again/
> 
> Now, better to understand?

No, I still don't get it. How are we ever going to get the workqueue
thread setup code back after it's spooled out?

> > How will it ever be build again? You only ever spawn the worker thread
> > _ONCE_, then it runs lots and lots of works.
> >
> > We _could_ go fix it, but I really don't see it being worth the time and
> 
> We don't need to fix it spending time and effort. Just *revert* all your
> wrong patches.

And get tangled up with the workqueue annotation again, no thanks.
Having the first few works see the thread setup isn't worth it.

And your work_id annotation had the same problem.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-01 16:38                                         ` Peter Zijlstra
@ 2017-09-04  1:30                                           ` Byungchul Park
  2017-09-04  2:08                                             ` Byungchul Park
  2017-09-04 11:42                                             ` Peter Zijlstra
  0 siblings, 2 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-04  1:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Fri, Sep 01, 2017 at 06:38:52PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 01, 2017 at 10:51:48PM +0900, Byungchul Park wrote:
> > On Fri, Sep 1, 2017 at 9:38 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > > On Fri, Sep 01, 2017 at 07:16:29PM +0900, Byungchul Park wrote:
> > >
> > >> It would be gone _only_ at the time the history overrun, and then it
> > >> will be built again. So, you are wrong.
> > 
> > s/it will be built again/the acquisition will be added into the xhlock
> > array again/
> > 
> > Now, better to understand?
> 
> No, I still don't get it. How are we ever going to get the workqueue
> thread setup code back after its spooled out?
> 
> > > How will it ever be build again? You only ever spawn the worker thread
> > > _ONCE_, then it runs lots and lots of works.
> > >
> > > We _could_ go fix it, but I really don't see it being worth the time and
> > 
> > We don't need to fix it spending time and effort. Just *revert* all your
> > wrong patches.
> 
> And get tangled up with the workqueue annotation again, no thanks.
> Having the first few works see the thread setup isn't worth it.
> 
> And your work_id annotation had the same problem.

I keep asking you for an example because I really understand you.

   Fix my problematic example with your patches,

   or,

   Show me a problematic scenario with my original code, you expect.

Whatever, it would be helpful to understand you.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-04  1:30                                           ` Byungchul Park
@ 2017-09-04  2:08                                             ` Byungchul Park
  2017-09-04 11:42                                             ` Peter Zijlstra
  1 sibling, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-04  2:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Mon, Sep 04, 2017 at 10:30:32AM +0900, Byungchul Park wrote:
> On Fri, Sep 01, 2017 at 06:38:52PM +0200, Peter Zijlstra wrote:
> > On Fri, Sep 01, 2017 at 10:51:48PM +0900, Byungchul Park wrote:
> > > On Fri, Sep 1, 2017 at 9:38 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > > > On Fri, Sep 01, 2017 at 07:16:29PM +0900, Byungchul Park wrote:
> > > >
> > > >> It would be gone _only_ at the time the history overrun, and then it
> > > >> will be built again. So, you are wrong.
> > > 
> > > s/it will be built again/the acquisition will be added into the xhlock
> > > array again/
> > > 
> > > Now, better to understand?
> > 
> > No, I still don't get it. How are we ever going to get the workqueue
> > thread setup code back after its spooled out?
> > 
> > > > How will it ever be build again? You only ever spawn the worker thread
> > > > _ONCE_, then it runs lots and lots of works.
> > > >
> > > > We _could_ go fix it, but I really don't see it being worth the time and
> > > 
> > > We don't need to fix it spending time and effort. Just *revert* all your
> > > wrong patches.
> > 
> > And get tangled up with the workqueue annotation again, no thanks.
> > Having the first few works see the thread setup isn't worth it.
> > 
> > And your work_id annotation had the same problem.
> 
> I keep asking you for an example because I really understand you.

I keep asking you for an example because I really want to understand you.

> 
>    Fix my problematic example with your patches,
> 
>    or,
> 
>    Show me a problematic scenario with my original code, you expect.
> 
> Whatever, it would be helpful to understand you.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-04  1:30                                           ` Byungchul Park
  2017-09-04  2:08                                             ` Byungchul Park
@ 2017-09-04 11:42                                             ` Peter Zijlstra
  2017-09-05  0:38                                               ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-04 11:42 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Mon, Sep 04, 2017 at 10:30:32AM +0900, Byungchul Park wrote:
> On Fri, Sep 01, 2017 at 06:38:52PM +0200, Peter Zijlstra wrote:
> > And get tangled up with the workqueue annotation again, no thanks.
> > Having the first few works see the thread setup isn't worth it.
> > 
> > And your work_id annotation had the same problem.
> 
> I keep asking you for an example because I really understand you.
> 
>    Fix my problematic example with your patches,
> 
>    or,
> 
>    Show me a problematic scenario with my original code, you expect.
> 
> Whatever, it would be helpful to understand you.

I _really_ don't understand what you're worried about. Is it the kthread
create and workqueue init or the pool->lock that is released/acquired in
process_one_work()?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-04 11:42                                             ` Peter Zijlstra
@ 2017-09-05  0:38                                               ` Byungchul Park
  2017-09-05  7:08                                                 ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-09-05  0:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Mon, Sep 04, 2017 at 01:42:48PM +0200, Peter Zijlstra wrote:
> On Mon, Sep 04, 2017 at 10:30:32AM +0900, Byungchul Park wrote:
> > On Fri, Sep 01, 2017 at 06:38:52PM +0200, Peter Zijlstra wrote:
> > > And get tangled up with the workqueue annotation again, no thanks.
> > > Having the first few works see the thread setup isn't worth it.
> > > 
> > > And your work_id annotation had the same problem.
> > 
> > I keep asking you for an example because I really understand you.
> > 
> >    Fix my problematic example with your patches,
> > 
> >    or,
> > 
> >    Show me a problematic scenario with my original code, you expect.
> > 
> > Whatever, it would be helpful to understand you.
> 
> I _really_ don't understand what you're worried about. Is it the kthread
> create and workqueue init or the pool->lock that is released/acquired in
> process_one_work()?

s/in process_one_work()/in all worker code including setup code/

The original code was already designed to handle real dependencies well. But
you invalidated it _w/o_ any reason; that's why I don't agree with your
patches. Your patches only avoid the wq issue we are focusing on now.

Look at:

 worker thread                         another context
 -------------                         ---------------
                                       wait_for_completion()
       |
       |      (1)
       v
  +---------+
  | Work  A | (2)
  +---------+
       |
       |      (3)
       v
  +---------+
  | Work  B | (4)
  +---------+
       |
       |      (5)
       v
  +---------+
  | Work  C | (6)
  +---------+
       |
       v

We have to consider the whole context of the worker to build dependencies
with a crosslock, e.g. wait_for_completion().

The only thing we have to care about here is to make all works, e.g. (2), (4)
and (6), independent, because the workqueue does _concurrency control_. As I
said last year at the very beginning, for works the control is not applied to,
e.g. max_active == 1, we don't need that isolation. I said it's future work.

It would have been much easier to communicate with each other if you had
*tried* to understand my examples, as now, or had *tried* to give me at least
one example. You didn't even *try*. The only thing I want to ask of you is to
*try* to understand my opinions on the points of conflict.

Now, do you understand what I intended? Still insufficient?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05  0:38                                               ` Byungchul Park
@ 2017-09-05  7:08                                                 ` Peter Zijlstra
  2017-09-05  7:19                                                   ` Peter Zijlstra
  2017-09-05  8:30                                                   ` Byungchul Park
  0 siblings, 2 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-05  7:08 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 09:38:45AM +0900, Byungchul Park wrote:
> On Mon, Sep 04, 2017 at 01:42:48PM +0200, Peter Zijlstra wrote:
> > On Mon, Sep 04, 2017 at 10:30:32AM +0900, Byungchul Park wrote:
> > > On Fri, Sep 01, 2017 at 06:38:52PM +0200, Peter Zijlstra wrote:
> > > > And get tangled up with the workqueue annotation again, no thanks.
> > > > Having the first few works see the thread setup isn't worth it.
> > > > 
> > > > And your work_id annotation had the same problem.
> > > 
> > > I keep asking you for an example because I really understand you.
> > > 
> > >    Fix my problematic example with your patches,
> > > 
> > >    or,
> > > 
> > >    Show me a problematic scenario with my original code, you expect.
> > > 
> > > Whatever, it would be helpful to understand you.
> > 
> > I _really_ don't understand what you're worried about. Is it the kthread
> > create and workqueue init or the pool->lock that is released/acquired in
> > process_one_work()?
> 
> s/in process_one_work()/in all worker code including setup code/
> 
> Original code was already designed to handle real dependencies well. But
> you invalidated it _w/o_ any reason, that's why I don't agree with your
> patches.

The reasons:

 - it avoids the interaction with the workqueue annotation
 - it makes each work consistent
 - it's not different from what you did with work_id:

    https://lkml.kernel.org/r/1489479542-27030-6-git-send-email-byungchul.park@lge.com

    crossrelease_work_start() vs same_context_xhlock() { if
    (xhlock->work_id == curr->workid) ... }

> Your patches only do avoiding the wq issue now we focus on.
> 
> Look at:
> 
>  worker thread                         another context
>  -------------                         ---------------
>                                        wait_for_completion()
>        |
>        |      (1)
>        v
>   +---------+
>   | Work  A | (2)
>   +---------+
>        |
>        |      (3)
>        v
>   +---------+
>   | Work  B | (4)
>   +---------+
>        |
>        |      (5)
>        v
>   +---------+
>   | Work  C | (6)
>   +---------+
>        |
>        v
> 
> We have to consider whole context of the worker to build dependencies
> with a crosslock e.g. wait_for_commplete().
> 
> Only thing we have to care here is to make all works e.g. (2), (4) and
> (6) independent, because workqueue does _concurrency control_. As I said
> last year at the very beginning, for works not applied the control e.g.
> max_active == 1, we don't need that isolation. I said, it's a future work.
> 
> It would have been much easier to communicate with each other if you
> *tried* to understand my examples like now or you *tried* to give me one
> example at least. You didn't even *try*. Only thing I want to ask you
> for is to *try* to understand my opinions on conflicts.
> 
> Now, understand what I intended? Still unsufficient?

So you worry about max_active==1 ? Or you worry about pool->lock or
about the thread setup? I'm still not sure.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05  7:08                                                 ` Peter Zijlstra
@ 2017-09-05  7:19                                                   ` Peter Zijlstra
  2017-09-05  8:57                                                     ` Byungchul Park
  2017-09-05  8:30                                                   ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-05  7:19 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 09:08:25AM +0200, Peter Zijlstra wrote:
> So you worry about max_active==1 ? Or you worry about pool->lock or
> about the thread setup? I'm still not sure.

So the thing about pool->lock is that it's a leaf lock; we take nothing
inside it. Furthermore, it's a spinlock, and therefore blocking things like
completions or the page lock cannot form a deadlock with it.

It is also fully isolated inside workqueue.c and easy to audit.

This is why I really can't be arsed about it.


And the whole setup stuff isn't properly preserved between works in any
case, only the first few works would ever see that history, so why
bother.


We _could_ save/restore the setup history, by doing a complete copy of
it and restoring that, but that's not what crossrelease did, and I
really don't see the point.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05  7:08                                                 ` Peter Zijlstra
  2017-09-05  7:19                                                   ` Peter Zijlstra
@ 2017-09-05  8:30                                                   ` Byungchul Park
  1 sibling, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-05  8:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 09:08:25AM +0200, Peter Zijlstra wrote:
> > Your patches only do avoiding the wq issue now we focus on.
> > 
> > Look at:
> > 
> >  worker thread                         another context
> >  -------------                         ---------------
> >                                        wait_for_completion()
> >        |
> >        |      (1)
> >        v
> >   +---------+
> >   | Work  A | (2)
> >   +---------+
> >        |
> >        |      (3)
> >        v
> >   +---------+
> >   | Work  B | (4)
> >   +---------+
> >        |
> >        |      (5)
> >        v
> >   +---------+
> >   | Work  C | (6)
> >   +---------+
> >        |
> >        v
> > 
> > We have to consider whole context of the worker to build dependencies
> > with a crosslock e.g. wait_for_commplete().
> > 
> > Only thing we have to care here is to make all works e.g. (2), (4) and
> > (6) independent, because workqueue does _concurrency control_. As I said
> > last year at the very beginning, for works not applied the control e.g.
> > max_active == 1, we don't need that isolation. I said, it's a future work.
> > 
> > It would have been much easier to communicate with each other if you
> > *tried* to understand my examples like now or you *tried* to give me one
> > example at least. You didn't even *try*. Only thing I want to ask you
> > for is to *try* to understand my opinions on conflicts.
> > 
> > Now, understand what I intended? Still unsufficient?
> 
> So you worry about max_active==1 ? Or you worry about pool->lock or
> about the thread setup? I'm still not sure.

It's closest to the latter. Precisely, I worry about (1), (3), (5) and so
on, since they certainly create dependencies with crosslocks, e.g. the
completion in my example.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05  7:19                                                   ` Peter Zijlstra
@ 2017-09-05  8:57                                                     ` Byungchul Park
  2017-09-05  9:36                                                       ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-09-05  8:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 09:19:30AM +0200, Peter Zijlstra wrote:
> On Tue, Sep 05, 2017 at 09:08:25AM +0200, Peter Zijlstra wrote:
> > So you worry about max_active==1 ? Or you worry about pool->lock or
> > about the thread setup? I'm still not sure.
> 
> So the thing about pool->lock is that its a leaf lock, we take nothing

I think the following sentence is the key, I hope...

Leaf locks can also create dependencies with *crosslocks*. These
dependencies are not built between held locks the way typical lock
dependencies are.

> inside it. Futhermore its a spinlock and therefore blocking things like
> completions or page-lock cannot form a deadlock with it.

I agree. Now we should be interested only in blocking things.

> It is also fully isolated inside workqueue.c and easy to audit.
> 
> This is why I really can't be arsed about it.
> 
> And the whole setup stuff isn't properly preserved between works in any
> case, only the first few works would ever see that history, so why
> bother.

As I said in another reply, what about (1), (3) and (5) in my example?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05  8:57                                                     ` Byungchul Park
@ 2017-09-05  9:36                                                       ` Peter Zijlstra
  2017-09-05 10:31                                                         ` Byungchul Park
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-05  9:36 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 05:57:27PM +0900, Byungchul Park wrote:
> On Tue, Sep 05, 2017 at 09:19:30AM +0200, Peter Zijlstra wrote:
> > On Tue, Sep 05, 2017 at 09:08:25AM +0200, Peter Zijlstra wrote:
> > > So you worry about max_active==1 ? Or you worry about pool->lock or
> > > about the thread setup? I'm still not sure.
> > 
> > So the thing about pool->lock is that its a leaf lock, we take nothing
> 
> I think the following sentence is a key, I hope...
> 
> Leaf locks can also create dependecies with *crosslocks*. These
> dependencies are not built between holding locks like typical locks.

They can create dependencies, but they _cannot_ create deadlocks. So
there's no value in those dependencies.

> > And the whole setup stuff isn't properly preserved between works in any
> > case, only the first few works would ever see that history, so why
> > bother.
> 
> As I said in another reply, what about (1), (3) and (5) in my example?

So for single-threaded workqueues, I'd like to get recursive-read sorted
and then we can make the lockdep_invariant_state() conditional.

Using a recursive-read lock for the wq lockdep_maps has the same effect
as your 'might' thing, without having to introduce new magic.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05  9:36                                                       ` Peter Zijlstra
@ 2017-09-05 10:31                                                         ` Byungchul Park
  2017-09-05 10:52                                                           ` Peter Zijlstra
  2017-09-05 10:58                                                           ` Byungchul Park
  0 siblings, 2 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-05 10:31 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 11:36:24AM +0200, Peter Zijlstra wrote:
> On Tue, Sep 05, 2017 at 05:57:27PM +0900, Byungchul Park wrote:
> > On Tue, Sep 05, 2017 at 09:19:30AM +0200, Peter Zijlstra wrote:
> > > On Tue, Sep 05, 2017 at 09:08:25AM +0200, Peter Zijlstra wrote:
> > > > So you worry about max_active==1 ? Or you worry about pool->lock or
> > > > about the thread setup? I'm still not sure.
> > > 
> > > So the thing about pool->lock is that its a leaf lock, we take nothing
> > 
> > I think the following sentence is key, I hope...
> > 
> > Leaf locks can also create dependencies with *crosslocks*. These
> > dependencies are not built between held locks as typical ones are.
> 
> They can create dependencies, but they _cannot_ create deadlocks. So
> there's no value in those dependencies.

Let me show you a possible scenario with a leaf lock:

                        lock(A)
   lock(A)              wait_for_completion(B)
   unlock(A)            ...
   ...                  unlock(A)
   process_one_work()
      work->func()
         complete(B)

It's a deadlock between leaf lock A and completion B.

> > > And the whole setup stuff isn't properly preserved between works in any
> > > case, only the first few works would ever see that history, so why
> > > bother.
> > 
> > As I said in another reply, what about (1), (3) and (5) in my example?
> 
> So for single-threaded workqueues, I'd like to get recursive-read sorted
> and then we can make the lockdep_invariant_state() conditional.
> 
> Using a recursive-read lock for the wq lockdep_maps has the same effect
> as your 'might' thing, without having to introduce new magic.

Recursive-read and the hint I proposed (a.k.a. 'might') should be used
for their own specific applications. Their meanings and constraints
are totally different.

Using the semantically right function is more important than just
making it work, as you know. Am I wrong?


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05 10:31                                                         ` Byungchul Park
@ 2017-09-05 10:52                                                           ` Peter Zijlstra
  2017-09-05 11:24                                                             ` Byungchul Park
  2017-09-05 10:58                                                           ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-05 10:52 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> Let me show you a possible scenario with a leaf lock:
> 
>                         lock(A)
>    lock(A)              wait_for_completion(B)
>    unlock(A)            ...
>    ...                  unlock(A)
>    process_one_work()
>       work->func()
>          complete(B)
> 
> It's a deadlock between leaf lock A and completion B.

By having wait_for_completion() in it, A is not a leaf lock.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05 10:31                                                         ` Byungchul Park
  2017-09-05 10:52                                                           ` Peter Zijlstra
@ 2017-09-05 10:58                                                           ` Byungchul Park
  2017-09-05 13:46                                                             ` Peter Zijlstra
  1 sibling, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-09-05 10:58 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> Recursive-read and the hint I proposed (a.k.a. 'might') should be used
> for their own specific applications. Their meanings and constraints
> are totally different.
> 
> Using the semantically right function is more important than just
> making it work, as you know. Am I wrong?

For example, _semantically_:

   lock(A) -> recursive-read(A), end in a deadlock, while
   lock(A) -> might(A)         , is like nothing.

   recursive-read(A) -> might(A), is like nothing, while
   might(A) -> recursive-read(A), end in a deadlock.

   And so on...

Of course, in the following cases, the results are same:

   recursive-read(A) -> recursive-read(A), is like nothing, and also
   might(A)          -> might(A)         , is like nothing.

   recursive-read(A) -> lock(A), end in a deadlock, and also
   might(A)          -> lock(A), end in a deadlock.

Furthermore, recursive-read-might() can be used if needed, since their
semantics are orthogonal and can therefore be combined.

I really hope you accept the new semantics... I think current workqueue
code exactly needs the semantics.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05 10:52                                                           ` Peter Zijlstra
@ 2017-09-05 11:24                                                             ` Byungchul Park
  0 siblings, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-05 11:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 12:52:36PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> > Let me show you a possible scenario with a leaf lock:
> > 
> >                         lock(A)
> >    lock(A)              wait_for_completion(B)
> >    unlock(A)            ...
> >    ...                  unlock(A)
> >    process_one_work()
> >       work->func()
> >          complete(B)
> > 
> > It's a deadlock between leaf lock A and completion B.
> 
> By having wait_for_completion() in it, A is not a leaf lock.

I see. After all, you want to force only leaf locks to be used in (1),
(3) and (5) from now on. I really don't understand why you want to
enforce that, and have to keep it carefully in mind *only* for those
locks, when the original code even made it unnecessary.

But ok...

Let's discuss the issue later if necessary.

Again, I think your patches do nothing but avoid the workqueue issue.
I really hope you think about it more, but it's ok if you don't want
to...


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05 10:58                                                           ` Byungchul Park
@ 2017-09-05 13:46                                                             ` Peter Zijlstra
  2017-09-05 23:52                                                               ` Byungchul Park
  2017-09-06  0:48                                                               ` Byungchul Park
  0 siblings, 2 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-09-05 13:46 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 07:58:38PM +0900, Byungchul Park wrote:
> On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> > Recursive-read and the hint I proposed (a.k.a. 'might') should be used
> > for their own specific applications. Their meanings and constraints
> > are totally different.
> > 
> > Using the semantically right function is more important than just
> > making it work, as you know. Am I wrong?

> Of course, in the following cases, the results are same:
> 
>    recursive-read(A) -> recursive-read(A), is like nothing, and also
>    might(A)          -> might(A)         , is like nothing.
> 
>    recursive-read(A) -> lock(A), end in a deadlock, and also
>    might(A)          -> lock(A), end in a deadlock.

And these are exactly the cases we need.

> Furthermore, recursive-read-might() can be used if needed, since their
> semantics are orthogonal and can therefore be combined.
> 
> I really hope you accept the new semantics... I think current workqueue
> code exactly needs the semantics.

I really don't want to introduce this extra state if we don't have to.
And as you already noted, this 'might' thing of yours doesn't belong in
the .read argument, since as you say it's orthogonal.

recursive-read
wait_for_completion()
			recursive-read
			complete()

is fundamentally not a deadlock, we don't need anything extra.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05 13:46                                                             ` Peter Zijlstra
@ 2017-09-05 23:52                                                               ` Byungchul Park
  2017-09-06  0:42                                                                 ` Boqun Feng
  2017-09-06  0:48                                                               ` Byungchul Park
  1 sibling, 1 reply; 61+ messages in thread
From: Byungchul Park @ 2017-09-05 23:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 03:46:43PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 05, 2017 at 07:58:38PM +0900, Byungchul Park wrote:
> > On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> > > Recursive-read and the hint I proposed (a.k.a. 'might') should be used
> > > for their own specific applications. Their meanings and constraints
> > > are totally different.
> > > 
> > > Using the semantically right function is more important than just
> > > making it work, as you know. Am I wrong?
> 
> > Of course, in the following cases, the results are same:
> > 
> >    recursive-read(A) -> recursive-read(A), is like nothing, and also
> >    might(A)          -> might(A)         , is like nothing.
> > 
> >    recursive-read(A) -> lock(A), end in a deadlock, and also
> >    might(A)          -> lock(A), end in a deadlock.
> 
> And these are exactly the cases we need.
> 
> > Furthermore, recursive-read-might() can be used if needed, since their
> > semantics are orthogonal and can therefore be combined.
> > 
> > I really hope you accept the new semantics... I think current workqueue
> > code exactly needs the semantics.
> 
> I really don't want to introduce this extra state if we don't have to.

OK. If the workqueue is the only user of these weird lockdep
annotations, then it might be better to defer introducing the extra
state until it's needed.

But the 'might' thing I introduced would become necessary if more users
want deadlocks reported at commit time for crosslocks with speculative
acquisitions, as the workqueue does, since the recursive-read thing
would generate many more false dependencies than we want, while the
'might' thing generates only as many as we want.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05 23:52                                                               ` Byungchul Park
@ 2017-09-06  0:42                                                                 ` Boqun Feng
  2017-09-06  1:32                                                                   ` Byungchul Park
  0 siblings, 1 reply; 61+ messages in thread
From: Boqun Feng @ 2017-09-06  0:42 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Peter Zijlstra, Byungchul Park, Ingo Molnar, Tejun Heo, david,
	Johannes Berg, oleg, linux-kernel, kernel-team


On Wed, Sep 06, 2017 at 08:52:35AM +0900, Byungchul Park wrote:
> On Tue, Sep 05, 2017 at 03:46:43PM +0200, Peter Zijlstra wrote:
> > On Tue, Sep 05, 2017 at 07:58:38PM +0900, Byungchul Park wrote:
> > > On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> > > > Recursive-read and the hint I proposed (a.k.a. 'might') should be used
> > > > for their own specific applications. Their meanings and constraints
> > > > are totally different.
> > > > 
> > > > Using the semantically right function is more important than just
> > > > making it work, as you know. Am I wrong?
> > 
> > > Of course, in the following cases, the results are same:
> > > 
> > >    recursive-read(A) -> recursive-read(A), is like nothing, and also
> > >    might(A)          -> might(A)         , is like nothing.
> > > 
> > >    recursive-read(A) -> lock(A), end in a deadlock, and also
> > >    might(A)          -> lock(A), end in a deadlock.
> > 
> > And these are exactly the cases we need.
> > 
> > > Furthermore, recursive-read-might() can be used if needed, since their
> > > semantics are orthogonal and can therefore be combined.
> > > 
> > > I really hope you accept the new semantics... I think current workqueue
> > > code exactly needs the semantics.
> > 
> > I really don't want to introduce this extra state if we don't have to.
> 
> OK. If the workqueue is the only user of these weird lockdep
> annotations, then it might be better to defer introducing the extra
> state until it's needed.
> 
> But the 'might' thing I introduced would become necessary if more users
> want deadlocks reported at commit time for crosslocks with speculative
> acquisitions, as the workqueue does, since the recursive-read thing
> would generate many more false dependencies than we want, while the

What do you mean by "false dependencies"? AFAICT, recursive-read could
have dependencies to the following cross commit, for example:

	A(a)
			ARR(a)
			RRR(a)
	WFC(X)
			C(X)

This is a deadlock, no?

In my upcoming v2 for recursive-read support, I'm going to make this
detectable. But please note that, as crossrelease doesn't have selftests
the way normal lockdep stuff does, I may miss something subtle.

Regards,
Boqun

> 'might' thing generates only as many as we want.
> 



* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-05 13:46                                                             ` Peter Zijlstra
  2017-09-05 23:52                                                               ` Byungchul Park
@ 2017-09-06  0:48                                                               ` Byungchul Park
  1 sibling, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-06  0:48 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 03:46:43PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 05, 2017 at 07:58:38PM +0900, Byungchul Park wrote:
> > On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> > > Recursive-read and the hint I proposed (a.k.a. 'might') should be used
> > > for their own specific applications. Their meanings and constraints
> > > are totally different.
> > > 
> > > Using the semantically right function is more important than just
> > > making it work, as you know. Am I wrong?
> 
> > Of course, in the following cases, the results are same:
> > 
> >    recursive-read(A) -> recursive-read(A), is like nothing, and also
> >    might(A)          -> might(A)         , is like nothing.
> > 
> >    recursive-read(A) -> lock(A), end in a deadlock, and also
> >    might(A)          -> lock(A), end in a deadlock.
> 
> And these are exactly the cases we need.
> 
> > Furthermore, recursive-read-might() can be used if needed, since their
> > semantics are orthogonal and can therefore be combined.
> > 
> > I really hope you accept the new semantics... I think current workqueue
> > code exactly needs the semantics.
> 
> I really don't want to introduce this extra state if we don't have to.
> And as you already noted, this 'might' thing of yours doesn't belong in
> the .read argument, since as you say it's orthogonal.

Right. Of course, it can be changed into a proper form if allowed. I
was hesitant to introduce another new function instead of using an arg.

> recursive-read
> wait_for_completion()
> 			recursive-read
> 			complete()
> 
> is fundamentally not a deadlock, we don't need anything extra.

It might be ok wrt the workqueue. But I think, in general,
recursive-read is not a good option for that purpose.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-06  0:42                                                                 ` Boqun Feng
@ 2017-09-06  1:32                                                                   ` Byungchul Park
  2017-09-06 23:59                                                                     ` Byungchul Park
  2017-09-07  0:11                                                                     ` Byungchul Park
  0 siblings, 2 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-06  1:32 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, Byungchul Park, Ingo Molnar, Tejun Heo, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Wed, Sep 06, 2017 at 08:42:11AM +0800, Boqun Feng wrote:
> > OK. If the workqueue is the only user of these weird lockdep
> > annotations, then it might be better to defer introducing the extra
> > state until it's needed.
> > 
> > But the 'might' thing I introduced would become necessary if more users
> > want deadlocks reported at commit time for crosslocks with speculative
> > acquisitions, as the workqueue does, since the recursive-read thing
> > would generate many more false dependencies than we want, while the
> 
> What do you mean by "false dependencies"? AFAICT, recursive-read could

All locks used in every work->func() generate false dependencies with
'work' and 'wq', even when no flush work is involved. It's inevitable.

Moreover, it's also possible to generate more false ones between the
pseudo acquisitions themselves, if real acquisitions, i.e.
recursive-read here, are used for that speculative purpose, since they
are real acquisitions after all.

Moreover, it's also possible to generate more false ones between held
locks and the pseudo ones; of course, the workqueue code is not that
case for now.

Moreover, it's also possible to generate more false ones between the
pseudo ones and crosslocks on commit, once crossrelease is made to work
even for recursive-read things.

Whatever...

The only thing I want to say is: don't just make the code work, but
use the semantically right annotation for each specific application.
Otherwise, we cannot avoid side effects we don't expect. Of course,
these side effects might not be visible at the moment, IOW, generating
false dependencies might not be a problem at the moment, but I just
want to avoid doing things the wrong way, if possible. That's all.

> have dependencies to the following cross commit, for example:
> 
> 	A(a)
> 			ARR(a)
> 			RRR(a)
> 	WFC(X)
> 			C(X)
> 
> This is a deadlock, no?

This is obviously a deadlock.

> In my upcoming v2 for recursive-read support, I'm going to make this
> detectable. But please note that, as crossrelease doesn't have selftests

I hope you make the recursive-read things work successfully.

> the way normal lockdep stuff does, I may miss something subtle.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-06  1:32                                                                   ` Byungchul Park
@ 2017-09-06 23:59                                                                     ` Byungchul Park
  2017-09-07  0:11                                                                     ` Byungchul Park
  1 sibling, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-06 23:59 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, Byungchul Park, Ingo Molnar, Tejun Heo, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Wed, Sep 06, 2017 at 10:32:54AM +0900, Byungchul Park wrote:
> Moreover, it's also possible to generate more false ones between the
> pseudo acquisitions themselves, if real acquisitions, i.e.
> recursive-read here, are used for that speculative purpose, since they
> are real acquisitions after all.

Of course, this problem can be ignored if we *only* use recursive-read
acquisitions for the speculative purpose, though the current workqueue
code uses both recursive-read and normal (write) acquisitions for that.

IOW, as long as we keep the write acquisitions for that purpose, this
would still be a problem.


* Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
  2017-09-06  1:32                                                                   ` Byungchul Park
  2017-09-06 23:59                                                                     ` Byungchul Park
@ 2017-09-07  0:11                                                                     ` Byungchul Park
  1 sibling, 0 replies; 61+ messages in thread
From: Byungchul Park @ 2017-09-07  0:11 UTC (permalink / raw)
  To: Boqun Feng, peterz
  Cc: Peter Zijlstra, Byungchul Park, Ingo Molnar, Tejun Heo, david,
	Johannes Berg, oleg, linux-kernel, kernel-team

On Wed, Sep 06, 2017 at 10:32:54AM +0900, Byungchul Park wrote:
> > What do you mean by "false dependencies"? AFAICT, recursive-read could
> 
> All locks used in every work->func() generate false dependencies with
> 'work' and 'wq', even when no flush work is involved. It's inevitable.
> 
> Moreover, it's also possible to generate more false ones between the
> pseudo acquisitions themselves, if real acquisitions, i.e.
> recursive-read here, are used for that speculative purpose, since they
> are real acquisitions after all.
> 
> Moreover, it's also possible to generate more false ones between held
> locks and the pseudo ones; of course, the workqueue code is not that
> case for now.
> 
> Moreover, it's also possible to generate more false ones between the
> pseudo ones and crosslocks on commit, once crossrelease is made to work
> even for recursive-read things.

Hi Peter,

What do you think about the above? Please just let me know. It's ok if
you think it's not needed yet.

Or, if you think it's necessary from now on, I'll go ahead with that
work, starting by renaming the 'might' thing.


end of thread, other threads:[~2017-09-07  0:12 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-23 11:58 [PATCH 0/4] workqueue and lockdep stuffs Peter Zijlstra
2017-08-23 11:58 ` [PATCH 1/4] workqueue: Use TASK_IDLE Peter Zijlstra
2017-08-23 13:31   ` Tejun Heo
2017-08-23 11:58 ` [PATCH 2/4] lockdep/selftests: Add mixed read-write ABBA tests Peter Zijlstra
2017-08-23 11:58 ` [PATCH 3/4] workqueue/lockdep: Fix flush_work() annotation Peter Zijlstra
2017-08-23 11:58 ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Peter Zijlstra
2017-08-24  2:18   ` Byungchul Park
2017-08-24 14:02     ` Peter Zijlstra
2017-08-25  1:11       ` Byungchul Park
2017-08-29  8:59         ` Peter Zijlstra
2017-08-29 14:23           ` [tip:locking/core] locking/lockdep: Untangle xhlock history save/restore from task independence tip-bot for Peter Zijlstra
2017-08-29 16:02           ` [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation Byungchul Park
2017-08-29 18:47             ` Peter Zijlstra
2017-08-30  2:09           ` Byungchul Park
2017-08-30  7:41             ` Byungchul Park
2017-08-30  8:53               ` Peter Zijlstra
2017-08-30  9:01                 ` Byungchul Park
2017-08-30  9:12                   ` Peter Zijlstra
2017-08-30  9:14                     ` Peter Zijlstra
2017-08-30  9:35                       ` Byungchul Park
2017-08-30  9:24                     ` Byungchul Park
2017-08-30 11:25                       ` Byungchul Park
2017-08-30 12:49                         ` Byungchul Park
2017-08-31  7:26                         ` Byungchul Park
2017-08-31  8:04                         ` Peter Zijlstra
2017-08-31  8:15                           ` Byungchul Park
2017-08-31  8:34                             ` Peter Zijlstra
2017-09-01  2:05                               ` Byungchul Park
2017-09-01  9:47                                 ` Peter Zijlstra
2017-09-01 10:16                                   ` Byungchul Park
2017-09-01 12:09                                     ` 박병철/선임연구원/SW Platform(연)AOT팀(byungchul.park@lge.com)
2017-09-01 12:38                                     ` Peter Zijlstra
2017-09-01 13:51                                       ` Byungchul Park
2017-09-01 16:38                                         ` Peter Zijlstra
2017-09-04  1:30                                           ` Byungchul Park
2017-09-04  2:08                                             ` Byungchul Park
2017-09-04 11:42                                             ` Peter Zijlstra
2017-09-05  0:38                                               ` Byungchul Park
2017-09-05  7:08                                                 ` Peter Zijlstra
2017-09-05  7:19                                                   ` Peter Zijlstra
2017-09-05  8:57                                                     ` Byungchul Park
2017-09-05  9:36                                                       ` Peter Zijlstra
2017-09-05 10:31                                                         ` Byungchul Park
2017-09-05 10:52                                                           ` Peter Zijlstra
2017-09-05 11:24                                                             ` Byungchul Park
2017-09-05 10:58                                                           ` Byungchul Park
2017-09-05 13:46                                                             ` Peter Zijlstra
2017-09-05 23:52                                                               ` Byungchul Park
2017-09-06  0:42                                                                 ` Boqun Feng
2017-09-06  1:32                                                                   ` Byungchul Park
2017-09-06 23:59                                                                     ` Byungchul Park
2017-09-07  0:11                                                                     ` Byungchul Park
2017-09-06  0:48                                                               ` Byungchul Park
2017-09-05  8:30                                                   ` Byungchul Park
2017-08-31  8:07                       ` Peter Zijlstra
2017-08-25  4:39       ` Byungchul Park
2017-08-29  6:46   ` Byungchul Park
2017-08-29  9:01     ` Peter Zijlstra
2017-08-29 16:12       ` Byungchul Park
2017-08-23 13:32 ` [PATCH 0/4] workqueue and lockdep stuffs Tejun Heo
2017-08-23 13:45   ` Peter Zijlstra
