* [PATCH 0/3] Fix the workqueue and lockdep annotation issue
From: Byungchul Park @ 2017-09-05  2:29 UTC (permalink / raw)
  To: tj, johannes.berg, peterz, mingo
  Cc: tglx, oleg, david, linux-kernel, kernel-team

The following commit was introduced to detect deadlocks between
wait_for_completion() in flush_{workqueue,work}() and other locks. Now
that LOCKDEP_COMPLETIONS has been introduced, that detection is done
automatically by LOCKDEP_COMPLETIONS.

   commit 4e6045f134784f4b158b3c0f7a282b04bd816887
   workqueue: debug flushing deadlocks with lockdep
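
For reference, this is the kind of deadlock those annotations catch.
A minimal sketch, with hypothetical names 'm', 'work_fn' and 'buggy':

   static DEFINE_MUTEX(m);

   static void work_fn(struct work_struct *work)
   {
   	mutex_lock(&m);		/* the work needs m */
   	/* ... */
   	mutex_unlock(&m);
   }

   static void buggy(struct workqueue_struct *wq)
   {
   	mutex_lock(&m);		/* m is held across the flush */
   	flush_workqueue(wq);	/* waits for work_fn(), which waits for m */
   	mutex_unlock(&m);
   }

If a work_fn() work is pending on 'wq' when buggy() runs, neither side
can make progress. The annotations let lockdep see this dependency even
on runs where the flush happens to complete without deadlocking.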

However, LOCKDEP_COMPLETIONS has a limitation: it cannot report
deadlocks in real time. So we have to keep some annotations for now,
but the redundant acquisitions can be removed. The workqueue code also
needs to be made to cooperate with LOCKDEP_COMPLETIONS.

Byungchul Park (3):
  lockdep: Use enum type on hlock->read instead of magic number
  lockdep: Introduce lock_acquire_might()
  lockdep: Remove unnecessary acquisitions wrt workqueue flush

 include/linux/lockdep.h  |  2 ++
 kernel/locking/lockdep.c | 56 ++++++++++++++++++++++++++++++++++--------------
 kernel/workqueue.c       | 18 +++++++---------
 3 files changed, 50 insertions(+), 26 deletions(-)

-- 
1.9.1


* [PATCH 1/3] lockdep: Use enum type on hlock->read instead of magic number
From: Byungchul Park @ 2017-09-05  2:29 UTC (permalink / raw)
  To: tj, johannes.berg, peterz, mingo
  Cc: tglx, oleg, david, linux-kernel, kernel-team

Currently hlock->read can have the following values:

   0: write type
   1: read type
   2: read recursion type

It would be better to use human-readable names than magic numbers.
Furthermore, this makes it easier to extend hlock->read with
additional values.

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
 kernel/locking/lockdep.c | 40 ++++++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 44c8d0d..0ac2f70 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -76,6 +76,14 @@
 #define lock_stat 0
 #endif
 
+#define LT_WRITE  0 /* 0: write type */
+#define LT_READ_N 1 /* 1: read non-recursion type */
+#define LT_READ_R 2 /* 2: read recursion type */
+
+#define is_write(a)  ((a) == LT_WRITE)
+#define is_read(a)   ((a) == LT_READ_N || (a) == LT_READ_R)
+#define is_read_r(a) ((a) == LT_READ_R)
+
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
  *               class/list/hash allocators.
@@ -271,9 +279,9 @@ static void lock_release_holdtime(struct held_lock *hlock)
 	holdtime = lockstat_clock() - hlock->holdtime_stamp;
 
 	stats = get_lock_stats(hlock_class(hlock));
-	if (hlock->read)
+	if (is_read(hlock->read))
 		lock_time_inc(&stats->read_holdtime, holdtime);
-	else
+	else if (is_write(hlock->read))
 		lock_time_inc(&stats->write_holdtime, holdtime);
 	put_lock_stats(stats);
 }
@@ -1828,7 +1836,7 @@ static inline void inc_chains(void)
 		 * Allow read-after-read recursion of the same
 		 * lock class (i.e. read_lock(lock)+read_lock(lock)):
 		 */
-		if ((read == 2) && prev->read)
+		if (is_read_r(read) && is_read(prev->read))
 			return 2;
 
 		/*
@@ -1906,7 +1914,7 @@ static inline void inc_chains(void)
 	 * write-lock never takes any other locks, then the reads are
 	 * equivalent to a NOP.
 	 */
-	if (next->read == 2 || prev->read == 2)
+	if (is_read_r(next->read) || is_read_r(prev->read))
 		return 1;
 	/*
 	 * Is the <prev> -> <next> dependency already present?
@@ -2016,7 +2024,7 @@ static inline void inc_chains(void)
 			 * Only non-recursive-read entries get new dependencies
 			 * added:
 			 */
-			if (hlock->read != 2 && hlock->check) {
+			if (!is_read_r(hlock->read) && hlock->check) {
 				int ret = check_prev_add(curr, hlock, next,
 							 distance, &trace, save);
 				if (!ret)
@@ -2460,7 +2468,7 @@ static int validate_chain(struct task_struct *curr, struct lockdep_map *lock,
 		 * trylock entries):
 		 */
 		if (ret == 2)
-			hlock->read = 2;
+			hlock->read = LT_READ_R;
 		/*
 		 * Add dependency only if this lock is not the head
 		 * of the chain, and if it's not a secondary read-lock:
@@ -2848,7 +2856,7 @@ enum mark_type {
 		hlock = curr->held_locks + i;
 
 		usage_bit = 2 + (mark << 2); /* ENABLED */
-		if (hlock->read)
+		if (is_read(hlock->read))
 			usage_bit += 1; /* READ */
 
 		BUG_ON(usage_bit >= LOCK_USAGE_STATES);
@@ -3060,7 +3068,7 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
 	 * mark the lock as used in these contexts:
 	 */
 	if (!hlock->trylock) {
-		if (hlock->read) {
+		if (is_read(hlock->read)) {
 			if (curr->hardirq_context)
 				if (!mark_lock(curr, hlock,
 						LOCK_USED_IN_HARDIRQ_READ))
@@ -3079,7 +3087,7 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
 		}
 	}
 	if (!hlock->hardirqs_off) {
-		if (hlock->read) {
+		if (is_read(hlock->read)) {
 			if (!mark_lock(curr, hlock,
 					LOCK_ENABLED_HARDIRQ_READ))
 				return 0;
@@ -3715,7 +3723,7 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
 	curr->curr_chain_key = hlock->prev_chain_key;
 
 	WARN(hlock->read, "downgrading a read lock");
-	hlock->read = 1;
+	hlock->read = LT_READ_N;
 	hlock->acquire_ip = ip;
 
 	if (reacquire_held_locks(curr, depth, i))
@@ -4167,7 +4175,7 @@ void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie)
 	if (contending_point < LOCKSTAT_POINTS)
 		stats->contending_point[contending_point]++;
 	if (lock->cpu != smp_processor_id())
-		stats->bounces[bounce_contended + !!hlock->read]++;
+		stats->bounces[bounce_contended + is_read(hlock->read)]++;
 	put_lock_stats(stats);
 }
 
@@ -4209,13 +4217,13 @@ void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie)
 
 	stats = get_lock_stats(hlock_class(hlock));
 	if (waittime) {
-		if (hlock->read)
+		if (is_read(hlock->read))
 			lock_time_inc(&stats->read_waittime, waittime);
-		else
+		else if (is_write(hlock->read))
 			lock_time_inc(&stats->write_waittime, waittime);
 	}
 	if (lock->cpu != cpu)
-		stats->bounces[bounce_acquired + !!hlock->read]++;
+		stats->bounces[bounce_acquired + is_read(hlock->read)]++;
 	put_lock_stats(stats);
 
 	lock->cpu = cpu;
@@ -4819,7 +4827,7 @@ static inline struct lock_class *xlock_class(struct cross_lock *xlock)
  */
 static inline int depend_before(struct held_lock *hlock)
 {
-	return hlock->read != 2 && hlock->check && !hlock->trylock;
+	return !is_read_r(hlock->read) && hlock->check && !hlock->trylock;
 }
 
 /*
@@ -4827,7 +4835,7 @@ static inline int depend_before(struct held_lock *hlock)
  */
 static inline int depend_after(struct held_lock *hlock)
 {
-	return hlock->read != 2 && hlock->check;
+	return !is_read_r(hlock->read) && hlock->check;
 }
 
 /*
-- 
1.9.1


* [PATCH 2/3] lockdep: Introduce lock_acquire_might()
From: Byungchul Park @ 2017-09-05  2:29 UTC (permalink / raw)
  To: tj, johannes.berg, peterz, mingo
  Cc: tglx, oleg, david, linux-kernel, kernel-team

From the point of view of crossrelease, we can never be aware of the
release context in advance; we only learn it when we get to the
lock_release(). However, this means we cannot report deadlocks at the
time they occur.

Sometimes we want to report that kind of problem anyway, at the risk
of generating false dependencies, e.g. the lock_acquire()s in the
workqueue code, which inevitably generate false dependencies against
all acquisitions performed inside works.

It would be better to provide another primitive, lock_acquire_might(),
for that purpose, so that the lockdep internals know what users expect
and get a chance to be enhanced to avoid the false dependencies.

The primitive should:

   1. work as if it were a trylock, since only the links between a
      lock_acquire_might() and later acquisitions are meaningful.
      Remember, this is meant to do in advance what the crossrelease
      commit step does.

   2. have acquisitions made by lock_acquire_might() ignored on commit.

TODO: Although it's worth reporting potential problems, we have to do
our best to avoid adding false dependencies into the graph, while the
current lockdep code (not only the crossrelease code) does nothing to
prevent them.
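
As a usage sketch, the acquire side below is what patch 3 of this
series puts in process_one_work(); the release side is the existing
code there, unchanged by this series:

   lock_map_acquire_might(&pwq->wq->lockdep_map);
   lock_map_acquire_might(&lockdep_map);
   ...
   worker->current_func(work);
   ...
   lock_map_release(&lockdep_map);
   lock_map_release(&pwq->wq->lockdep_map);

The 'might' acquisitions still create links to the real acquisitions
performed inside the work, but behave like trylocks and are ignored
on commit, as described above.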

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
 include/linux/lockdep.h  |  2 ++
 kernel/locking/lockdep.c | 20 ++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index bfa8e0b..68c806a 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -679,6 +679,7 @@ static inline void print_irqtrace_events(struct task_struct *curr)
 #define lock_acquire_exclusive(l, s, t, n, i)		lock_acquire(l, s, t, 0, 1, n, i)
 #define lock_acquire_shared(l, s, t, n, i)		lock_acquire(l, s, t, 1, 1, n, i)
 #define lock_acquire_shared_recursive(l, s, t, n, i)	lock_acquire(l, s, t, 2, 1, n, i)
+#define lock_acquire_might(l, s, t, n, i)		lock_acquire(l, s, t, 3, 1, n, i)
 
 #define spin_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
 #define spin_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
@@ -704,6 +705,7 @@ static inline void print_irqtrace_events(struct task_struct *curr)
 #define lock_map_acquire(l)			lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_read(l)		lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_tryread(l)		lock_acquire_shared_recursive(l, 0, 1, NULL, _THIS_IP_)
+#define lock_map_acquire_might(l)		lock_acquire_might(l, 0, 1, NULL, _THIS_IP_)
 #define lock_map_release(l)			lock_release(l, 1, _THIS_IP_)
 
 #ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0ac2f70..963c176 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -80,9 +80,25 @@
 #define LT_READ_N 1 /* 1: read non-recursion type */
 #define LT_READ_R 2 /* 2: read recursion type */
 
+/*
+ * This is used to detect deadlocks in real time for
+ * crosslocks. For example, the workqueue code manually uses
+ * acquisitions for that purpose, at the risk of creating
+ * false dependencies. It would be better for lockdep to be
+ * aware of what users such as the workqueue code want in
+ * that case, and to get chances to enhance the lockdep
+ * internals to avoid false dependencies as far as possible.
+ *
+ * TODO: For now, this flag is only used to skip this kind
+ * of acquisition on commit. But adding these false ones
+ * into the graph itself should be avoided if possible.
+ */
+#define LT_MIGHT  3 /* 3: might acquire */
+
 #define is_write(a)  ((a) == LT_WRITE)
 #define is_read(a)   ((a) == LT_READ_N || (a) == LT_READ_R)
 #define is_read_r(a) ((a) == LT_READ_R)
+#define is_might(a)  ((a) == LT_MIGHT)
 
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
@@ -4827,7 +4843,7 @@ static inline struct lock_class *xlock_class(struct cross_lock *xlock)
  */
 static inline int depend_before(struct held_lock *hlock)
 {
-	return !is_read_r(hlock->read) && hlock->check && !hlock->trylock;
+	return !is_read_r(hlock->read) && !is_might(hlock->read) && hlock->check && !hlock->trylock;
 }
 
 /*
@@ -4835,7 +4851,7 @@ static inline int depend_before(struct held_lock *hlock)
  */
 static inline int depend_after(struct held_lock *hlock)
 {
-	return !is_read_r(hlock->read) && hlock->check;
+	return !is_read_r(hlock->read) && !is_might(hlock->read) && hlock->check;
 }
 
 /*
-- 
1.9.1


* [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt workqueue flush
From: Byungchul Park @ 2017-09-05  2:29 UTC (permalink / raw)
  To: tj, johannes.berg, peterz, mingo
  Cc: tglx, oleg, david, linux-kernel, kernel-team

The workqueue code added manual acquisitions to catch deadlock cases.
Now that crossrelease has been introduced, some of those are redundant,
because the crossrelease-enabled wait_for_completion() also catches
them. Remove them.

Also, lock_map_acquire() in process_one_work() is too strong for that
purpose; lock_map_acquire_might() is enough. Replace it.
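
Concretely, the pair removed from flush_work():

   lock_map_acquire(&work->lockdep_map);
   lock_map_release(&work->lockdep_map);

only existed to record a dependency from the flushing context onto the
work. With crossrelease, the wait_for_completion(&barr.done) below it
records the same dependency through the completion's lockdep_map, so
the pair is redundant.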

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
 kernel/workqueue.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ab3c0dc..8b728d1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2091,8 +2091,8 @@ static void process_one_work(struct worker *worker, struct work_struct *work)
 
 	spin_unlock_irq(&pool->lock);
 
-	lock_map_acquire(&pwq->wq->lockdep_map);
-	lock_map_acquire(&lockdep_map);
+	lock_map_acquire_might(&pwq->wq->lockdep_map);
+	lock_map_acquire_might(&lockdep_map);
 	/*
 	 * Strictly speaking we should mark the invariant state without holding
 	 * any locks, that is, before these two lock_map_acquire()'s.
@@ -2504,7 +2504,7 @@ static void insert_wq_barrier(struct pool_workqueue *pwq,
 	 */
 	lockdep_init_map_crosslock((struct lockdep_map *)&barr->done.map,
 				   "(complete)wq_barr::done",
-				   target->lockdep_map.key, 1);
+				   target->lockdep_map.key, 0);
 	__init_completion(&barr->done);
 	barr->task = current;
 
@@ -2611,16 +2611,17 @@ void flush_workqueue(struct workqueue_struct *wq)
 	struct wq_flusher this_flusher = {
 		.list = LIST_HEAD_INIT(this_flusher.list),
 		.flush_color = -1,
-		.done = COMPLETION_INITIALIZER_ONSTACK(this_flusher.done),
 	};
 	int next_color;
 
+	lockdep_init_map_crosslock((struct lockdep_map *)&this_flusher.done.map,
+				   "(complete)wq_flusher::done",
+				   wq->lockdep_map.key, 0);
+	__init_completion(&this_flusher.done);
+
 	if (WARN_ON(!wq_online))
 		return;
 
-	lock_map_acquire(&wq->lockdep_map);
-	lock_map_release(&wq->lockdep_map);
-
 	mutex_lock(&wq->mutex);
 
 	/*
@@ -2883,9 +2884,6 @@ bool flush_work(struct work_struct *work)
 	if (WARN_ON(!wq_online))
 		return false;
 
-	lock_map_acquire(&work->lockdep_map);
-	lock_map_release(&work->lockdep_map);
-
 	if (start_flush_work(work, &barr)) {
 		wait_for_completion(&barr.done);
 		destroy_work_on_stack(&barr.work);
-- 
1.9.1


* Re: [PATCH 2/3] lockdep: Introduce lock_acquire_might()
From: Peter Zijlstra @ 2017-09-05  7:22 UTC (permalink / raw)
  To: Byungchul Park
  Cc: tj, johannes.berg, mingo, tglx, oleg, david, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 11:29:13AM +0900, Byungchul Park wrote:
> From the point of view of crossrelease, we can never be aware of the
> release context in advance; we only learn it when we get to the
> lock_release(). However, this means we cannot report deadlocks at the
> time they occur.
> 
> Sometimes we want to report that kind of problem anyway, at the risk
> of generating false dependencies, e.g. the lock_acquire()s in the
> workqueue code, which inevitably generate false dependencies against
> all acquisitions performed inside works.
> 
> It would be better to provide another primitive, lock_acquire_might(),
> for that purpose, so that the lockdep internals know what users expect
> and get a chance to be enhanced to avoid the false dependencies.
> 
> The primitive should:
> 
>    1. work as if it were a trylock, since only the links between a
>       lock_acquire_might() and later acquisitions are meaningful.
>       Remember, this is meant to do in advance what the crossrelease
>       commit step does.
> 
>    2. have acquisitions made by lock_acquire_might() ignored on commit.
> 

Sheesh, talk about ugly... Also, might-lock has a different meaning.
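
(For context, the kernel already has might_lock()/might_lock_read(),
meaning "the caller might take this lock". Roughly, as of this era:

   #define might_lock(lock)						\
   do {									\
   	typecheck(struct lockdep_map *, &(lock)->dep_map);		\
   	lock_acquire(&(lock)->dep_map, 0, 0, 0, 1, NULL, _THIS_IP_);	\
   	lock_release(&(lock)->dep_map, 0, _THIS_IP_);			\
   } while (0)

so a new primitive named lock_acquire_might() collides with that
existing vocabulary.)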


* Re: [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt workqueue flush
From: Peter Zijlstra @ 2017-09-05  7:25 UTC (permalink / raw)
  To: Byungchul Park
  Cc: tj, johannes.berg, mingo, tglx, oleg, david, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 11:29:14AM +0900, Byungchul Park wrote:

> Also, lock_map_acquire() in process_one_work() is too strong for that
> purpose; lock_map_acquire_might() is enough. Replace it.

NAK!! Traditional annotations are superior to crossrelease. They are
not timing dependent.


* RE: [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt workqueue flush
From: Byungchul Park/Senior Researcher/SW Platform(Research)AOT Team (byungchul.park@lge.com) @ 2017-09-05  7:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tj, johannes.berg, mingo, tglx, oleg, david, linux-kernel, kernel-team

> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@infradead.org]
> Sent: Tuesday, September 05, 2017 4:26 PM
> To: Byungchul Park
> Cc: tj@kernel.org; johannes.berg@intel.com; mingo@kernel.org;
> tglx@linutronix.de; oleg@redhat.com; david@fromorbit.com; linux-
> kernel@vger.kernel.org; kernel-team@lge.com
> Subject: Re: [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt
> workqueue flush
> 
> On Tue, Sep 05, 2017 at 11:29:14AM +0900, Byungchul Park wrote:
> 
> > Also, lock_map_acquire() in process_one_work() is too strong for that
> > purpose; lock_map_acquire_might() is enough. Replace it.
> 
> NAK!! Traditional annotations are superior to crossrelease. They are
> not timing dependent.

You seem to misunderstand this. This also makes them timing
independent. I also agree that we need timing-independent reports in
the workqueue code; that's actually why I proposed this patch.

I just tried to do it in the right way.


* Re: [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt workqueue flush
From: Peter Zijlstra @ 2017-09-05  8:16 UTC (permalink / raw)
  To: Byungchul Park/Senior Researcher/SW
	Platform(Research)AOT Team (byungchul.park@lge.com)
  Cc: tj, johannes.berg, mingo, tglx, oleg, david, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 04:36:18PM +0900, Byungchul Park/Senior Researcher/SW Platform(Research)AOT Team (byungchul.park@lge.com) wrote:
> > -----Original Message-----
> > From: Peter Zijlstra [mailto:peterz@infradead.org]
> > Sent: Tuesday, September 05, 2017 4:26 PM
> > To: Byungchul Park
> > Cc: tj@kernel.org; johannes.berg@intel.com; mingo@kernel.org;
> > tglx@linutronix.de; oleg@redhat.com; david@fromorbit.com; linux-
> > kernel@vger.kernel.org; kernel-team@lge.com
> > Subject: Re: [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt
> > workqueue flush
> > 
> > On Tue, Sep 05, 2017 at 11:29:14AM +0900, Byungchul Park wrote:
> > 
> > > Also, lock_map_acquire() in process_one_work() is too strong for that
> > > purpose; lock_map_acquire_might() is enough. Replace it.
> > 
> > NAK!! Traditional annotations are superior to crossrelease. They are
> > not timing dependent.
> 
> You seem to misunderstand this. This also makes them timing
> independent. I also agree that we need timing-independent reports in
> the workqueue code; that's actually why I proposed this patch.

Then clearly it needs comments and a changelog to explain how it does
things.


* Re: [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt workqueue flush
From: Byungchul Park @ 2017-09-05  8:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tj, johannes.berg, mingo, tglx, oleg, david, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 04:36:18PM +0900, Byungchul Park/Senior Researcher/SW Platform(Research)AOT Team (byungchul.park@lge.com) wrote:
> > -----Original Message-----
> > From: Peter Zijlstra [mailto:peterz@infradead.org]
> > Sent: Tuesday, September 05, 2017 4:26 PM
> > To: Byungchul Park
> > Cc: tj@kernel.org; johannes.berg@intel.com; mingo@kernel.org;
> > tglx@linutronix.de; oleg@redhat.com; david@fromorbit.com; linux-
> > kernel@vger.kernel.org; kernel-team@lge.com
> > Subject: Re: [PATCH 3/3] lockdep: Remove unnecessary acquisitions wrt
> > workqueue flush
> > 
> > On Tue, Sep 05, 2017 at 11:29:14AM +0900, Byungchul Park wrote:
> > 
> > > Also, lock_map_acquire() in process_one_work() is too strong for that
> > > purpose; lock_map_acquire_might() is enough. Replace it.
> > 
> > NAK!! Traditional annotations are superior to crossrelease. They are
> > not timing dependent.
> 
> You seem to misunderstand this. This also makes them timing
> independent. I also agree that we need timing-independent reports in
> the workqueue code; that's actually why I proposed this patch.
> 
> I just tried to do it in the right way.

The insufficient comments I added seem to have led to a
misunderstanding of what lock_map_acquire_might() does. I will enhance
them. (And I might need to rename it to something better, as you
pointed out in another reply.)

I introduced it to give lockdep the hint "this is not an actual
acquisition but a pseudo one, informing lockdep that this context
might be the commit context", so that lockdep can report warnings in
real time, as the current code does.

This hint is useful for reporting runtime deadlocks in a
timing-independent way, but it does not need to be considered on
commit. And the hint should be as relaxed as possible, which is just
what I did.


* Re: [PATCH 2/3] lockdep: Introduce lock_acquire_might()
From: Byungchul Park @ 2017-09-12  0:35 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tj, johannes.berg, mingo, tglx, oleg, david, linux-kernel, kernel-team

On Tue, Sep 05, 2017 at 09:22:39AM +0200, Peter Zijlstra wrote:
> On Tue, Sep 05, 2017 at 11:29:13AM +0900, Byungchul Park wrote:
> > From the point of view of crossrelease, we can never be aware of the
> > release context in advance; we only learn it when we get to the
> > lock_release(). However, this means we cannot report deadlocks at the
> > time they occur.
> > 
> > Sometimes we want to report that kind of problem anyway, at the risk
> > of generating false dependencies, e.g. the lock_acquire()s in the
> > workqueue code, which inevitably generate false dependencies against
> > all acquisitions performed inside works.
> > 
> > It would be better to provide another primitive, lock_acquire_might(),
> > for that purpose, so that the lockdep internals know what users expect
> > and get a chance to be enhanced to avoid the false dependencies.
> > 
> > The primitive should:
> > 
> >    1. work as if it were a trylock, since only the links between a
> >       lock_acquire_might() and later acquisitions are meaningful.
> >       Remember, this is meant to do in advance what the crossrelease
> >       commit step does.
> > 
> >    2. have acquisitions made by lock_acquire_might() ignored on commit.
> > 
> 
> Sheesh, talk about ugly... Also, might-lock has a different meaning.

OK. The description should be modified; I think I failed to explain
what I intended. What do you think about the following, which I said in
another thread?

   If we use real acquisitions instead of 'might' for that speculative
   purpose, as the workqueue code currently does:

   (1) All locks used in every work->func() generate false dependencies
   against the 'work' lockdep_map and the 'wq' lockdep_map, even when
   no flush work is involved. But that's inevitable.

   (2) Moreover, it also generates more false dependencies between the
   real acquisitions. Of course, this would be avoidable if we forced
   only recursive-read to be used for that purpose, which is not the
   case for now.

   (3) Moreover, it also generates more false dependencies between the
   locks already held and the real acquisitions. Of course, the
   workqueue code is not such a case for now.

   (4) Moreover, it also generates more false dependencies between the
   real acquisitions and a crosslock on commit, once crossrelease
   becomes able to handle recursive-read things.
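
To illustrate (1) above with a minimal sketch (the names 'm' and
'work_fn' are hypothetical), suppose a work does:

   static void work_fn(struct work_struct *work)
   {
   	mutex_lock(&m);
   	/* ... */
   	mutex_unlock(&m);
   }

Since process_one_work() wraps every work invocation in
lock_map_acquire() on both the wq's and the work's lockdep_map,
lockdep records wq->lockdep_map -> m and work->lockdep_map -> m as
real dependencies, even if nothing ever flushes that workqueue or
work.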


* Re: [PATCH 2/3] lockdep: Introduce lock_acquire_might()
  To: Peter Zijlstra
  Cc: tj, johannes.berg, mingo, tglx, oleg, david, linux-kernel, kernel-team

On Tue, Sep 12, 2017 at 09:35:02AM +0900, Byungchul Park wrote:
> On Tue, Sep 05, 2017 at 09:22:39AM +0200, Peter Zijlstra wrote:
> > On Tue, Sep 05, 2017 at 11:29:13AM +0900, Byungchul Park wrote:
> > > From the point of view of crossrelease, we can never be aware of the
> > > release context in advance; we only learn it when we get to the
> > > lock_release(). However, this means we cannot report deadlocks at the
> > > time they occur.
> > > 
> > > Sometimes we want to report that kind of problem anyway, at the risk
> > > of generating false dependencies, e.g. the lock_acquire()s in the
> > > workqueue code, which inevitably generate false dependencies against
> > > all acquisitions performed inside works.
> > > 
> > > It would be better to provide another primitive, lock_acquire_might(),
> > > for that purpose, so that the lockdep internals know what users expect
> > > and get a chance to be enhanced to avoid the false dependencies.
> > > 
> > > The primitive should:
> > > 
> > >    1. work as if it were a trylock, since only the links between a
> > >       lock_acquire_might() and later acquisitions are meaningful.
> > >       Remember, this is meant to do in advance what the crossrelease
> > >       commit step does.
> > > 
> > >    2. have acquisitions made by lock_acquire_might() ignored on commit.
> > > 
> > 
> > Sheesh, talk about ugly... Also, might-lock has a different meaning.
> 
> OK. The description should be modified; I think I failed to explain
> what I intended. What do you think about the following, which I said in
> another thread?
> 
>    If we use real acquisitions instead of 'might' for that speculative
>    purpose, as the workqueue code currently does:
> 
>    (1) All locks used in every work->func() generate false dependencies
>    against the 'work' lockdep_map and the 'wq' lockdep_map, even when
>    no flush work is involved. But that's inevitable.
> 
>    (2) Moreover, it also generates more false dependencies between the
>    real acquisitions. Of course, this would be avoidable if we forced
>    only recursive-read to be used for that purpose, which is not the
>    case for now.
> 
>    (3) Moreover, it also generates more false dependencies between the
>    locks already held and the real acquisitions. Of course, the
>    workqueue code is not such a case for now.
> 
>    (4) Moreover, it also generates more false dependencies between the
>    real acquisitions and a crosslock on commit, once crossrelease
>    becomes able to handle recursive-read things.

Here, I want to discuss the 'might' thing. IMHO, the new primitive
should be used instead of recursive-read for the speculative
acquisitions used in e.g. the workqueue or smp/hotplug code.
