From: Byungchul Park <byungchul.park@lge.com>
To: tj@kernel.org, johannes.berg@intel.com, peterz@infradead.org, mingo@kernel.org
Cc: tglx@linutronix.de, oleg@redhat.com, david@fromorbit.com, linux-kernel@vger.kernel.org, kernel-team@lge.com
Subject: [PATCH 2/3] lockdep: Introduce lock_acquire_might()
Date: Tue, 5 Sep 2017 11:29:13 +0900
Message-Id: <1504578554-4137-3-git-send-email-byungchul.park@lge.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1504578554-4137-1-git-send-email-byungchul.park@lge.com>
References: <1504578554-4137-1-git-send-email-byungchul.park@lge.com>

From the point of view of crossrelease, we can never be aware of the
release context in advance; we only learn it once we reach
lock_release(). This means we cannot report deadlocks at the time they
actually occur. Sometimes we want to report that kind of problem, at
the risk of generating false dependencies, e.g. the manual
lock_acquire()s in the workqueue code, which inevitably create false
dependencies with all acquisitions performed within works.

It would be better to provide another primitive, lock_acquire_might(),
for that purpose, so that the lockdep internals can be aware of what
users expect and get the chance to be enhanced to avoid those false
dependencies. The primitive should:

1. Work as if it were a trylock, since only the links from a
   lock_acquire_might() to later acquisitions are meaningful. Bear in
   mind that this is meant to do, in advance, what a crossrelease
   commit does.

2. Make acquisitions performed via lock_acquire_might() ignored on
   commit.

TODO: Although it is worth reporting potential problems, we have to do
our best to avoid adding false dependencies into the graph, while the
current lockdep code (not only the crossrelease code) does nothing to
avoid them.
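For illustration only, here is a minimal sketch of what a user of the
new primitive could look like. The map and function names below are
hypothetical and not part of this patch; the real user would be
something like the manual acquisitions in the workqueue code mentioned
above:

	/* Hypothetical lockdep map; not part of this patch. */
	static struct lockdep_map sample_map =
		STATIC_LOCKDEP_MAP_INIT("sample_map", &sample_map);

	static void sample_flush(void)
	{
		/*
		 * Declare up front, like a trylock, that this context
		 * might acquire sample_map later, so that lockdep can
		 * link this acquisition to subsequent ones and report
		 * a deadlock now, instead of waiting until the actual
		 * lock_release() is reached as a crossrelease commit
		 * would.
		 */
		lock_map_acquire_might(&sample_map);
		lock_map_release(&sample_map);

		/* ... wait for users of sample_map to finish ... */
	}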
Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
 include/linux/lockdep.h  |  2 ++
 kernel/locking/lockdep.c | 20 ++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index bfa8e0b..68c806a 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -679,6 +679,7 @@ static inline void print_irqtrace_events(struct task_struct *curr)
 #define lock_acquire_exclusive(l, s, t, n, i)		lock_acquire(l, s, t, 0, 1, n, i)
 #define lock_acquire_shared(l, s, t, n, i)		lock_acquire(l, s, t, 1, 1, n, i)
 #define lock_acquire_shared_recursive(l, s, t, n, i)	lock_acquire(l, s, t, 2, 1, n, i)
+#define lock_acquire_might(l, s, t, n, i)		lock_acquire(l, s, t, 3, 1, n, i)
 
 #define spin_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
 #define spin_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
@@ -704,6 +705,7 @@ static inline void print_irqtrace_events(struct task_struct *curr)
 #define lock_map_acquire(l)			lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_read(l)		lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_tryread(l)		lock_acquire_shared_recursive(l, 0, 1, NULL, _THIS_IP_)
+#define lock_map_acquire_might(l)		lock_acquire_might(l, 0, 1, NULL, _THIS_IP_)
 #define lock_map_release(l)			lock_release(l, 1, _THIS_IP_)
 
 #ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0ac2f70..963c176 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -80,9 +80,25 @@
 #define LT_READ_N	1 /* 1: read non-recursion type */
 #define LT_READ_R	2 /* 2: read recursion type */
 
+/*
+ * This is used to detect deadlocks for crosslocks at the
+ * time they actually occur. For example, the workqueue code
+ * performs such acquisitions manually, at the risk of
+ * creating false dependencies. It would be better for
+ * lockdep to be aware of what users such as the workqueue
+ * code want in that case, and to get the chance to be
+ * enhanced to avoid false dependencies as far as possible.
+ *
+ * TODO: For now, this flag is only used to skip this kind
+ * of acquisition on commit. But adding these false ones
+ * into the graph itself should be avoided if possible.
+ */
+#define LT_MIGHT	3 /* 3: might acquire */
+
 #define is_write(a)	((a) == LT_WRITE)
 #define is_read(a)	((a) == LT_READ_N || (a) == LT_READ_R)
 #define is_read_r(a)	((a) == LT_READ_R)
+#define is_might(a)	((a) == LT_MIGHT)
 
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
@@ -4827,7 +4843,7 @@ static inline struct lock_class *xlock_class(struct cross_lock *xlock)
  */
 static inline int depend_before(struct held_lock *hlock)
 {
-	return !is_read_r(hlock->read) && hlock->check && !hlock->trylock;
+	return !is_read_r(hlock->read) && !is_might(hlock->read) && hlock->check && !hlock->trylock;
 }
 
 /*
@@ -4835,7 +4851,7 @@ static inline int depend_before(struct held_lock *hlock)
  */
 static inline int depend_after(struct held_lock *hlock)
 {
-	return !is_read_r(hlock->read) && hlock->check;
+	return !is_read_r(hlock->read) && !is_might(hlock->read) && hlock->check;
 }
 
 /*
-- 
1.9.1