* Q: lockdep_assert_held_read() after downgrade_write()
@ 2017-01-30 21:25 J. R. Okajima
  2017-01-30 21:30 ` Jens Axboe
  0 siblings, 1 reply; 20+ messages in thread
From: J. R. Okajima @ 2017-01-30 21:25 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel, axboe, darrick.wong, david

Peter Zijlstra,

May I ask you a question?
v4.10-rc1 got a commit
	f831948 2016-11-30 locking/lockdep: Provide a type check for lock_is_held
I've tested a little and lockdep splatted a stack trace.

{
	DECLARE_RWSEM(rw);
	static struct lock_class_key key;
	lockdep_set_class(&rw, &key);

	down_read(&rw);
	lockdep_assert_held_read(&rw);
	up_read(&rw);

	down_write(&rw);
	lockdep_assert_held_exclusive(&rw);
	up_write(&rw);

	downgrade_write(&rw);
	lockdep_assert_held_read(&rw);	<-- here
	up_read(&rw);
}

I was expecting lockdep_assert_held_read() to splat nothing after
downgrade_write(). Is this warning intentional behaviour?

The final up_read() gives me a warning too. It is produced at
	lockdep.c:3514:lock_release(): DEBUG_LOCKS_WARN_ON(depth <= 0)

As additional information, I have increased some lockdep constants.
Do you think this is related?

include/linux/lockdep.h
+#define MAX_LOCKDEP_SUBCLASSES		(8UL + 4)
+#define MAX_LOCKDEP_KEYS_BITS		(13 + 3)

kernel/locking/lockdep_internals.h
+#define MAX_LOCKDEP_ENTRIES	(32768UL << 5)
+#define MAX_LOCKDEP_CHAINS_BITS	(16 + 5)
+#define MAX_STACK_TRACE_ENTRIES	(524288UL << 5)


J. R. Okajima

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-30 21:25 Q: lockdep_assert_held_read() after downgrade_write() J. R. Okajima
@ 2017-01-30 21:30 ` Jens Axboe
  2017-01-31 10:36   ` Peter Zijlstra
  2017-01-31 15:40   ` J. R. Okajima
  0 siblings, 2 replies; 20+ messages in thread
From: Jens Axboe @ 2017-01-30 21:30 UTC (permalink / raw)
  To: J. R. Okajima, peterz; +Cc: linux-kernel, darrick.wong, david

On 01/30/2017 02:25 PM, J. R. Okajima wrote:
> Peter Zijlstra,
> 
> May I ask you a question?
> v4.10-rc1 got a commit
> 	f831948 2016-11-30 locking/lockdep: Provide a type check for lock_is_held
> I've tested a little and lockdep splatted a stack trace.
> 
> {
> 	DECLARE_RWSEM(rw);
> 	static struct lock_class_key key;
> 	lockdep_set_class(&rw, &key);
> 
> 	down_read(&rw);
> 	lockdep_assert_held_read(&rw);
> 	up_read(&rw);
> 
> 	down_write(&rw);
> 	lockdep_assert_held_exclusive(&rw);
> 	up_write(&rw);
> 
> 	downgrade_write(&rw);
> 	lockdep_assert_held_read(&rw);	<-- here
> 	up_read(&rw);
> }
> 
> I was expecting lockdep_assert_held_read() to splat nothing after
> downgrade_write(). Is this warning intentional behaviour?
> 
> The final up_read() gives me a warning too. It is produced at
> 	lockdep.c:3514:lock_release(): DEBUG_LOCKS_WARN_ON(depth <= 0)

I don't think you understand how it works. downgrade_write() turns a write
lock into a held read lock. To make that last sequence valid, you'd need:

	down_write(&rw);
	downgrade_write(&rw);
	lockdep_assert_held_read(&rw)
	up_read(&rw);

or just not do the up_write() before the last section.
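
For illustration, merging the middle and last sections of the original test
gives this valid sequence (a sketch; as the next reply notes, the read
assertion after a downgrade also needs the lockdep annotation added later
in this thread):

	down_write(&rw);
	lockdep_assert_held_exclusive(&rw);
	downgrade_write(&rw);	/* the write lock lives on as a held read lock */
	lockdep_assert_held_read(&rw);
	up_read(&rw);		/* one release only; the up_write() is gone */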

-- 
Jens Axboe

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-30 21:30 ` Jens Axboe
@ 2017-01-31 10:36   ` Peter Zijlstra
  2017-01-31 11:25     ` Peter Zijlstra
  2017-01-31 15:40   ` J. R. Okajima
  1 sibling, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2017-01-31 10:36 UTC (permalink / raw)
  To: Jens Axboe; +Cc: J. R. Okajima, linux-kernel, darrick.wong, david

On Mon, Jan 30, 2017 at 02:30:45PM -0700, Jens Axboe wrote:
> On 01/30/2017 02:25 PM, J. R. Okajima wrote:
> > Peter Zijlstra,
> > 
> > May I ask you a question?
> > v4.10-rc1 got a commit
> > 	f831948 2016-11-30 locking/lockdep: Provide a type check for lock_is_held
> > I've tested a little and lockdep splatted a stack trace.
> > 
> > {
> > 	DECLARE_RWSEM(rw);
> > 	static struct lock_class_key key;
> > 	lockdep_set_class(&rw, &key);
> > 
> > 	down_read(&rw);
> > 	lockdep_assert_held_read(&rw);
> > 	up_read(&rw);
> > 
> > 	down_write(&rw);
> > 	lockdep_assert_held_exclusive(&rw);
> > 	up_write(&rw);
> > 
> > 	downgrade_write(&rw);
> > 	lockdep_assert_held_read(&rw);	<-- here
> > 	up_read(&rw);
> > }
> > 
> > I was expecting lockdep_assert_held_read() to splat nothing after
> > downgrade_write(). Is this warning intentional behaviour?
> > 
> > The final up_read() gives me a warning too. It is produced at
> > 	lockdep.c:3514:lock_release(): DEBUG_LOCKS_WARN_ON(depth <= 0)
> 
> I don't think you understand how it works. downgrade_write() turns a write
> lock into a held read lock. To make that last sequence valid, you'd need:

Correct, and I'm surprised that didn't explode in different ways.

> 
> 	down_write(&rw);
> 	downgrade_write(&rw);
> 	lockdep_assert_held_read(&rw)
> 	up_read(&rw);
> 
> or just not do the up_write() before the last section.

Right, but also, there seems to be a missing lockdep annotation to make
that work. That is, downgrade_write() doesn't have a lockdep annotation,
so it (lockdep) will still think it's a write lock.


Let me try and fix both issues.

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-31 10:36   ` Peter Zijlstra
@ 2017-01-31 11:25     ` Peter Zijlstra
  2017-01-31 14:23       ` Waiman Long
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2017-01-31 11:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: J. R. Okajima, linux-kernel, darrick.wong, david, longman, dave

On Tue, Jan 31, 2017 at 11:36:20AM +0100, Peter Zijlstra wrote:
> On Mon, Jan 30, 2017 at 02:30:45PM -0700, Jens Axboe wrote:

> > I don't think you understand how it works. downgrade_write() turns a write
> > lock into a held read lock. To make that last sequence valid, you'd need:
> 
> Correct, and I'm surprised that didn't explode in different ways.
> 
> > 
> > 	down_write(&rw);
> > 	downgrade_write(&rw);
> > 	lockdep_assert_held_read(&rw)
> > 	up_read(&rw);
> > 
> > or just not do the up_write() before the last section.
> 
> Right, but also, there seems to be a missing lockdep annotation to make
> that work. That is, downgrade_write() doesn't have a lockdep annotation,
> so it (lockdep) will still think it's a write lock.
> 
> 
> Let me try and fix both issues.

Something like so, I suppose... completely untested.

There could be a good reason for the current lockdep behaviour, but I
cannot remember.

---
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 45ba475d4be3..dfa9e40f83d5 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -123,10 +123,9 @@ EXPORT_SYMBOL(up_write);
  */
 void downgrade_write(struct rw_semaphore *sem)
 {
-	/*
-	 * lockdep: a downgraded write will live on as a write
-	 * dependency.
-	 */
+	rwsem_release(&sem->dep_map, 1, _RET_IP_);
+	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
+
 	rwsem_set_reader_owned(sem);
 	__downgrade_write(sem);
 }
diff --git a/kernel/locking/rwsem.h b/kernel/locking/rwsem.h
index a699f4048ba1..3bd584c81b0b 100644
--- a/kernel/locking/rwsem.h
+++ b/kernel/locking/rwsem.h
@@ -40,8 +40,10 @@ static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
 	 * do a write to the rwsem cacheline when it is really necessary
 	 * to minimize cacheline contention.
 	 */
-	if (sem->owner != RWSEM_READER_OWNED)
+	if (sem->owner != RWSEM_READER_OWNED) {
+		WARN_ON_ONCE(sem->owner != current);
 		WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
+	}
 }
 
 static inline bool rwsem_owner_is_writer(struct task_struct *owner)

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-31 11:25     ` Peter Zijlstra
@ 2017-01-31 14:23       ` Waiman Long
  2017-01-31 14:26         ` Peter Zijlstra
  0 siblings, 1 reply; 20+ messages in thread
From: Waiman Long @ 2017-01-31 14:23 UTC (permalink / raw)
  To: Peter Zijlstra, Jens Axboe
  Cc: J. R. Okajima, linux-kernel, darrick.wong, david, dave

On 01/31/2017 06:25 AM, Peter Zijlstra wrote:
> On Tue, Jan 31, 2017 at 11:36:20AM +0100, Peter Zijlstra wrote:
>> On Mon, Jan 30, 2017 at 02:30:45PM -0700, Jens Axboe wrote:
>>> I don't think you understand how it works. downgrade_write() turns a write
>>> lock into a held read lock. To make that last sequence valid, you'd need:
>> Correct, and I'm surprised that didn't explode in different ways.
>>
>>> 	down_write(&rw);
>>> 	downgrade_write(&rw);
>>> 	lockdep_assert_held_read(&rw)
>>> 	up_read(&rw);
>>>
>>> or just not do the up_write() before the last section.
>> Right, but also, there seems to be a missing lockdep annotation to make
>> that work. That is, downgrade_write() doesn't have a lockdep annotation,
>> so it (lockdep) will still think it's a write lock.
>>
>>
>> Let me try and fix both issues.
> Something like so, I suppose... completely untested.
>
> There could be a good reason for the current lockdep behaviour, but I
> cannot remember.
>
> ---
> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> index 45ba475d4be3..dfa9e40f83d5 100644
> --- a/kernel/locking/rwsem.c
> +++ b/kernel/locking/rwsem.c
> @@ -123,10 +123,9 @@ EXPORT_SYMBOL(up_write);
>   */
>  void downgrade_write(struct rw_semaphore *sem)
>  {
> -	/*
> -	 * lockdep: a downgraded write will live on as a write
> -	 * dependency.
> -	 */
> +	rwsem_release(&sem->dep_map, 1, _RET_IP_);
> +	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
> +
>  	rwsem_set_reader_owned(sem);
>  	__downgrade_write(sem);
>  }
> diff --git a/kernel/locking/rwsem.h b/kernel/locking/rwsem.h
> index a699f4048ba1..3bd584c81b0b 100644
> --- a/kernel/locking/rwsem.h
> +++ b/kernel/locking/rwsem.h
> @@ -40,8 +40,10 @@ static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
>  	 * do a write to the rwsem cacheline when it is really necessary
>  	 * to minimize cacheline contention.
>  	 */
> -	if (sem->owner != RWSEM_READER_OWNED)
> +	if (sem->owner != RWSEM_READER_OWNED) {
> +		WARN_ON_ONCE(sem->owner != current);
>  		WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
> +	}
>  }
>  
>  static inline bool rwsem_owner_is_writer(struct task_struct *owner)

I don't think you can do a WARN_ON_ONCE() check for sem->owner !=
current here. If the rwsem starts from an unlocked state, sem->owner will
be NULL and an incorrect warning message will be printed.
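
A minimal sketch of a check that would tolerate that case (illustration
only; it assumes sem->owner is NULL for an rwsem that was not held at all):

	static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
	{
		if (sem->owner != RWSEM_READER_OWNED) {
			/*
			 * Warn only on a writer-to-reader handoff such as
			 * downgrade_write(); a NULL owner just means the
			 * rwsem was unlocked before this down_read().
			 */
			WARN_ON_ONCE(sem->owner && sem->owner != current);
			WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
		}
	}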

Cheers,
Longman

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-31 14:23       ` Waiman Long
@ 2017-01-31 14:26         ` Peter Zijlstra
  0 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2017-01-31 14:26 UTC (permalink / raw)
  To: Waiman Long
  Cc: Jens Axboe, J. R. Okajima, linux-kernel, darrick.wong, david, dave

On Tue, Jan 31, 2017 at 09:23:08AM -0500, Waiman Long wrote:
> On 01/31/2017 06:25 AM, Peter Zijlstra wrote:
> > @@ -40,8 +40,10 @@ static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
> >  	 * do a write to the rwsem cacheline when it is really necessary
> >  	 * to minimize cacheline contention.
> >  	 */
> > -	if (sem->owner != RWSEM_READER_OWNED)
> > +	if (sem->owner != RWSEM_READER_OWNED) {
> > +		WARN_ON_ONCE(sem->owner != current);
> >  		WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
> > +	}
> >  }
> >  
> >  static inline bool rwsem_owner_is_writer(struct task_struct *owner)
> 
> I don't think you can do a WARN_ON_ONCE() check for sem->owner !=
> current here. If the rwsem starts from an unlocked state, sem->owner will
> be NULL and an incorrect warning message will be printed.

Argh, I only looked at the downgrade_write() user and forgot to check
whether it was used elsewhere.

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-30 21:30 ` Jens Axboe
  2017-01-31 10:36   ` Peter Zijlstra
@ 2017-01-31 15:40   ` J. R. Okajima
  2017-01-31 16:32     ` Peter Zijlstra
  1 sibling, 1 reply; 20+ messages in thread
From: J. R. Okajima @ 2017-01-31 15:40 UTC (permalink / raw)
  To: Jens Axboe; +Cc: peterz, linux-kernel, darrick.wong, david

Jens Axboe:
> I don't think you understand how it works. downgrade_write() turns a write
> lock into a held read lock. To make that last sequence valid, you'd need:
>
> 	down_write(&rw);
> 	downgrade_write(&rw);
> 	lockdep_assert_held_read(&rw)
> 	up_read(&rw);
>
> or just not do the up_write() before the last section.

Argh...
It was my bonehead mistake to insert up_write() before
downgrade_write(). Sorry about that.
Fortunately Peter Zijlstra reviewed downgrade_write() and sent a
patch. Thank you; it passed my first test.

Now allow me to move on to the second test (based upon Peter's patch):

- two rwsem, rwA and rwB.
- the locking order is rwA first, and then rwB.
- good case
  down_read(rwA)
  down_read(rwB)
  up_read(rwB)
  up_read(rwA)

  down_write(rwA)
  down_write(rwB)
  up_write(rwB)
  up_write(rwA)

- questionable case
  down_write(rwA)
  down_write(rwB)
  downgrade_write(rwA)
  downgrade_write(rwB)
  up_read(rwB)
  up_read(rwA)

Do these two downgrade_write() calls have a strict ordering? If so, what
is it?
The two added lines
+	rwsem_release(&sem->dep_map, 1, _RET_IP_);
+	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
produce a traditional AB-BA deadlock warning, don't they?
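
Spelled out as code, the questionable case reads (an illustrative sketch,
assuming two ordinary statically initialized rwsems):

	static DECLARE_RWSEM(rwA);
	static DECLARE_RWSEM(rwB);

	down_write(&rwA);
	down_write(&rwB);	/* lockdep records the order rwA -> rwB */
	downgrade_write(&rwA);	/* a full release, then a read re-acquire of
				 * rwA while rwB is still held -- does this
				 * record the inverse order rwB -> rwA? */
	downgrade_write(&rwB);
	up_read(&rwB);
	up_read(&rwA);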


J. R. Okajima

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-31 15:40   ` J. R. Okajima
@ 2017-01-31 16:32     ` Peter Zijlstra
  2017-02-02 16:33       ` J. R. Okajima
  2017-02-02 16:38       ` [PATCH 1/3] lockdep: consolidate by new find_held_lock() J. R. Okajima
  0 siblings, 2 replies; 20+ messages in thread
From: Peter Zijlstra @ 2017-01-31 16:32 UTC (permalink / raw)
  To: J. R. Okajima; +Cc: Jens Axboe, linux-kernel, darrick.wong, david

On Wed, Feb 01, 2017 at 12:40:03AM +0900, J. R. Okajima wrote:

> Now allow me to move on to the second test (based upon Peter's patch):
> 
> - two rwsem, rwA and rwB.
> - the locking order is rwA first, and then rwB.
> - good case
>   down_read(rwA)
>   down_read(rwB)
>   up_read(rwB)
>   up_read(rwA)
> 
>   down_write(rwA)
>   down_write(rwB)
>   up_write(rwB)
>   up_write(rwA)
> 
> - questionable case
>   down_write(rwA)
>   down_write(rwB)
>   downgrade_write(rwA)
>   downgrade_write(rwB)
>   up_read(rwB)
>   up_read(rwA)
> 
> Do these two downgrade_write() calls have a strict ordering? If so, what
> is it?
> The two added lines
> +	rwsem_release(&sem->dep_map, 1, _RET_IP_);
> +	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
> produce a traditional AB-BA deadlock warning, don't they?

Blergh, yes, because we do a full release: lockdep then sees the read
re-acquisition of rwA happen while rwB is still held, which records the
inverted order.

Does something like the below work better? The annotation in
downgrade_write() would look something like:

+	lock_downgrade(&sem->dep_map, 1, _RET_IP_);

Not even compile tested and lacks the !LOCKDEP build bits.

---
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1e327bb..76cf149 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -361,6 +361,8 @@ static inline void lock_set_subclass(struct lockdep_map *lock,
 	lock_set_class(lock, lock->name, lock->key, subclass, ip);
 }
 
+extern void lock_downgrade(struct lockdep_map *lock, int read, unsigned long ip);
+
 extern void lockdep_set_current_reclaim_state(gfp_t gfp_mask);
 extern void lockdep_clear_current_reclaim_state(void);
 extern void lockdep_trace_alloc(gfp_t mask);
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7c38f8f..88517b6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3488,6 +3488,63 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	return 1;
 }
 
+static int __lock_downgrade(struct lockdep_map *lock, int read, unsigned long ip)
+{
+	struct task_struct *curr = current;
+	struct held_lock *hlock, *prev_hlock;
+	struct lock_class *class;
+	unsigned int depth;
+	int i;
+
+	depth = curr->lockdep_depth;
+	/*
+	 * This function is about (re)setting the class of a held lock,
+	 * yet we're not actually holding any locks. Naughty user!
+	 */
+	if (DEBUG_LOCKS_WARN_ON(!depth))
+		return 0;
+
+	prev_hlock = NULL;
+	for (i = depth-1; i >= 0; i--) {
+		hlock = curr->held_locks + i;
+		/*
+		 * We must not cross into another context:
+		 */
+		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
+			break;
+		if (match_held_lock(hlock, lock))
+			goto found_it;
+		prev_hlock = hlock;
+	}
+	return print_unlock_imbalance_bug(curr, lock, ip);
+
+found_it:
+	curr->lockdep_depth = i;
+	curr->curr_chain_key = hlock->prev_chain_key;
+
+	WARN(hlock->read, "downgrading a read lock");
+	hlock->read = read;
+	hlock->acquire_ip = ip;
+
+	for (; i < depth; i++) {
+		hlock = curr->held_locks + i;
+		if (!__lock_acquire(hlock->instance,
+			hlock_class(hlock)->subclass, hlock->trylock,
+				hlock->read, hlock->check, hlock->hardirqs_off,
+				hlock->nest_lock, hlock->acquire_ip,
+				hlock->references, hlock->pin_count))
+			return 0;
+	}
+
+	/*
+	 * I took it apart and put it back together again, except now I have
+	 * these 'spare' parts.. where shall I put them.
+	 */
+	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth))
+		return 0;
+	return 1;
+}
+
 /*
  * Remove the lock to the list of currently held locks - this gets
  * called on mutex_unlock()/spin_unlock*() (or on a failed
@@ -3732,6 +3789,23 @@ void lock_set_class(struct lockdep_map *lock, const char *name,
 }
 EXPORT_SYMBOL_GPL(lock_set_class);
 
+void lock_downgrade(struct lockdep_map *lock, int read, unsigned long ip)
+{
+	unsigned long flags;
+
+	if (unlikely(current->lockdep_recursion))
+		return;
+
+	raw_local_irq_save(flags);
+	current->lockdep_recursion = 1;
+	check_flags(flags);
+	if (__lock_downgrade(lock, read, ip))
+		check_chain_key(current);
+	current->lockdep_recursion = 0;
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lock_downgrade);
+
 /*
  * We are not always called with irqs disabled - do that here,
  * and also avoid lockdep recursion:

* Re: Q: lockdep_assert_held_read() after downgrade_write()
  2017-01-31 16:32     ` Peter Zijlstra
@ 2017-02-02 16:33       ` J. R. Okajima
  2017-02-02 16:38       ` [PATCH 1/3] lockdep: consolidate by new find_held_lock() J. R. Okajima
  1 sibling, 0 replies; 20+ messages in thread
From: J. R. Okajima @ 2017-02-02 16:33 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Jens Axboe, linux-kernel, darrick.wong, david

Peter Zijlstra:
> Does something like the below work better? The annotation in
> downgrade_write() would look something like:
>
> +	lock_downgrade(&sem->dep_map, 1, _RET_IP_);
>
> Not even compile tested and lacks the !LOCKDEP build bits.

Thanks for the patch.
It seems to work as expected. I had begun writing a similar patch locally,
with minor consolidations that add a new function or two, so I will send a
patch series. Please review and merge it into v4.10. If you don't like
the patches, especially the new function names, feel free to change them.
I don't know whether !LOCKDEP build bits are necessary or not.


J. R. Okajima

* [PATCH 1/3] lockdep: consolidate by new find_held_lock()
  2017-01-31 16:32     ` Peter Zijlstra
  2017-02-02 16:33       ` J. R. Okajima
@ 2017-02-02 16:38       ` J. R. Okajima
  2017-02-02 16:38         ` [PATCH 2/3] lockdep: consolidate by new validate_held_lock() J. R. Okajima
                           ` (2 more replies)
  1 sibling, 3 replies; 20+ messages in thread
From: J. R. Okajima @ 2017-02-02 16:38 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel

A simple consolidation. The behaviour should not change.

Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>
---
 kernel/locking/lockdep.c | 114 ++++++++++++++++++++++-------------------------
 1 file changed, 54 insertions(+), 60 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 85d9222..b7a2001 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3428,13 +3428,49 @@ static int match_held_lock(struct held_lock *hlock, struct lockdep_map *lock)
 	return 0;
 }
 
+/* @depth must not be zero */
+static struct held_lock *find_held_lock(struct task_struct *curr,
+					struct lockdep_map *lock,
+					unsigned int depth, int *idx)
+{
+	struct held_lock *ret, *hlock, *prev_hlock;
+	int i;
+
+	i = depth - 1;
+	hlock = curr->held_locks + i;
+	ret = hlock;
+	if (match_held_lock(hlock, lock))
+		goto out;
+
+	ret = NULL;
+	for (i--, prev_hlock = hlock--;
+	     i >= 0;
+	     i--, prev_hlock = hlock--) {
+		/*
+		 * We must not cross into another context:
+		 */
+		if (prev_hlock->irq_context != hlock->irq_context) {
+			ret = NULL;
+			break;
+		}
+		if (match_held_lock(hlock, lock)) {
+			ret = hlock;
+			break;
+		}
+	}
+
+out:
+	*idx = i;
+	return ret;
+}
+
 static int
 __lock_set_class(struct lockdep_map *lock, const char *name,
 		 struct lock_class_key *key, unsigned int subclass,
 		 unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	struct lock_class *class;
 	unsigned int depth;
 	int i;
@@ -3447,21 +3483,10 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	if (DEBUG_LOCKS_WARN_ON(!depth))
 		return 0;
 
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
-	}
-	return print_unlock_imbalance_bug(curr, lock, ip);
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock)
+		return print_unlock_imbalance_bug(curr, lock, ip);
 
-found_it:
 	lockdep_init_map(lock, name, key, 0);
 	class = register_lock_class(lock, subclass, 0);
 	hlock->class_idx = class - lock_classes + 1;
@@ -3499,7 +3524,7 @@ static int
 __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	unsigned int depth;
 	int i;
 
@@ -3518,21 +3543,10 @@ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 	 * Check whether the lock exists in the current stack
 	 * of held locks:
 	 */
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
-	}
-	return print_unlock_imbalance_bug(curr, lock, ip);
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock)
+		return print_unlock_imbalance_bug(curr, lock, ip);
 
-found_it:
 	if (hlock->instance == lock)
 		lock_release_holdtime(hlock);
 
@@ -3894,7 +3908,7 @@ static void
 __lock_contended(struct lockdep_map *lock, unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	struct lock_class_stats *stats;
 	unsigned int depth;
 	int i, contention_point, contending_point;
@@ -3907,22 +3921,12 @@ __lock_contended(struct lockdep_map *lock, unsigned long ip)
 	if (DEBUG_LOCKS_WARN_ON(!depth))
 		return;
 
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock) {
+		print_lock_contention_bug(curr, lock, ip);
+		return;
 	}
-	print_lock_contention_bug(curr, lock, ip);
-	return;
 
-found_it:
 	if (hlock->instance != lock)
 		return;
 
@@ -3946,7 +3950,7 @@ static void
 __lock_acquired(struct lockdep_map *lock, unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	struct lock_class_stats *stats;
 	unsigned int depth;
 	u64 now, waittime = 0;
@@ -3960,22 +3964,12 @@ __lock_acquired(struct lockdep_map *lock, unsigned long ip)
 	if (DEBUG_LOCKS_WARN_ON(!depth))
 		return;
 
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock) {
+		print_lock_contention_bug(curr, lock, _RET_IP_);
+		return;
 	}
-	print_lock_contention_bug(curr, lock, _RET_IP_);
-	return;
 
-found_it:
 	if (hlock->instance != lock)
 		return;
 
-- 
2.1.4

* [PATCH 2/3] lockdep: consolidate by new validate_held_lock()
  2017-02-02 16:38       ` [PATCH 1/3] lockdep: consolidate by new find_held_lock() J. R. Okajima
@ 2017-02-02 16:38         ` J. R. Okajima
  2017-02-14 12:09           ` Peter Zijlstra
  2017-03-16 11:25           ` [tip:locking/core] locking/lockdep: Factor out the validate_held_lock() helper function tip-bot for J. R. Okajima
  2017-02-02 16:38         ` [PATCH 3/3] lockdep: new annotation lock_downgrade() J. R. Okajima
  2017-03-16 11:24         ` [tip:locking/core] locking/lockdep: Factor out the find_held_lock() helper function tip-bot for J. R. Okajima
  2 siblings, 2 replies; 20+ messages in thread
From: J. R. Okajima @ 2017-02-02 16:38 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel

A simple consolidation. The behaviour should not change.

Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>
---
 kernel/locking/lockdep.c | 39 +++++++++++++++++++++------------------
 1 file changed, 21 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b7a2001..7dc8f8e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3464,6 +3464,23 @@ static struct held_lock *find_held_lock(struct task_struct *curr,
 	return ret;
 }
 
+static int validate_held_lock(struct task_struct *curr, unsigned int depth,
+			      int idx)
+{
+	struct held_lock *hlock;
+
+	for (hlock = curr->held_locks + idx; idx < depth; idx++, hlock++)
+		if (!__lock_acquire(hlock->instance,
+				    hlock_class(hlock)->subclass,
+				    hlock->trylock,
+				    hlock->read, hlock->check,
+				    hlock->hardirqs_off,
+				    hlock->nest_lock, hlock->acquire_ip,
+				    hlock->references, hlock->pin_count))
+			return 1;
+	return 0;
+}
+
 static int
 __lock_set_class(struct lockdep_map *lock, const char *name,
 		 struct lock_class_key *key, unsigned int subclass,
@@ -3494,15 +3511,8 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
 
-	for (; i < depth; i++) {
-		hlock = curr->held_locks + i;
-		if (!__lock_acquire(hlock->instance,
-			hlock_class(hlock)->subclass, hlock->trylock,
-				hlock->read, hlock->check, hlock->hardirqs_off,
-				hlock->nest_lock, hlock->acquire_ip,
-				hlock->references, hlock->pin_count))
-			return 0;
-	}
+	if (validate_held_lock(curr, depth, i))
+		return 0;
 
 	/*
 	 * I took it apart and put it back together again, except now I have
@@ -3573,15 +3583,8 @@ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
 
-	for (i++; i < depth; i++) {
-		hlock = curr->held_locks + i;
-		if (!__lock_acquire(hlock->instance,
-			hlock_class(hlock)->subclass, hlock->trylock,
-				hlock->read, hlock->check, hlock->hardirqs_off,
-				hlock->nest_lock, hlock->acquire_ip,
-				hlock->references, hlock->pin_count))
-			return 0;
-	}
+	if (validate_held_lock(curr, depth, i + 1))
+		return 0;
 
 	/*
 	 * We had N bottles of beer on the wall, we drank one, but now
-- 
2.1.4

* [PATCH 3/3] lockdep: new annotation lock_downgrade()
  2017-02-02 16:38       ` [PATCH 1/3] lockdep: consolidate by new find_held_lock() J. R. Okajima
  2017-02-02 16:38         ` [PATCH 2/3] lockdep: consolidate by new validate_held_lock() J. R. Okajima
@ 2017-02-02 16:38         ` J. R. Okajima
  2017-02-02 17:59           ` kbuild test robot
  2017-03-16 11:25           ` [tip:locking/core] locking/lockdep: Add new check to lock_downgrade() tip-bot for J. R. Okajima
  2017-03-16 11:24         ` [tip:locking/core] locking/lockdep: Factor out the find_held_lock() helper function tip-bot for J. R. Okajima
  2 siblings, 2 replies; 20+ messages in thread
From: J. R. Okajima @ 2017-02-02 16:38 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel

The commit
	f831948 2016-11-30 locking/lockdep: Provide a type check for lock_is_held
didn't fully support rwsem; downgrade_write() was left out. This makes
downgrade_write() support the added type check.

Originally-written-by: Peter Zijlstra <peterz@infradead.org>
See-also: http://marc.info/?l=linux-kernel&m=148581164003149&w=2
Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>
---
 include/linux/lockdep.h  |  2 ++
 kernel/locking/lockdep.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/locking/rwsem.c   |  6 ++----
 3 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0345cbf..22f304c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -361,6 +361,8 @@ static inline void lock_set_subclass(struct lockdep_map *lock,
 	lock_set_class(lock, lock->name, lock->key, subclass, ip);
 }
 
+extern void lock_downgrade(struct lockdep_map *lock, unsigned long ip);
+
 extern void lockdep_set_current_reclaim_state(gfp_t gfp_mask);
 extern void lockdep_clear_current_reclaim_state(void);
 extern void lockdep_trace_alloc(gfp_t mask);
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7dc8f8e..6a4a740 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3523,6 +3523,44 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	return 1;
 }
 
+static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+{
+	struct task_struct *curr = current;
+	struct held_lock *hlock;
+	unsigned int depth;
+	int i;
+
+	depth = curr->lockdep_depth;
+	/*
+	 * This function is about (re)setting the class of a held lock,
+	 * yet we're not actually holding any locks. Naughty user!
+	 */
+	if (DEBUG_LOCKS_WARN_ON(!depth))
+		return 0;
+
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock)
+		return print_unlock_imbalance_bug(curr, lock, ip);
+
+	curr->lockdep_depth = i;
+	curr->curr_chain_key = hlock->prev_chain_key;
+
+	WARN(hlock->read, "downgrading a read lock");
+	hlock->read = 1;
+	hlock->acquire_ip = ip;
+
+	if (validate_held_lock(curr, depth, i))
+		return 0;
+
+	/*
+	 * I took it apart and put it back together again, except now I have
+	 * these 'spare' parts.. where shall I put them.
+	 */
+	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth))
+		return 0;
+	return 1;
+}
+
 /*
  * Remove the lock to the list of currently held locks - this gets
  * called on mutex_unlock()/spin_unlock*() (or on a failed
@@ -3749,6 +3787,23 @@ void lock_set_class(struct lockdep_map *lock, const char *name,
 }
 EXPORT_SYMBOL_GPL(lock_set_class);
 
+void lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+{
+	unsigned long flags;
+
+	if (unlikely(current->lockdep_recursion))
+		return;
+
+	raw_local_irq_save(flags);
+	current->lockdep_recursion = 1;
+	check_flags(flags);
+	if (__lock_downgrade(lock, ip))
+		check_chain_key(current);
+	current->lockdep_recursion = 0;
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lock_downgrade);
+
 /*
  * We are not always called with irqs disabled - do that here,
  * and also avoid lockdep recursion:
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 45ba475..31db3ef 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -123,10 +123,8 @@ EXPORT_SYMBOL(up_write);
  */
 void downgrade_write(struct rw_semaphore *sem)
 {
-	/*
-	 * lockdep: a downgraded write will live on as a write
-	 * dependency.
-	 */
+	lock_downgrade(&sem->dep_map, _RET_IP_);
+
 	rwsem_set_reader_owned(sem);
 	__downgrade_write(sem);
 }
-- 
2.1.4

* Re: [PATCH 3/3] lockdep: new annotation lock_downgrade()
  2017-02-02 16:38         ` [PATCH 3/3] lockdep: new annotation lock_downgrade() J. R. Okajima
@ 2017-02-02 17:59           ` kbuild test robot
  2017-02-02 18:45             ` Peter Zijlstra
  2017-03-16 11:25           ` [tip:locking/core] locking/lockdep: Add new check to lock_downgrade() tip-bot for J. R. Okajima
  1 sibling, 1 reply; 20+ messages in thread
From: kbuild test robot @ 2017-02-02 17:59 UTC (permalink / raw)
  To: J. R. Okajima; +Cc: kbuild-all, peterz, linux-kernel

Hi Okajima,

[auto build test ERROR on tip/locking/core]
[also build test ERROR on v4.10-rc6 next-20170202]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/J-R-Okajima/lockdep-consolidate-by-new-find_held_lock/20170203-010303
config: x86_64-randconfig-x018-201705 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   kernel/locking/rwsem.c: In function 'downgrade_write':
>> kernel/locking/rwsem.c:126:2: error: implicit declaration of function 'lock_downgrade' [-Werror=implicit-function-declaration]
     lock_downgrade(&sem->dep_map, _RET_IP_);
     ^~~~~~~~~~~~~~
>> kernel/locking/rwsem.c:126:21: error: 'struct rw_semaphore' has no member named 'dep_map'
     lock_downgrade(&sem->dep_map, _RET_IP_);
                        ^~
   cc1: some warnings being treated as errors

vim +/lock_downgrade +126 kernel/locking/rwsem.c

   120	
   121	/*
   122	 * downgrade write lock to read lock
   123	 */
   124	void downgrade_write(struct rw_semaphore *sem)
   125	{
 > 126		lock_downgrade(&sem->dep_map, _RET_IP_);
   127	
   128		rwsem_set_reader_owned(sem);
   129		__downgrade_write(sem);

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 22012 bytes --]

* Re: [PATCH 3/3] lockdep: new annotation lock_downgrade()
  2017-02-02 17:59           ` kbuild test robot
@ 2017-02-02 18:45             ` Peter Zijlstra
  2017-02-02 21:05               ` J. R. Okajima
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2017-02-02 18:45 UTC (permalink / raw)
  To: kbuild test robot; +Cc: J. R. Okajima, kbuild-all, linux-kernel

On Fri, Feb 03, 2017 at 01:59:37AM +0800, kbuild test robot wrote:
> Hi Okajima,
> 
> [auto build test ERROR on tip/locking/core]
> [also build test ERROR on v4.10-rc6 next-20170202]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/J-R-Okajima/lockdep-consolidate-by-new-find_held_lock/20170203-010303
> config: x86_64-randconfig-x018-201705 (attached as .config)
> compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=x86_64 
> 
> All errors (new ones prefixed by >>):
> 
>    kernel/locking/rwsem.c: In function 'downgrade_write':
> >> kernel/locking/rwsem.c:126:2: error: implicit declaration of function 'lock_downgrade' [-Werror=implicit-function-declaration]
>      lock_downgrade(&sem->dep_map, _RET_IP_);
>      ^~~~~~~~~~~~~~
> >> kernel/locking/rwsem.c:126:21: error: 'struct rw_semaphore' has no member named 'dep_map'
>      lock_downgrade(&sem->dep_map, _RET_IP_);
>                         ^~
>    cc1: some warnings being treated as errors
> 
> vim +/lock_downgrade +126 kernel/locking/rwsem.c
> 
>    120	
>    121	/*
>    122	 * downgrade write lock to read lock
>    123	 */
>    124	void downgrade_write(struct rw_semaphore *sem)
>    125	{
>  > 126		lock_downgrade(&sem->dep_map, _RET_IP_);
>    127	
>    128		rwsem_set_reader_owned(sem);
>    129		__downgrade_write(sem);

This is what you need !LOCKDEP stubs for ;-)

* Re: [PATCH 3/3] lockdep: new annotation lock_downgrade()
  2017-02-02 18:45             ` Peter Zijlstra
@ 2017-02-02 21:05               ` J. R. Okajima
  2017-02-14 12:11                 ` Peter Zijlstra
  0 siblings, 1 reply; 20+ messages in thread
From: J. R. Okajima @ 2017-02-02 21:05 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: kbuild test robot, kbuild-all, linux-kernel

Peter Zijlstra:
> > >> kernel/locking/rwsem.c:126:2: error: implicit declaration of function 'lock_downgrade' [-Werror=implicit-function-declaration]
> >      lock_downgrade(&sem->dep_map, _RET_IP_);
> >      ^~~~~~~~~~~~~~
	:::
> This is what you need !LOCKDEP stubs for ;-)

Ok, here is the update.
Just one line added.


J. R. Okajima


commit 6874cbfb3c4f757efecbeb800bfd4db1050698f6
Author: J. R. Okajima <hooanon05g@gmail.com>
Date:   Thu Feb 2 23:41:38 2017 +0900

    lockdep: new annotation lock_downgrade()

    The commit
    	f831948 2016-11-30 locking/lockdep: Provide a type check for lock_is_held
    didn't fully support rwsem; downgrade_write() was left out. This makes
    downgrade_write() support the added type check.

    Originally-written-by: Peter Zijlstra <peterz@infradead.org>
    See-also: http://marc.info/?l=linux-kernel&m=148581164003149&w=2
    Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0345cbf..ba75d06 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -361,6 +361,8 @@ static inline void lock_set_subclass(struct lockdep_map *lock,
 	lock_set_class(lock, lock->name, lock->key, subclass, ip);
 }
 
+extern void lock_downgrade(struct lockdep_map *lock, unsigned long ip);
+
 extern void lockdep_set_current_reclaim_state(gfp_t gfp_mask);
 extern void lockdep_clear_current_reclaim_state(void);
 extern void lockdep_trace_alloc(gfp_t mask);
@@ -413,6 +415,7 @@ static inline void lockdep_on(void)
 
 # define lock_acquire(l, s, t, r, c, n, i)	do { } while (0)
 # define lock_release(l, n, i)			do { } while (0)
+# define lock_downgrade(l, i)			do { } while (0)
 # define lock_set_class(l, n, k, s, i)		do { } while (0)
 # define lock_set_subclass(l, s, i)		do { } while (0)
 # define lockdep_set_current_reclaim_state(g)	do { } while (0)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7dc8f8e..6a4a740 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3523,6 +3523,44 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	return 1;
 }
 
+static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+{
+	struct task_struct *curr = current;
+	struct held_lock *hlock;
+	unsigned int depth;
+	int i;
+
+	depth = curr->lockdep_depth;
+	/*
+	 * This function is about (re)setting the class of a held lock,
+	 * yet we're not actually holding any locks. Naughty user!
+	 */
+	if (DEBUG_LOCKS_WARN_ON(!depth))
+		return 0;
+
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock)
+		return print_unlock_imbalance_bug(curr, lock, ip);
+
+	curr->lockdep_depth = i;
+	curr->curr_chain_key = hlock->prev_chain_key;
+
+	WARN(hlock->read, "downgrading a read lock");
+	hlock->read = 1;
+	hlock->acquire_ip = ip;
+
+	if (validate_held_lock(curr, depth, i))
+		return 0;
+
+	/*
+	 * I took it apart and put it back together again, except now I have
+	 * these 'spare' parts.. where shall I put them.
+	 */
+	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth))
+		return 0;
+	return 1;
+}
+
 /*
  * Remove the lock to the list of currently held locks - this gets
  * called on mutex_unlock()/spin_unlock*() (or on a failed
@@ -3749,6 +3787,23 @@ void lock_set_class(struct lockdep_map *lock, const char *name,
 }
 EXPORT_SYMBOL_GPL(lock_set_class);
 
+void lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+{
+	unsigned long flags;
+
+	if (unlikely(current->lockdep_recursion))
+		return;
+
+	raw_local_irq_save(flags);
+	current->lockdep_recursion = 1;
+	check_flags(flags);
+	if (__lock_downgrade(lock, ip))
+		check_chain_key(current);
+	current->lockdep_recursion = 0;
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lock_downgrade);
+
 /*
  * We are not always called with irqs disabled - do that here,
  * and also avoid lockdep recursion:
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 45ba475..31db3ef 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -123,10 +123,8 @@ EXPORT_SYMBOL(up_write);
  */
 void downgrade_write(struct rw_semaphore *sem)
 {
-	/*
-	 * lockdep: a downgraded write will live on as a write
-	 * dependency.
-	 */
+	lock_downgrade(&sem->dep_map, _RET_IP_);
+
 	rwsem_set_reader_owned(sem);
 	__downgrade_write(sem);
 }

* Re: [PATCH 2/3] lockdep: consolidate by new validate_held_lock()
  2017-02-02 16:38         ` [PATCH 2/3] lockdep: consolidate by new validate_held_lock() J. R. Okajima
@ 2017-02-14 12:09           ` Peter Zijlstra
  2017-03-16 11:25           ` [tip:locking/core] locking/lockdep: Factor out the validate_held_lock() helper function tip-bot for J. R. Okajima
  1 sibling, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2017-02-14 12:09 UTC (permalink / raw)
  To: J. R. Okajima; +Cc: linux-kernel

On Fri, Feb 03, 2017 at 01:38:16AM +0900, J. R. Okajima wrote:
> A simple consolidation. The behaviour should not change.
> 
> Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>
> ---
>  kernel/locking/lockdep.c | 39 +++++++++++++++++++++------------------
>  1 file changed, 21 insertions(+), 18 deletions(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index b7a2001..7dc8f8e 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -3464,6 +3464,23 @@ static struct held_lock *find_held_lock(struct task_struct *curr,
>  	return ret;
>  }
>  
> +static int validate_held_lock(struct task_struct *curr, unsigned int depth,
> +			      int idx)
> +{
> +	struct held_lock *hlock;
> +
> +	for (hlock = curr->held_locks + idx; idx < depth; idx++, hlock++)
> +		if (!__lock_acquire(hlock->instance,
> +				    hlock_class(hlock)->subclass,
> +				    hlock->trylock,
> +				    hlock->read, hlock->check,
> +				    hlock->hardirqs_off,
> +				    hlock->nest_lock, hlock->acquire_ip,
> +				    hlock->references, hlock->pin_count))
> +			return 1;
> +	return 0;
> +}

I added the extra { } required by coding style and renamed the function
to reacquire_held_locks().

Plural because it has the loop, and reacquire because that is what it
does. Alternatively 'rebuild' is also possible if someone really doesn't
like reacquire.

* Re: [PATCH 3/3] lockdep: new annotation lock_downgrade()
  2017-02-02 21:05               ` J. R. Okajima
@ 2017-02-14 12:11                 ` Peter Zijlstra
  0 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2017-02-14 12:11 UTC (permalink / raw)
  To: J. R. Okajima; +Cc: kbuild test robot, kbuild-all, linux-kernel

On Fri, Feb 03, 2017 at 06:05:31AM +0900, J. R. Okajima wrote:
> Peter Zijlstra:
> > > >> kernel/locking/rwsem.c:126:2: error: implicit declaration of function 'lock_downgrade' [-Werror=implicit-function-declaration]
> > >      lock_downgrade(&sem->dep_map, _RET_IP_);
> > >      ^~~~~~~~~~~~~~
> 	:::
> > This is what you need !LOCKDEP stubs for ;-)
> 
> Ok, here is the update.
> Just one line added.

Thanks!

(this version was white-space mangled but I took the old one and added
the one line).

* [tip:locking/core] locking/lockdep: Factor out the find_held_lock() helper function
  2017-02-02 16:38       ` [PATCH 1/3] lockdep: consolidate by new find_held_lock() J. R. Okajima
  2017-02-02 16:38         ` [PATCH 2/3] lockdep: consolidate by new validate_held_lock() J. R. Okajima
  2017-02-02 16:38         ` [PATCH 3/3] lockdep: new annotation lock_downgrade() J. R. Okajima
@ 2017-03-16 11:24         ` tip-bot for J. R. Okajima
  2 siblings, 0 replies; 20+ messages in thread
From: tip-bot for J. R. Okajima @ 2017-03-16 11:24 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hooanon05g, linux-kernel, mingo, torvalds, paulmck, peterz, akpm,
	hpa, tglx

Commit-ID:  41c2c5b86a5e1a691ddacfc03b631b87a0b19043
Gitweb:     http://git.kernel.org/tip/41c2c5b86a5e1a691ddacfc03b631b87a0b19043
Author:     J. R. Okajima <hooanon05g@gmail.com>
AuthorDate: Fri, 3 Feb 2017 01:38:15 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 16 Mar 2017 09:57:06 +0100

locking/lockdep: Factor out the find_held_lock() helper function

A simple consolidation to factor out repeated patterns.

The behaviour should not change.

Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1486053497-9948-1-git-send-email-hooanon05g@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 114 ++++++++++++++++++++++-------------------------
 1 file changed, 54 insertions(+), 60 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a95e5d1..0d28b82 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3437,13 +3437,49 @@ static int match_held_lock(struct held_lock *hlock, struct lockdep_map *lock)
 	return 0;
 }
 
+/* @depth must not be zero */
+static struct held_lock *find_held_lock(struct task_struct *curr,
+					struct lockdep_map *lock,
+					unsigned int depth, int *idx)
+{
+	struct held_lock *ret, *hlock, *prev_hlock;
+	int i;
+
+	i = depth - 1;
+	hlock = curr->held_locks + i;
+	ret = hlock;
+	if (match_held_lock(hlock, lock))
+		goto out;
+
+	ret = NULL;
+	for (i--, prev_hlock = hlock--;
+	     i >= 0;
+	     i--, prev_hlock = hlock--) {
+		/*
+		 * We must not cross into another context:
+		 */
+		if (prev_hlock->irq_context != hlock->irq_context) {
+			ret = NULL;
+			break;
+		}
+		if (match_held_lock(hlock, lock)) {
+			ret = hlock;
+			break;
+		}
+	}
+
+out:
+	*idx = i;
+	return ret;
+}
+
 static int
 __lock_set_class(struct lockdep_map *lock, const char *name,
 		 struct lock_class_key *key, unsigned int subclass,
 		 unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	struct lock_class *class;
 	unsigned int depth;
 	int i;
@@ -3456,21 +3492,10 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	if (DEBUG_LOCKS_WARN_ON(!depth))
 		return 0;
 
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
-	}
-	return print_unlock_imbalance_bug(curr, lock, ip);
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock)
+		return print_unlock_imbalance_bug(curr, lock, ip);
 
-found_it:
 	lockdep_init_map(lock, name, key, 0);
 	class = register_lock_class(lock, subclass, 0);
 	hlock->class_idx = class - lock_classes + 1;
@@ -3508,7 +3533,7 @@ static int
 __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	unsigned int depth;
 	int i;
 
@@ -3527,21 +3552,10 @@ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 	 * Check whether the lock exists in the current stack
 	 * of held locks:
 	 */
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
-	}
-	return print_unlock_imbalance_bug(curr, lock, ip);
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock)
+		return print_unlock_imbalance_bug(curr, lock, ip);
 
-found_it:
 	if (hlock->instance == lock)
 		lock_release_holdtime(hlock);
 
@@ -3903,7 +3917,7 @@ static void
 __lock_contended(struct lockdep_map *lock, unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	struct lock_class_stats *stats;
 	unsigned int depth;
 	int i, contention_point, contending_point;
@@ -3916,22 +3930,12 @@ __lock_contended(struct lockdep_map *lock, unsigned long ip)
 	if (DEBUG_LOCKS_WARN_ON(!depth))
 		return;
 
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock) {
+		print_lock_contention_bug(curr, lock, ip);
+		return;
 	}
-	print_lock_contention_bug(curr, lock, ip);
-	return;
 
-found_it:
 	if (hlock->instance != lock)
 		return;
 
@@ -3955,7 +3959,7 @@ static void
 __lock_acquired(struct lockdep_map *lock, unsigned long ip)
 {
 	struct task_struct *curr = current;
-	struct held_lock *hlock, *prev_hlock;
+	struct held_lock *hlock;
 	struct lock_class_stats *stats;
 	unsigned int depth;
 	u64 now, waittime = 0;
@@ -3969,22 +3973,12 @@ __lock_acquired(struct lockdep_map *lock, unsigned long ip)
 	if (DEBUG_LOCKS_WARN_ON(!depth))
 		return;
 
-	prev_hlock = NULL;
-	for (i = depth-1; i >= 0; i--) {
-		hlock = curr->held_locks + i;
-		/*
-		 * We must not cross into another context:
-		 */
-		if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
-			break;
-		if (match_held_lock(hlock, lock))
-			goto found_it;
-		prev_hlock = hlock;
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock) {
+		print_lock_contention_bug(curr, lock, _RET_IP_);
+		return;
 	}
-	print_lock_contention_bug(curr, lock, _RET_IP_);
-	return;
 
-found_it:
 	if (hlock->instance != lock)
 		return;
 

* [tip:locking/core] locking/lockdep: Factor out the validate_held_lock() helper function
  2017-02-02 16:38         ` [PATCH 2/3] lockdep: consolidate by new validate_held_lock() J. R. Okajima
  2017-02-14 12:09           ` Peter Zijlstra
@ 2017-03-16 11:25           ` tip-bot for J. R. Okajima
  1 sibling, 0 replies; 20+ messages in thread
From: tip-bot for J. R. Okajima @ 2017-03-16 11:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, akpm, hooanon05g, torvalds, tglx, paulmck, hpa,
	linux-kernel, mingo

Commit-ID:  e969970be033841d4c16b2e8ec8a3608347db861
Gitweb:     http://git.kernel.org/tip/e969970be033841d4c16b2e8ec8a3608347db861
Author:     J. R. Okajima <hooanon05g@gmail.com>
AuthorDate: Fri, 3 Feb 2017 01:38:16 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 16 Mar 2017 09:57:07 +0100

locking/lockdep: Factor out the validate_held_lock() helper function

Behaviour should not change.

Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1486053497-9948-2-git-send-email-hooanon05g@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 40 ++++++++++++++++++++++------------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0d28b82..da79548 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3473,6 +3473,24 @@ out:
 	return ret;
 }
 
+static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
+			      int idx)
+{
+	struct held_lock *hlock;
+
+	for (hlock = curr->held_locks + idx; idx < depth; idx++, hlock++) {
+		if (!__lock_acquire(hlock->instance,
+				    hlock_class(hlock)->subclass,
+				    hlock->trylock,
+				    hlock->read, hlock->check,
+				    hlock->hardirqs_off,
+				    hlock->nest_lock, hlock->acquire_ip,
+				    hlock->references, hlock->pin_count))
+			return 1;
+	}
+	return 0;
+}
+
 static int
 __lock_set_class(struct lockdep_map *lock, const char *name,
 		 struct lock_class_key *key, unsigned int subclass,
@@ -3503,15 +3521,8 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
 
-	for (; i < depth; i++) {
-		hlock = curr->held_locks + i;
-		if (!__lock_acquire(hlock->instance,
-			hlock_class(hlock)->subclass, hlock->trylock,
-				hlock->read, hlock->check, hlock->hardirqs_off,
-				hlock->nest_lock, hlock->acquire_ip,
-				hlock->references, hlock->pin_count))
-			return 0;
-	}
+	if (reacquire_held_locks(curr, depth, i))
+		return 0;
 
 	/*
 	 * I took it apart and put it back together again, except now I have
@@ -3582,15 +3593,8 @@ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
 
-	for (i++; i < depth; i++) {
-		hlock = curr->held_locks + i;
-		if (!__lock_acquire(hlock->instance,
-			hlock_class(hlock)->subclass, hlock->trylock,
-				hlock->read, hlock->check, hlock->hardirqs_off,
-				hlock->nest_lock, hlock->acquire_ip,
-				hlock->references, hlock->pin_count))
-			return 0;
-	}
+	if (reacquire_held_locks(curr, depth, i + 1))
+		return 0;
 
 	/*
 	 * We had N bottles of beer on the wall, we drank one, but now

* [tip:locking/core] locking/lockdep: Add new check to lock_downgrade()
  2017-02-02 16:38         ` [PATCH 3/3] lockdep: new annotation lock_downgrade() J. R. Okajima
  2017-02-02 17:59           ` kbuild test robot
@ 2017-03-16 11:25           ` tip-bot for J. R. Okajima
  1 sibling, 0 replies; 20+ messages in thread
From: tip-bot for J. R. Okajima @ 2017-03-16 11:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, akpm, tglx, paulmck, linux-kernel, hpa, mingo,
	hooanon05g, peterz

Commit-ID:  6419c4af777a773a45a1b1af735de0fcd9a7dcc7
Gitweb:     http://git.kernel.org/tip/6419c4af777a773a45a1b1af735de0fcd9a7dcc7
Author:     J. R. Okajima <hooanon05g@gmail.com>
AuthorDate: Fri, 3 Feb 2017 01:38:17 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 16 Mar 2017 09:57:07 +0100

locking/lockdep: Add new check to lock_downgrade()

Commit:

  f8319483f57f ("locking/lockdep: Provide a type check for lock_is_held")

didn't fully cover rwsems as downgrade_write() was left out.

Introduce lock_downgrade() and use it to add new checks.

See-also: http://marc.info/?l=linux-kernel&m=148581164003149&w=2
Originally-written-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: J. R. Okajima <hooanon05g@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1486053497-9948-3-git-send-email-hooanon05g@gmail.com
[ Rewrote the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/lockdep.h  |  3 +++
 kernel/locking/lockdep.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/locking/rwsem.c   |  6 ++----
 3 files changed, 60 insertions(+), 4 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1e327bb..fffe49f 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -361,6 +361,8 @@ static inline void lock_set_subclass(struct lockdep_map *lock,
 	lock_set_class(lock, lock->name, lock->key, subclass, ip);
 }
 
+extern void lock_downgrade(struct lockdep_map *lock, unsigned long ip);
+
 extern void lockdep_set_current_reclaim_state(gfp_t gfp_mask);
 extern void lockdep_clear_current_reclaim_state(void);
 extern void lockdep_trace_alloc(gfp_t mask);
@@ -411,6 +413,7 @@ static inline void lockdep_on(void)
 
 # define lock_acquire(l, s, t, r, c, n, i)	do { } while (0)
 # define lock_release(l, n, i)			do { } while (0)
+# define lock_downgrade(l, i)			do { } while (0)
 # define lock_set_class(l, n, k, s, i)		do { } while (0)
 # define lock_set_subclass(l, s, i)		do { } while (0)
 # define lockdep_set_current_reclaim_state(g)	do { } while (0)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index da79548..b1a1cef 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3533,6 +3533,44 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	return 1;
 }
 
+static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+{
+	struct task_struct *curr = current;
+	struct held_lock *hlock;
+	unsigned int depth;
+	int i;
+
+	depth = curr->lockdep_depth;
+	/*
+	 * This function is about (re)setting the class of a held lock,
+	 * yet we're not actually holding any locks. Naughty user!
+	 */
+	if (DEBUG_LOCKS_WARN_ON(!depth))
+		return 0;
+
+	hlock = find_held_lock(curr, lock, depth, &i);
+	if (!hlock)
+		return print_unlock_imbalance_bug(curr, lock, ip);
+
+	curr->lockdep_depth = i;
+	curr->curr_chain_key = hlock->prev_chain_key;
+
+	WARN(hlock->read, "downgrading a read lock");
+	hlock->read = 1;
+	hlock->acquire_ip = ip;
+
+	if (reacquire_held_locks(curr, depth, i))
+		return 0;
+
+	/*
+	 * I took it apart and put it back together again, except now I have
+	 * these 'spare' parts.. where shall I put them.
+	 */
+	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth))
+		return 0;
+	return 1;
+}
+
 /*
  * Remove the lock to the list of currently held locks - this gets
  * called on mutex_unlock()/spin_unlock*() (or on a failed
@@ -3759,6 +3797,23 @@ void lock_set_class(struct lockdep_map *lock, const char *name,
 }
 EXPORT_SYMBOL_GPL(lock_set_class);
 
+void lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+{
+	unsigned long flags;
+
+	if (unlikely(current->lockdep_recursion))
+		return;
+
+	raw_local_irq_save(flags);
+	current->lockdep_recursion = 1;
+	check_flags(flags);
+	if (__lock_downgrade(lock, ip))
+		check_chain_key(current);
+	current->lockdep_recursion = 0;
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lock_downgrade);
+
 /*
  * We are not always called with irqs disabled - do that here,
  * and also avoid lockdep recursion:
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 90a74cc..4d48b1c 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -124,10 +124,8 @@ EXPORT_SYMBOL(up_write);
  */
 void downgrade_write(struct rw_semaphore *sem)
 {
-	/*
-	 * lockdep: a downgraded write will live on as a write
-	 * dependency.
-	 */
+	lock_downgrade(&sem->dep_map, _RET_IP_);
+
 	rwsem_set_reader_owned(sem);
 	__downgrade_write(sem);
 }
