* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
       [not found] <56b11d2d.vVw1kB2la7Y+70xF%akpm@linux-foundation.org>
@ 2016-02-03  7:44 ` Ingo Molnar
  2016-02-03  9:42   ` Andrey Ryabinin
  2016-02-03 16:51   ` Andrew Morton
  0 siblings, 2 replies; 14+ messages in thread
From: Ingo Molnar @ 2016-02-03  7:44 UTC (permalink / raw)
  To: akpm
  Cc: aryabinin, krinkin.m.u, mingo, peterz, mm-commits, linux-kernel,
	Peter Zijlstra, Thomas Gleixner


* akpm@linux-foundation.org <akpm@linux-foundation.org> wrote:

> 
> The patch titled
>      Subject: kernel/locking/lockdep.c: make lockdep initialize itself on demand
> has been added to the -mm tree.  Its filename is
>      kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch
> 
> This patch should soon appear at
>     http://ozlabs.org/~akpm/mmots/broken-out/kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch
> and later at
>     http://ozlabs.org/~akpm/mmotm/broken-out/kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch
> 
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
> 
> *** Remember to use Documentation/SubmitChecklist when testing your code ***
> 
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
> 
> ------------------------------------------------------
> From: Andrew Morton <akpm@linux-foundation.org>
> Subject: kernel/locking/lockdep.c: make lockdep initialize itself on demand
> 
> Mike said:
> 
> : CONFIG_UBSAN_ALIGNMENT breaks x86-64 kernel with lockdep enabled, i.  e
> : kernel with CONFIG_UBSAN_ALIGNMENT fails to load without even any error
> : message.
> : 
> : The problem is that ubsan callbacks use spinlocks and might be called
> : before lockdep is initialized.  Particularly this line in the
> : reserve_ebda_region function causes problem:
> : 
> : lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
> : 
> : If i put lockdep_init() before reserve_ebda_region call in
> : x86_64_start_reservations kernel loads well.
> 
> Fix this ordering issue permanently: change lockdep so that it ensures
> that the hash tables are initialized when they are about to be used.
> 
> The overhead will be pretty small: a test-n-branch in places where lockdep
> is about to do a lot of work anyway.
> 
> Possibly lockdep_initialized should be made __read_mostly.
> 
> A better fix would be to simply initialize these (32768 entry) arrays of
> empty list_heads at compile time, but I don't think there's a way of
> teaching gcc to do this.
> 
> We could write a little script which, at compile time, emits a file
> containing
> 
> 	[0] = LIST_HEAD_INIT(__chainhash_table[0]),
> 	[1] = LIST_HEAD_INIT(__chainhash_table[1]),
> 	...
> 	[32767] = LIST_HEAD_INIT(__chainhash_table[32767]),
> 
> and then #include this file into lockdep.c.  Sounds like a lot of fuss.
> 
> Reported-by: Mike Krinkin <krinkin.m.u@gmail.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
> 
>  kernel/locking/lockdep.c |   35 ++++++++++++++++++++++++++---------
>  1 file changed, 26 insertions(+), 9 deletions(-)
> 
> diff -puN kernel/locking/lockdep.c~kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand kernel/locking/lockdep.c
> --- a/kernel/locking/lockdep.c~kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand
> +++ a/kernel/locking/lockdep.c
> @@ -290,9 +290,20 @@ LIST_HEAD(all_lock_classes);
>  #define CLASSHASH_BITS		(MAX_LOCKDEP_KEYS_BITS - 1)
>  #define CLASSHASH_SIZE		(1UL << CLASSHASH_BITS)
>  #define __classhashfn(key)	hash_long((unsigned long)key, CLASSHASH_BITS)
> -#define classhashentry(key)	(classhash_table + __classhashfn((key)))
>  
> -static struct list_head classhash_table[CLASSHASH_SIZE];
> +static struct list_head __classhash_table[CLASSHASH_SIZE];
> +
> +static inline struct list_head *get_classhash_table(void)
> +{
> +	if (unlikely(!lockdep_initialized))
> +		lockdep_init();
> +	return __classhash_table;
> +}
> +
> +static inline struct list_head *classhashentry(struct lockdep_subclass_key *key)
> +{
> +	return get_classhash_table() + __classhashfn(key);
> +}
>  
>  /*
>   * We put the lock dependency chains into a hash-table as well, to cache
> @@ -301,9 +312,15 @@ static struct list_head classhash_table[
>  #define CHAINHASH_BITS		(MAX_LOCKDEP_CHAINS_BITS-1)
>  #define CHAINHASH_SIZE		(1UL << CHAINHASH_BITS)
>  #define __chainhashfn(chain)	hash_long(chain, CHAINHASH_BITS)
> -#define chainhashentry(chain)	(chainhash_table + __chainhashfn((chain)))
>  
> -static struct list_head chainhash_table[CHAINHASH_SIZE];
> +static struct list_head __chainhash_table[CHAINHASH_SIZE];
> +
> +static struct list_head *chainhashentry(unsigned long chain)
> +{
> +	if (unlikely(!lockdep_initialized))
> +		lockdep_init();
> +	return __chainhash_table + __chainhashfn(chain);
> +}

Yuck, I don't really like this.

Lockdep initialization must happen early on, and it should happen in a well 
defined place, not be opportunistic (and relatively random) like this, making it 
dependent on config options and calling contexts.

Also, in addition to properly ordering UBSAN initialization, how about fixing the 
silent crash by adding a lockdep warning to that place instead of an auto-init? 

The warning will turn lockdep off safely and will generate an actionable kernel 
message and stackdump upon which the init ordering fix can be done.
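
Something like this, roughly (just a sketch, not a tested patch - the
error-tracking variables are the ones the existing CONFIG_DEBUG_LOCKDEP
code already has):

	if (unlikely(!lockdep_initialized)) {
		/* Turn lockdep off safely instead of auto-initializing: */
		debug_locks_off();
		/* Record the error so lockdep_info() can print an actionable
		 * message plus a stackdump once printk() works: */
		lockdep_init_error = 1;
		lock_init_error = lock->name;
		save_stack_trace(&lockdep_init_trace);
		return NULL;
	}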

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-03  7:44 ` + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree Ingo Molnar
@ 2016-02-03  9:42   ` Andrey Ryabinin
  2016-02-03  9:49     ` Ingo Molnar
  2016-02-03 16:51   ` Andrew Morton
  1 sibling, 1 reply; 14+ messages in thread
From: Andrey Ryabinin @ 2016-02-03  9:42 UTC (permalink / raw)
  To: Ingo Molnar, akpm
  Cc: krinkin.m.u, mingo, peterz, mm-commits, linux-kernel,
	Peter Zijlstra, Thomas Gleixner



On 02/03/2016 10:44 AM, Ingo Molnar wrote:

> Yuck, I don't really like this.
> 
> Lockdep initialization must happen early on, and it should happen in a well 
> defined place, not be opportunistic (and relatively random) like this, making it 
> dependent on config options and calling contexts.
> 
> Also, in addition to properly ordering UBSAN initialization, how about fixing the 
> silent crash by adding a lockdep warning to that place instead of an auto-init? 
> 
> The warning will turn lockdep off safely and will generate an actionable kernel 
> message and stackdump upon which the init ordering fix can be done.
> 

Something like this is already done for DEBUG_LOCKDEP=y (except it initializes lockdep instead of turning it off).

look_up_lock_class():
...
	#ifdef CONFIG_DEBUG_LOCKDEP
		/*
		 * If the architecture calls into lockdep before initializing
		 * the hashes then we'll warn about it later. (we cannot printk
		 * right now)
		 */
		if (unlikely(!lockdep_initialized)) {
			lockdep_init();
			lockdep_init_error = 1;
			lock_init_error = lock->name;
			save_stack_trace(&lockdep_init_trace);
		}
	#endif


Silent crash happens only in DEBUG_LOCKDEP=n && LOCKDEP=y combination.
So, what about simply removing this #ifdef (and the other one in lockdep_info() )?

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-03  9:42   ` Andrey Ryabinin
@ 2016-02-03  9:49     ` Ingo Molnar
  0 siblings, 0 replies; 14+ messages in thread
From: Ingo Molnar @ 2016-02-03  9:49 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: akpm, krinkin.m.u, mingo, peterz, mm-commits, linux-kernel,
	Peter Zijlstra, Thomas Gleixner


* Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:

> 
> 
> On 02/03/2016 10:44 AM, Ingo Molnar wrote:
> 
> > Yuck, I don't really like this.
> > 
> > Lockdep initialization must happen early on, and it should happen in a well 
> > defined place, not be opportunistic (and relatively random) like this, making it 
> > dependent on config options and calling contexts.
> > 
> > Also, in addition to properly ordering UBSAN initialization, how about fixing the 
> > silent crash by adding a lockdep warning to that place instead of an auto-init? 
> > 
> > The warning will turn lockdep off safely and will generate an actionable kernel 
> > message and stackdump upon which the init ordering fix can be done.
> > 
> 
> Something like this is already done for DEBUG_LOCKDEP=y (except it initializes lockdep instead of turning it off).
> 
> look_up_lock_class():
> ...
> 	#ifdef CONFIG_DEBUG_LOCKDEP
> 		/*
> 		 * If the architecture calls into lockdep before initializing
> 		 * the hashes then we'll warn about it later. (we cannot printk
> 		 * right now)
> 		 */
> 		if (unlikely(!lockdep_initialized)) {
> 			lockdep_init();
> 			lockdep_init_error = 1;
> 			lock_init_error = lock->name;
> 			save_stack_trace(&lockdep_init_trace);
> 		}
> 	#endif

well, this is different, as we still generate an error - so it's not a 'permanent 
solution' as the changelog says.

> Silent crash happens only in DEBUG_LOCKDEP=n && LOCKDEP=y combination.
> So, what about simply removing this #ifdef (and the other one in lockdep_info() )?

That's fine with me, as long as we also fix the init bug that triggers this code.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-03  7:44 ` + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree Ingo Molnar
  2016-02-03  9:42   ` Andrey Ryabinin
@ 2016-02-03 16:51   ` Andrew Morton
  2016-02-03 20:40     ` Andrew Morton
  2016-02-09 11:12     ` Ingo Molnar
  1 sibling, 2 replies; 14+ messages in thread
From: Andrew Morton @ 2016-02-03 16:51 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: aryabinin, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner

On Wed, 3 Feb 2016 08:44:30 +0100 Ingo Molnar <mingo@kernel.org> wrote:

> > Mike said:
> > 
> > : CONFIG_UBSAN_ALIGNMENT breaks x86-64 kernel with lockdep enabled, i.  e
> > : kernel with CONFIG_UBSAN_ALIGNMENT fails to load without even any error
> > : message.
> > : 
> > : The problem is that ubsan callbacks use spinlocks and might be called
> > : before lockdep is initialized.  Particularly this line in the
> > : reserve_ebda_region function causes problem:
> > : 
> > : lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
> > : 
> > : If i put lockdep_init() before reserve_ebda_region call in
> > : x86_64_start_reservations kernel loads well.
> > 
> > Fix this ordering issue permanently: change lockdep so that it ensures
> > that the hash tables are initialized when they are about to be used.
> > 
> > The overhead will be pretty small: a test-n-branch in places where lockdep
> > is about to do a lot of work anyway.
> > 
> > Possibly lockdep_initialized should be made __read_mostly.
> > 
> > A better fix would be to simply initialize these (32768 entry) arrays of
> > empty list_heads at compile time, but I don't think there's a way of
> > teaching gcc to do this.
> > 
> > We could write a little script which, at compile time, emits a file
> > containing
> > 
> > 	[0] = LIST_HEAD_INIT(__chainhash_table[0]),
> > 	[1] = LIST_HEAD_INIT(__chainhash_table[1]),
> > 	...
> > 	[32767] = LIST_HEAD_INIT(__chainhash_table[32767]),
> > 
> > and then #include this file into lockdep.c.  Sounds like a lot of fuss.
> > 
> 
> ...
>
> Yuck, I don't really like this.
> 
> Lockdep initialization must happen early on,

It should happen at compile time.

> and it should happen in a well 
> defined place, not be opportunistic (and relatively random) like this, making it 
> dependent on config options and calling contexts.

That's an unusable assertion, sorry.

Initializing lockdep in the above manner guarantees that it is initialized
before it is used.  It is *much* more reliable than "try to initialize
it before some piece of code which hasn't even been written yet tries
to take a lock".

The conceptual problem is that if some piece of code does
spin_lock_init() or DEFINE_SPINLOCK(), that lock isn't necessarily
initialized yet.
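
For example (a minimal illustration only - the function and lock names
below are made up, they are not from any real caller):

	#include <linux/spinlock.h>
	#include <linux/init.h>

	static DEFINE_SPINLOCK(early_lock);	/* statically "initialized" lock */

	void __init some_early_setup(void)	/* hypothetical: runs before
						 * start_kernel() reaches lockdep_init() */
	{
		spin_lock(&early_lock);		/* with CONFIG_LOCKDEP this touches the
						 * class hash tables, which currently
						 * need lockdep_init() to have run */
		spin_unlock(&early_lock);
	}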

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-03 16:51   ` Andrew Morton
@ 2016-02-03 20:40     ` Andrew Morton
  2016-02-03 21:44       ` Andrew Morton
  2016-02-09 11:12     ` Ingo Molnar
  1 sibling, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2016-02-03 20:40 UTC (permalink / raw)
  To: Ingo Molnar, aryabinin, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner

On Wed, 3 Feb 2016 08:51:11 -0800 Andrew Morton <akpm@linux-foundation.org> wrote:

> > Lockdep initialization must happen early on,
> 
> It should happen at compile time.

Mike asked "but why not just use hlist".  And indeed I think that fixes
the problem because hlist_heads *are* initialized at compile time.  For
them, NULL is the empty state.
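
To spell that out (illustration only - the struct layouts are the ones from
<linux/types.h>, the table names and sizes are made up):

	struct list_head  { struct list_head *next, *prev; };	  /* empty: points to itself */
	struct hlist_node { struct hlist_node *next, **pprev; };
	struct hlist_head { struct hlist_node *first; };	  /* empty: first == NULL */

	static struct list_head  lh_table[4096];   /* all-zeroes is NOT a valid empty
						    * list - every bucket still needs
						    * INIT_LIST_HEAD() at runtime */
	static struct hlist_head hh_table[4096];   /* all-zeroes is already a valid,
						    * empty hash table at boot */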

This compiles.  Probably we're now doing some unnecessary initialization
with this approach - lockdep_init() and lockdep_initialized can simply
be zapped, I think?

Plus of course we save a buncha memory.

--- a/kernel/locking/lockdep.c~a
+++ a/kernel/locking/lockdep.c
@@ -292,7 +292,7 @@ LIST_HEAD(all_lock_classes);
 #define __classhashfn(key)	hash_long((unsigned long)key, CLASSHASH_BITS)
 #define classhashentry(key)	(classhash_table + __classhashfn((key)))
 
-static struct list_head classhash_table[CLASSHASH_SIZE];
+static struct hlist_head classhash_table[CLASSHASH_SIZE];
 
 /*
  * We put the lock dependency chains into a hash-table as well, to cache
@@ -303,7 +303,7 @@ static struct list_head classhash_table[
 #define __chainhashfn(chain)	hash_long(chain, CHAINHASH_BITS)
 #define chainhashentry(chain)	(chainhash_table + __chainhashfn((chain)))
 
-static struct list_head chainhash_table[CHAINHASH_SIZE];
+static struct hlist_head chainhash_table[CHAINHASH_SIZE];
 
 /*
  * The hash key of the lock dependency chains is a hash itself too:
@@ -666,7 +666,7 @@ static inline struct lock_class *
 look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 {
 	struct lockdep_subclass_key *key;
-	struct list_head *hash_head;
+	struct hlist_head *hash_head;
 	struct lock_class *class;
 
 #ifdef CONFIG_DEBUG_LOCKDEP
@@ -719,7 +719,7 @@ look_up_lock_class(struct lockdep_map *l
 	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
 		return NULL;
 
-	list_for_each_entry_rcu(class, hash_head, hash_entry) {
+	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key) {
 			/*
 			 * Huh! same key, different name? Did someone trample
@@ -742,7 +742,7 @@ static inline struct lock_class *
 register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 {
 	struct lockdep_subclass_key *key;
-	struct list_head *hash_head;
+	struct hlist_head *hash_head;
 	struct lock_class *class;
 
 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
@@ -774,7 +774,7 @@ register_lock_class(struct lockdep_map *
 	 * We have to do the hash-walk again, to avoid races
 	 * with another CPU:
 	 */
-	list_for_each_entry_rcu(class, hash_head, hash_entry) {
+	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key)
 			goto out_unlock_set;
 	}
@@ -805,7 +805,7 @@ register_lock_class(struct lockdep_map *
 	 * We use RCU's safe list-add method to make
 	 * parallel walking of the hash-list safe:
 	 */
-	list_add_tail_rcu(&class->hash_entry, hash_head);
+	hlist_add_head_rcu(&class->hash_entry, hash_head);
 	/*
 	 * Add it to the global list of classes:
 	 */
@@ -2017,7 +2017,7 @@ static inline int lookup_chain_cache(str
 				     u64 chain_key)
 {
 	struct lock_class *class = hlock_class(hlock);
-	struct list_head *hash_head = chainhashentry(chain_key);
+	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 	struct held_lock *hlock_curr;
 	int i, j;
@@ -2033,7 +2033,7 @@ static inline int lookup_chain_cache(str
 	 * We can walk it lock-free, because entries only get added
 	 * to the hash:
 	 */
-	list_for_each_entry_rcu(chain, hash_head, entry) {
+	hlist_for_each_entry_rcu(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 cache_hit:
 			debug_atomic_inc(chain_lookup_hits);
@@ -2057,7 +2057,7 @@ cache_hit:
 	/*
 	 * We have to walk the chain again locked - to avoid duplicates:
 	 */
-	list_for_each_entry(chain, hash_head, entry) {
+	hlist_for_each_entry(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 			graph_unlock();
 			goto cache_hit;
@@ -2091,7 +2091,7 @@ cache_hit:
 		}
 		chain_hlocks[chain->base + j] = class - lock_classes;
 	}
-	list_add_tail_rcu(&chain->entry, hash_head);
+	hlist_add_head_rcu(&chain->entry, hash_head);
 	debug_atomic_inc(chain_lookup_misses);
 	inc_chains();
 
@@ -3875,7 +3875,7 @@ void lockdep_reset(void)
 	nr_process_chains = 0;
 	debug_locks = 1;
 	for (i = 0; i < CHAINHASH_SIZE; i++)
-		INIT_LIST_HEAD(chainhash_table + i);
+		INIT_HLIST_HEAD(chainhash_table + i);
 	raw_local_irq_restore(flags);
 }
 
@@ -3894,7 +3894,7 @@ static void zap_class(struct lock_class
 	/*
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
-	list_del_rcu(&class->hash_entry);
+	hlist_del_rcu(&class->hash_entry);
 	list_del_rcu(&class->lock_entry);
 
 	RCU_INIT_POINTER(class->key, NULL);
@@ -3917,7 +3917,7 @@ static inline int within(const void *add
 void lockdep_free_key_range(void *start, unsigned long size)
 {
 	struct lock_class *class;
-	struct list_head *head;
+	struct hlist_head *head;
 	unsigned long flags;
 	int i;
 	int locked;
@@ -3930,9 +3930,9 @@ void lockdep_free_key_range(void *start,
 	 */
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
-		if (list_empty(head))
+		if (!head)
 			continue;
-		list_for_each_entry_rcu(class, head, hash_entry) {
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
 			if (within(class->key, start, size))
 				zap_class(class);
 			else if (within(class->name, start, size))
@@ -3962,7 +3962,7 @@ void lockdep_free_key_range(void *start,
 void lockdep_reset_lock(struct lockdep_map *lock)
 {
 	struct lock_class *class;
-	struct list_head *head;
+	struct hlist_head *head;
 	unsigned long flags;
 	int i, j;
 	int locked;
@@ -3987,9 +3987,9 @@ void lockdep_reset_lock(struct lockdep_m
 	locked = graph_lock();
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
-		if (list_empty(head))
+		if (!head)
 			continue;
-		list_for_each_entry_rcu(class, head, hash_entry) {
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
 			int match = 0;
 
 			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
@@ -4027,10 +4027,10 @@ void lockdep_init(void)
 		return;
 
 	for (i = 0; i < CLASSHASH_SIZE; i++)
-		INIT_LIST_HEAD(classhash_table + i);
+		INIT_HLIST_HEAD(classhash_table + i);
 
 	for (i = 0; i < CHAINHASH_SIZE; i++)
-		INIT_LIST_HEAD(chainhash_table + i);
+		INIT_HLIST_HEAD(chainhash_table + i);
 
 	lockdep_initialized = 1;
 }
diff -puN include/linux/lockdep.h~a include/linux/lockdep.h
--- a/include/linux/lockdep.h~a
+++ a/include/linux/lockdep.h
@@ -66,7 +66,7 @@ struct lock_class {
 	/*
 	 * class-hash:
 	 */
-	struct list_head		hash_entry;
+	struct hlist_node		hash_entry;
 
 	/*
 	 * global list of all lock-classes:
@@ -199,7 +199,7 @@ struct lock_chain {
 	u8				irq_context;
 	u8				depth;
 	u16				base;
-	struct list_head		entry;
+	struct hlist_node		entry;
 	u64				chain_key;
 };
 
_

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-03 20:40     ` Andrew Morton
@ 2016-02-03 21:44       ` Andrew Morton
  2016-02-04 13:13         ` Andrey Ryabinin
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2016-02-03 21:44 UTC (permalink / raw)
  To: Ingo Molnar, aryabinin, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner

On Wed, 3 Feb 2016 12:40:09 -0800 Andrew Morton <akpm@linux-foundation.org> wrote:

> On Wed, 3 Feb 2016 08:51:11 -0800 Andrew Morton <akpm@linux-foundation.org> wrote:
> 
> > > Lockdep initialization must happen early on,
> > 
> > It should happen at compile time.
> 
> Mike asked "but why not just use hlist".  And indeed I think that fixes
> the problem because hlist_heads *are* initialized at compile time.  For
> them, NULL is the empty state.
> 
> This compiles.  Probably we're now doing some unnecessary initialization
> with this approach - lockdep_init() and lockdep_initialized can simply
> be zapped, I think?
> 
> Plus of course we save a buncha memory.

Builds, runs, works.


From: Andrew Morton <akpm@linux-foundation.org>
Subject: kernel/locking/lockdep.c: convert hash tables to hlists

Mike said:

: CONFIG_UBSAN_ALIGNMENT breaks x86-64 kernel with lockdep enabled, i.  e
: kernel with CONFIG_UBSAN_ALIGNMENT fails to load without even any error
: message.
: 
: The problem is that ubsan callbacks use spinlocks and might be called
: before lockdep is initialized.  Particularly this line in the
: reserve_ebda_region function causes problem:
: 
: lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
: 
: If i put lockdep_init() before reserve_ebda_region call in
: x86_64_start_reservations kernel loads well.

Fix this ordering issue permanently: change lockdep so that it uses hlists
for the hash tables.  Unlike a list_head, an hlist_head is in its
initialized state when it is all-zeroes, so lockdep is ready for operation
immediately upon boot - lockdep_init() need not have run.

The patch will also save some memory.

Probably lockdep_init() and lockdep_initialized can be done away with now.

Reported-by: Mike Krinkin <krinkin.m.u@gmail.com>
Suggested-by: Mike Krinkin <krinkin.m.u@gmail.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/lockdep.h  |    4 +--
 kernel/locking/lockdep.c |   42 ++++++++++++++++++-------------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff -puN kernel/locking/lockdep.c~kernel-locking-lockdepc-convert-hash-tables-to-hlists kernel/locking/lockdep.c
--- a/kernel/locking/lockdep.c~kernel-locking-lockdepc-convert-hash-tables-to-hlists
+++ a/kernel/locking/lockdep.c
@@ -292,7 +292,7 @@ LIST_HEAD(all_lock_classes);
 #define __classhashfn(key)	hash_long((unsigned long)key, CLASSHASH_BITS)
 #define classhashentry(key)	(classhash_table + __classhashfn((key)))
 
-static struct list_head classhash_table[CLASSHASH_SIZE];
+static struct hlist_head classhash_table[CLASSHASH_SIZE];
 
 /*
  * We put the lock dependency chains into a hash-table as well, to cache
@@ -303,7 +303,7 @@ static struct list_head classhash_table[
 #define __chainhashfn(chain)	hash_long(chain, CHAINHASH_BITS)
 #define chainhashentry(chain)	(chainhash_table + __chainhashfn((chain)))
 
-static struct list_head chainhash_table[CHAINHASH_SIZE];
+static struct hlist_head chainhash_table[CHAINHASH_SIZE];
 
 /*
  * The hash key of the lock dependency chains is a hash itself too:
@@ -666,7 +666,7 @@ static inline struct lock_class *
 look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 {
 	struct lockdep_subclass_key *key;
-	struct list_head *hash_head;
+	struct hlist_head *hash_head;
 	struct lock_class *class;
 
 #ifdef CONFIG_DEBUG_LOCKDEP
@@ -719,7 +719,7 @@ look_up_lock_class(struct lockdep_map *l
 	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
 		return NULL;
 
-	list_for_each_entry_rcu(class, hash_head, hash_entry) {
+	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key) {
 			/*
 			 * Huh! same key, different name? Did someone trample
@@ -742,7 +742,7 @@ static inline struct lock_class *
 register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 {
 	struct lockdep_subclass_key *key;
-	struct list_head *hash_head;
+	struct hlist_head *hash_head;
 	struct lock_class *class;
 
 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
@@ -774,7 +774,7 @@ register_lock_class(struct lockdep_map *
 	 * We have to do the hash-walk again, to avoid races
 	 * with another CPU:
 	 */
-	list_for_each_entry_rcu(class, hash_head, hash_entry) {
+	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key)
 			goto out_unlock_set;
 	}
@@ -805,7 +805,7 @@ register_lock_class(struct lockdep_map *
 	 * We use RCU's safe list-add method to make
 	 * parallel walking of the hash-list safe:
 	 */
-	list_add_tail_rcu(&class->hash_entry, hash_head);
+	hlist_add_head_rcu(&class->hash_entry, hash_head);
 	/*
 	 * Add it to the global list of classes:
 	 */
@@ -2017,7 +2017,7 @@ static inline int lookup_chain_cache(str
 				     u64 chain_key)
 {
 	struct lock_class *class = hlock_class(hlock);
-	struct list_head *hash_head = chainhashentry(chain_key);
+	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 	struct held_lock *hlock_curr;
 	int i, j;
@@ -2033,7 +2033,7 @@ static inline int lookup_chain_cache(str
 	 * We can walk it lock-free, because entries only get added
 	 * to the hash:
 	 */
-	list_for_each_entry_rcu(chain, hash_head, entry) {
+	hlist_for_each_entry_rcu(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 cache_hit:
 			debug_atomic_inc(chain_lookup_hits);
@@ -2057,7 +2057,7 @@ cache_hit:
 	/*
 	 * We have to walk the chain again locked - to avoid duplicates:
 	 */
-	list_for_each_entry(chain, hash_head, entry) {
+	hlist_for_each_entry(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 			graph_unlock();
 			goto cache_hit;
@@ -2091,7 +2091,7 @@ cache_hit:
 		}
 		chain_hlocks[chain->base + j] = class - lock_classes;
 	}
-	list_add_tail_rcu(&chain->entry, hash_head);
+	hlist_add_head_rcu(&chain->entry, hash_head);
 	debug_atomic_inc(chain_lookup_misses);
 	inc_chains();
 
@@ -3875,7 +3875,7 @@ void lockdep_reset(void)
 	nr_process_chains = 0;
 	debug_locks = 1;
 	for (i = 0; i < CHAINHASH_SIZE; i++)
-		INIT_LIST_HEAD(chainhash_table + i);
+		INIT_HLIST_HEAD(chainhash_table + i);
 	raw_local_irq_restore(flags);
 }
 
@@ -3894,7 +3894,7 @@ static void zap_class(struct lock_class
 	/*
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
-	list_del_rcu(&class->hash_entry);
+	hlist_del_rcu(&class->hash_entry);
 	list_del_rcu(&class->lock_entry);
 
 	RCU_INIT_POINTER(class->key, NULL);
@@ -3917,7 +3917,7 @@ static inline int within(const void *add
 void lockdep_free_key_range(void *start, unsigned long size)
 {
 	struct lock_class *class;
-	struct list_head *head;
+	struct hlist_head *head;
 	unsigned long flags;
 	int i;
 	int locked;
@@ -3930,9 +3930,9 @@ void lockdep_free_key_range(void *start,
 	 */
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
-		if (list_empty(head))
+		if (!head)
 			continue;
-		list_for_each_entry_rcu(class, head, hash_entry) {
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
 			if (within(class->key, start, size))
 				zap_class(class);
 			else if (within(class->name, start, size))
@@ -3962,7 +3962,7 @@ void lockdep_free_key_range(void *start,
 void lockdep_reset_lock(struct lockdep_map *lock)
 {
 	struct lock_class *class;
-	struct list_head *head;
+	struct hlist_head *head;
 	unsigned long flags;
 	int i, j;
 	int locked;
@@ -3987,9 +3987,9 @@ void lockdep_reset_lock(struct lockdep_m
 	locked = graph_lock();
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
-		if (list_empty(head))
+		if (!head)
 			continue;
-		list_for_each_entry_rcu(class, head, hash_entry) {
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
 			int match = 0;
 
 			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
@@ -4027,10 +4027,10 @@ void lockdep_init(void)
 		return;
 
 	for (i = 0; i < CLASSHASH_SIZE; i++)
-		INIT_LIST_HEAD(classhash_table + i);
+		INIT_HLIST_HEAD(classhash_table + i);
 
 	for (i = 0; i < CHAINHASH_SIZE; i++)
-		INIT_LIST_HEAD(chainhash_table + i);
+		INIT_HLIST_HEAD(chainhash_table + i);
 
 	lockdep_initialized = 1;
 }
diff -puN include/linux/lockdep.h~kernel-locking-lockdepc-convert-hash-tables-to-hlists include/linux/lockdep.h
--- a/include/linux/lockdep.h~kernel-locking-lockdepc-convert-hash-tables-to-hlists
+++ a/include/linux/lockdep.h
@@ -66,7 +66,7 @@ struct lock_class {
 	/*
 	 * class-hash:
 	 */
-	struct list_head		hash_entry;
+	struct hlist_node		hash_entry;
 
 	/*
 	 * global list of all lock-classes:
@@ -199,7 +199,7 @@ struct lock_chain {
 	u8				irq_context;
 	u8				depth;
 	u16				base;
-	struct list_head		entry;
+	struct hlist_node		entry;
 	u64				chain_key;
 };
 
_

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-03 21:44       ` Andrew Morton
@ 2016-02-04 13:13         ` Andrey Ryabinin
  2016-02-08  9:33           ` Michael Ellerman
  2016-02-09 11:00           ` Ingo Molnar
  0 siblings, 2 replies; 14+ messages in thread
From: Andrey Ryabinin @ 2016-02-04 13:13 UTC (permalink / raw)
  To: Andrew Morton, Ingo Molnar, krinkin.m.u, mingo, peterz,
	linux-kernel, Peter Zijlstra, Thomas Gleixner

On 02/04/2016 12:44 AM, Andrew Morton wrote:
> 
> Probably lockdep_init() and lockdep_initialized can be done away with now.
> 

Yup, it probably should be folded into your patch, or we could hold this off for 4.6.


From: Andrey Ryabinin <aryabinin@virtuozzo.com>                                                                                                                                                                     
Subject: kernel/lockdep: eliminate lockdep_init()

Lockdep is initialized at compile time now. Get rid of lockdep_init().

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 arch/c6x/kernel/setup.c                       |  2 -
 arch/microblaze/kernel/setup.c                |  2 -
 arch/powerpc/kernel/setup_32.c                |  2 -
 arch/powerpc/kernel/setup_64.c                |  3 --
 arch/s390/kernel/early.c                      |  1 -
 arch/sparc/kernel/head_64.S                   |  8 ----
 arch/x86/lguest/boot.c                        |  6 ---
 include/linux/lockdep.h                       |  2 -
 init/main.c                                   |  5 ---
 kernel/locking/lockdep.c                      | 59 ---------------------------
 tools/lib/lockdep/common.c                    |  5 ---
 tools/lib/lockdep/include/liblockdep/common.h |  1 -
 tools/lib/lockdep/preload.c                   |  2 -
 13 files changed, 98 deletions(-)

diff --git a/arch/c6x/kernel/setup.c b/arch/c6x/kernel/setup.c
index 72e17f7..786e36e 100644
--- a/arch/c6x/kernel/setup.c
+++ b/arch/c6x/kernel/setup.c
@@ -281,8 +281,6 @@ notrace void __init machine_init(unsigned long dt_ptr)
 	 */
 	set_ist(_vectors_start);
 
-	lockdep_init();
-
 	/*
 	 * dtb is passed in from bootloader.
 	 * fdt is linked in blob.
diff --git a/arch/microblaze/kernel/setup.c b/arch/microblaze/kernel/setup.c
index 89a2a93..f31ebb5 100644
--- a/arch/microblaze/kernel/setup.c
+++ b/arch/microblaze/kernel/setup.c
@@ -130,8 +130,6 @@ void __init machine_early_init(const char *cmdline, unsigned int ram,
 	memset(__bss_start, 0, __bss_stop-__bss_start);
 	memset(_ssbss, 0, _esbss-_ssbss);
 
-	lockdep_init();
-
 /* initialize device tree for usage in early_printk */
 	early_init_devtree(_fdt_start);
 
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index ad8c9db..d544fa3 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -114,8 +114,6 @@ extern unsigned int memset_nocache_branch; /* Insn to be replaced by NOP */
 
 notrace void __init machine_init(u64 dt_ptr)
 {
-	lockdep_init();
-
 	/* Enable early debugging if any specified (see udbg.h) */
 	udbg_early_init();
 
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 5c03a6a..f98be83 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -255,9 +255,6 @@ void __init early_setup(unsigned long dt_ptr)
 	setup_paca(&boot_paca);
 	fixup_boot_paca();
 
-	/* Initialize lockdep early or else spinlocks will blow */
-	lockdep_init();
-
 	/* -------- printk is now safe to use ------- */
 
 	/* Enable early debugging if any specified (see udbg.h) */
diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
index c55576b..a0684de 100644
--- a/arch/s390/kernel/early.c
+++ b/arch/s390/kernel/early.c
@@ -448,7 +448,6 @@ void __init startup_init(void)
 	rescue_initrd();
 	clear_bss_section();
 	init_kernel_storage_key();
-	lockdep_init();
 	lockdep_off();
 	setup_lowcore_early();
 	setup_facility_list();
diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
index f2d30ca..cd1f592 100644
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -696,14 +696,6 @@ tlb_fixup_done:
 	call	__bzero
 	 sub	%o1, %o0, %o1
 
-#ifdef CONFIG_LOCKDEP
-	/* We have this call this super early, as even prom_init can grab
-	 * spinlocks and thus call into the lockdep code.
-	 */
-	call	lockdep_init
-	 nop
-#endif
-
 	call	prom_init
 	 mov	%l7, %o0			! OpenPROM cif handler
 
diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
index 4ba229a..f56cc41 100644
--- a/arch/x86/lguest/boot.c
+++ b/arch/x86/lguest/boot.c
@@ -1520,12 +1520,6 @@ __init void lguest_init(void)
 	 */
 	reserve_top_address(lguest_data.reserve_mem);
 
-	/*
-	 * If we don't initialize the lock dependency checker now, it crashes
-	 * atomic_notifier_chain_register, then paravirt_disable_iospace.
-	 */
-	lockdep_init();
-
 	/* Hook in our special panic hypercall code. */
 	atomic_notifier_chain_register(&panic_notifier_list, &paniced);
 
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 4dca42f..d026b19 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -261,7 +261,6 @@ struct held_lock {
 /*
  * Initialization, self-test and debugging-output methods:
  */
-extern void lockdep_init(void);
 extern void lockdep_info(void);
 extern void lockdep_reset(void);
 extern void lockdep_reset_lock(struct lockdep_map *lock);
@@ -392,7 +391,6 @@ static inline void lockdep_on(void)
 # define lockdep_set_current_reclaim_state(g)	do { } while (0)
 # define lockdep_clear_current_reclaim_state()	do { } while (0)
 # define lockdep_trace_alloc(g)			do { } while (0)
-# define lockdep_init()				do { } while (0)
 # define lockdep_info()				do { } while (0)
 # define lockdep_init_map(lock, name, key, sub) \
 		do { (void)(name); (void)(key); } while (0)
diff --git a/init/main.c b/init/main.c
index 58c9e37..b3008bc 100644
--- a/init/main.c
+++ b/init/main.c
@@ -499,11 +499,6 @@ asmlinkage __visible void __init start_kernel(void)
 	char *command_line;
 	char *after_dashes;
 
-	/*
-	 * Need to run as early as possible, to initialize the
-	 * lockdep hash:
-	 */
-	lockdep_init();
 	set_task_stack_end_magic(&init_task);
 	smp_setup_processor_id();
 	debug_objects_early_init();
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 92e4dc6..c873a51 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -123,8 +123,6 @@ static inline int debug_locks_off_graph_unlock(void)
 	return ret;
 }
 
-static int lockdep_initialized;
-
 unsigned long nr_list_entries;
 static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
 
@@ -434,19 +432,6 @@ unsigned int max_lockdep_depth;
 
 #ifdef CONFIG_DEBUG_LOCKDEP
 /*
- * We cannot printk in early bootup code. Not even early_printk()
- * might work. So we mark any initialization errors and printk
- * about it later on, in lockdep_info().
- */
-static int lockdep_init_error;
-static const char *lock_init_error;
-static unsigned long lockdep_init_trace_data[20];
-static struct stack_trace lockdep_init_trace = {
-	.max_entries = ARRAY_SIZE(lockdep_init_trace_data),
-	.entries = lockdep_init_trace_data,
-};
-
-/*
  * Various lockdep statistics:
  */
 DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);
@@ -669,20 +654,6 @@ look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 	struct hlist_head *hash_head;
 	struct lock_class *class;
 
-#ifdef CONFIG_DEBUG_LOCKDEP
-	/*
-	 * If the architecture calls into lockdep before initializing
-	 * the hashes then we'll warn about it later. (we cannot printk
-	 * right now)
-	 */
-	if (unlikely(!lockdep_initialized)) {
-		lockdep_init();
-		lockdep_init_error = 1;
-		lock_init_error = lock->name;
-		save_stack_trace(&lockdep_init_trace);
-	}
-#endif
-
 	if (unlikely(subclass >= MAX_LOCKDEP_SUBCLASSES)) {
 		debug_locks_off();
 		printk(KERN_ERR
@@ -4013,28 +3984,6 @@ out_restore:
 	raw_local_irq_restore(flags);
 }
 
-void lockdep_init(void)
-{
-	int i;
-
-	/*
-	 * Some architectures have their own start_kernel()
-	 * code which calls lockdep_init(), while we also
-	 * call lockdep_init() from the start_kernel() itself,
-	 * and we want to initialize the hashes only once:
-	 */
-	if (lockdep_initialized)
-		return;
-
-	for (i = 0; i < CLASSHASH_SIZE; i++)
-		INIT_HLIST_HEAD(classhash_table + i);
-
-	for (i = 0; i < CHAINHASH_SIZE; i++)
-		INIT_HLIST_HEAD(chainhash_table + i);
-
-	lockdep_initialized = 1;
-}
-
 void __init lockdep_info(void)
 {
 	printk("Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar\n");
@@ -4061,14 +4010,6 @@ void __init lockdep_info(void)
 
 	printk(" per task-struct memory footprint: %lu bytes\n",
 		sizeof(struct held_lock) * MAX_LOCK_DEPTH);
-
-#ifdef CONFIG_DEBUG_LOCKDEP
-	if (lockdep_init_error) {
-		printk("WARNING: lockdep init error: lock '%s' was acquired before lockdep_init().\n", lock_init_error);
-		printk("Call stack leading to lockdep invocation was:\n");
-		print_stack_trace(&lockdep_init_trace, 0);
-	}
-#endif
 }
 
 static void
diff --git a/tools/lib/lockdep/common.c b/tools/lib/lockdep/common.c
index 9be6633..d1c89cc 100644
--- a/tools/lib/lockdep/common.c
+++ b/tools/lib/lockdep/common.c
@@ -11,11 +11,6 @@ static __thread struct task_struct current_obj;
 bool debug_locks = true;
 bool debug_locks_silent;
 
-__attribute__((constructor)) static void liblockdep_init(void)
-{
-	lockdep_init();
-}
-
 __attribute__((destructor)) static void liblockdep_exit(void)
 {
 	debug_check_no_locks_held();
diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index a60c14b..6e66277 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -44,7 +44,6 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 void lock_release(struct lockdep_map *lock, int nested,
 			unsigned long ip);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
-extern void lockdep_init(void);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
 	{ .name = (_name), .key = (void *)(_key), }
diff --git a/tools/lib/lockdep/preload.c b/tools/lib/lockdep/preload.c
index 21cdf86..5284484 100644
--- a/tools/lib/lockdep/preload.c
+++ b/tools/lib/lockdep/preload.c
@@ -439,7 +439,5 @@ __attribute__((constructor)) static void init_preload(void)
 	ll_pthread_rwlock_unlock = dlsym(RTLD_NEXT, "pthread_rwlock_unlock");
 #endif
 
-	lockdep_init();
-
 	__init_state = done;
 }
-- 
2.4.10

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-04 13:13         ` Andrey Ryabinin
@ 2016-02-08  9:33           ` Michael Ellerman
  2016-02-09 11:00           ` Ingo Molnar
  1 sibling, 0 replies; 14+ messages in thread
From: Michael Ellerman @ 2016-02-08  9:33 UTC (permalink / raw)
  To: Andrey Ryabinin, Andrew Morton, Ingo Molnar, krinkin.m.u, mingo,
	peterz, linux-kernel, Peter Zijlstra, Thomas Gleixner

On Thu, 2016-02-04 at 16:13 +0300, Andrey Ryabinin wrote:
> On 02/04/2016 12:44 AM, Andrew Morton wrote:
> > Probably lockdep_init() and lockdep_initialized can be done away with now.
>
> Yup, it probably should be folded into your patch, or we could hold this off for 4.6.
>
> From: Andrey Ryabinin <aryabinin@virtuozzo.com>
>
> Subject: kernel/lockdep: eliminate lockdep_init()
>
> Lockdep is initialized at compile time now. Get rid of lockdep_init().
>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>

> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
> index ad8c9db..d544fa3 100644
> --- a/arch/powerpc/kernel/setup_32.c
> +++ b/arch/powerpc/kernel/setup_32.c
> @@ -114,8 +114,6 @@ extern unsigned int memset_nocache_branch; /* Insn to be replaced by NOP */
>
>  notrace void __init machine_init(u64 dt_ptr)
>  {
> -	lockdep_init();
> -
>  	/* Enable early debugging if any specified (see udbg.h) */
>  	udbg_early_init();
>
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 5c03a6a..f98be83 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -255,9 +255,6 @@ void __init early_setup(unsigned long dt_ptr)
>  	setup_paca(&boot_paca);
>  	fixup_boot_paca();
>
> -	/* Initialize lockdep early or else spinlocks will blow */
> -	lockdep_init();
> -
>  	/* -------- printk is now safe to use ------- */
>
>  	/* Enable early debugging if any specified (see udbg.h) */

Yes please. That has been a royal pain over the years and is still fragile;
I'm very happy to see the back of it.

Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)

cheers

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-04 13:13         ` Andrey Ryabinin
  2016-02-08  9:33           ` Michael Ellerman
@ 2016-02-09 11:00           ` Ingo Molnar
  1 sibling, 0 replies; 14+ messages in thread
From: Ingo Molnar @ 2016-02-09 11:00 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner


* Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:

> On 02/04/2016 12:44 AM, Andrew Morton wrote:
> > 
> > Probably lockdep_init() and lockdep_initialized can be done away with now.
> > 
> 
> Yup, it probably should be folded into your patch, or we could hold this off for 4.6.
> 
> 
> From: Andrey Ryabinin <aryabinin@virtuozzo.com>                                                                                                                                                                     
> Subject: kernel/lockdep: eliminate lockdep_init()
> 
> Lockdep is initialized at compile time now. Get rid of lockdep_init().
> 
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> ---
>  arch/c6x/kernel/setup.c                       |  2 -
>  arch/microblaze/kernel/setup.c                |  2 -
>  arch/powerpc/kernel/setup_32.c                |  2 -
>  arch/powerpc/kernel/setup_64.c                |  3 --
>  arch/s390/kernel/early.c                      |  1 -
>  arch/sparc/kernel/head_64.S                   |  8 ----
>  arch/x86/lguest/boot.c                        |  6 ---
>  include/linux/lockdep.h                       |  2 -
>  init/main.c                                   |  5 ---
>  kernel/locking/lockdep.c                      | 59 ---------------------------
>  tools/lib/lockdep/common.c                    |  5 ---
>  tools/lib/lockdep/include/liblockdep/common.h |  1 -
>  tools/lib/lockdep/preload.c                   |  2 -
>  13 files changed, 98 deletions(-)

Very nice! Should have done this from day one on ...

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-03 16:51   ` Andrew Morton
  2016-02-03 20:40     ` Andrew Morton
@ 2016-02-09 11:12     ` Ingo Molnar
  2016-02-09 20:17       ` Andrew Morton
  1 sibling, 1 reply; 14+ messages in thread
From: Ingo Molnar @ 2016-02-09 11:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: aryabinin, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner


* Andrew Morton <akpm@linux-foundation.org> wrote:

> > and it should happen in a well defined place, not be opportunistic (and 
> > relatively random) like this, making it dependent on config options and 
> > calling contexts.
> 
> That's an unusable assertion, sorry.
> 
> Initializing lockdep in the above manner guarantees that it is initialized 
> before it is used.  It is *much* more reliable than "try to initialize it before 
> some piece of code which hasn't even been written yet tries to take a lock".

So I didn't like that patch because it called into lockdep in a messy way, without 
having any real knowledge about whether it's safe to do. Should lockdep ever grow 
more complex initialization, such a solution could break in subtle ways. I prefer 
clearly broken code with static dependencies over context-dependent broken code 
with dynamic call ordering/dependencies.

Fortunately we don't have to apply the patch:

> The conceptual problem is that if some piece of code does spin_lock_init() or 
> DEFINE_SPINLOCK(), that lock isn't necessarily initialized yet.

The conceptual problem is that the data structures are not build time initialized 
- but the hlist conversion patch solves that problem nicely!

So I'm a happy camper.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-09 11:12     ` Ingo Molnar
@ 2016-02-09 20:17       ` Andrew Morton
  2016-02-29  9:11         ` Ingo Molnar
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2016-02-09 20:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: aryabinin, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner

On Tue, 9 Feb 2016 12:12:29 +0100 Ingo Molnar <mingo@kernel.org> wrote:

> > The conceptual problem is that if some piece of code does spin_lock_init() or 
> > DEFINE_SPINLOCK(), that lock isn't necessarily initialized yet.
> 
> The conceptual problem is that the data structures are not build time initialized 
> - but the hlist conversion patch solves that problem nicely!
> 
> So I'm a happy camper.

OK, so the below has been in -next for nearly a week, no issues.  We
should get this into 4.5 to fix the CONFIG_UBSAN_ALIGNMENT issue.

Wanna ack this?  Once it's hit mainline I'll send
kernel-lockdep-eliminate-lockdep_init.patch in your direction for 4.6.


From: Andrew Morton <akpm@linux-foundation.org>
Subject: kernel/locking/lockdep.c: convert hash tables to hlists

Mike said:

: CONFIG_UBSAN_ALIGNMENT breaks x86-64 kernel with lockdep enabled, i.  e
: kernel with CONFIG_UBSAN_ALIGNMENT fails to load without even any error
: message.
: 
: The problem is that ubsan callbacks use spinlocks and might be called
: before lockdep is initialized.  Particularly this line in the
: reserve_ebda_region function causes problem:
: 
: lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
: 
: If i put lockdep_init() before reserve_ebda_region call in
: x86_64_start_reservations kernel loads well.

Fix this ordering issue permanently: change lockdep so that it uses hlists
for the hash tables.  Unlike a list_head, an hlist_head is in its
initialized state when it is all-zeroes, so lockdep is ready for operation
immediately upon boot - lockdep_init() need not have run.

The patch will also save some memory.

lockdep_init() and lockdep_initialized can be done away with now - a 4.6
patch has been prepared to do this.

Reported-by: Mike Krinkin <krinkin.m.u@gmail.com>
Suggested-by: Mike Krinkin <krinkin.m.u@gmail.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/lockdep.h  |    4 +--
 kernel/locking/lockdep.c |   42 ++++++++++++++++---------------------
 2 files changed, 21 insertions(+), 25 deletions(-)

diff -puN include/linux/lockdep.h~kernel-locking-lockdepc-convert-hash-tables-to-hlists include/linux/lockdep.h
--- a/include/linux/lockdep.h~kernel-locking-lockdepc-convert-hash-tables-to-hlists
+++ a/include/linux/lockdep.h
@@ -66,7 +66,7 @@ struct lock_class {
 	/*
 	 * class-hash:
 	 */
-	struct list_head		hash_entry;
+	struct hlist_node		hash_entry;
 
 	/*
 	 * global list of all lock-classes:
@@ -199,7 +199,7 @@ struct lock_chain {
 	u8				irq_context;
 	u8				depth;
 	u16				base;
-	struct list_head		entry;
+	struct hlist_node		entry;
 	u64				chain_key;
 };
 
diff -puN kernel/locking/lockdep.c~kernel-locking-lockdepc-convert-hash-tables-to-hlists kernel/locking/lockdep.c
--- a/kernel/locking/lockdep.c~kernel-locking-lockdepc-convert-hash-tables-to-hlists
+++ a/kernel/locking/lockdep.c
@@ -292,7 +292,7 @@ LIST_HEAD(all_lock_classes);
 #define __classhashfn(key)	hash_long((unsigned long)key, CLASSHASH_BITS)
 #define classhashentry(key)	(classhash_table + __classhashfn((key)))
 
-static struct list_head classhash_table[CLASSHASH_SIZE];
+static struct hlist_head classhash_table[CLASSHASH_SIZE];
 
 /*
  * We put the lock dependency chains into a hash-table as well, to cache
@@ -303,7 +303,7 @@ static struct list_head classhash_table[
 #define __chainhashfn(chain)	hash_long(chain, CHAINHASH_BITS)
 #define chainhashentry(chain)	(chainhash_table + __chainhashfn((chain)))
 
-static struct list_head chainhash_table[CHAINHASH_SIZE];
+static struct hlist_head chainhash_table[CHAINHASH_SIZE];
 
 /*
  * The hash key of the lock dependency chains is a hash itself too:
@@ -666,7 +666,7 @@ static inline struct lock_class *
 look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 {
 	struct lockdep_subclass_key *key;
-	struct list_head *hash_head;
+	struct hlist_head *hash_head;
 	struct lock_class *class;
 
 #ifdef CONFIG_DEBUG_LOCKDEP
@@ -719,7 +719,7 @@ look_up_lock_class(struct lockdep_map *l
 	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
 		return NULL;
 
-	list_for_each_entry_rcu(class, hash_head, hash_entry) {
+	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key) {
 			/*
 			 * Huh! same key, different name? Did someone trample
@@ -742,7 +742,7 @@ static inline struct lock_class *
 register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 {
 	struct lockdep_subclass_key *key;
-	struct list_head *hash_head;
+	struct hlist_head *hash_head;
 	struct lock_class *class;
 
 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
@@ -774,7 +774,7 @@ register_lock_class(struct lockdep_map *
 	 * We have to do the hash-walk again, to avoid races
 	 * with another CPU:
 	 */
-	list_for_each_entry_rcu(class, hash_head, hash_entry) {
+	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key)
 			goto out_unlock_set;
 	}
@@ -805,7 +805,7 @@ register_lock_class(struct lockdep_map *
 	 * We use RCU's safe list-add method to make
 	 * parallel walking of the hash-list safe:
 	 */
-	list_add_tail_rcu(&class->hash_entry, hash_head);
+	hlist_add_head_rcu(&class->hash_entry, hash_head);
 	/*
 	 * Add it to the global list of classes:
 	 */
@@ -2017,7 +2017,7 @@ static inline int lookup_chain_cache(str
 				     u64 chain_key)
 {
 	struct lock_class *class = hlock_class(hlock);
-	struct list_head *hash_head = chainhashentry(chain_key);
+	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 	struct held_lock *hlock_curr;
 	int i, j;
@@ -2033,7 +2033,7 @@ static inline int lookup_chain_cache(str
 	 * We can walk it lock-free, because entries only get added
 	 * to the hash:
 	 */
-	list_for_each_entry_rcu(chain, hash_head, entry) {
+	hlist_for_each_entry_rcu(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 cache_hit:
 			debug_atomic_inc(chain_lookup_hits);
@@ -2057,7 +2057,7 @@ cache_hit:
 	/*
 	 * We have to walk the chain again locked - to avoid duplicates:
 	 */
-	list_for_each_entry(chain, hash_head, entry) {
+	hlist_for_each_entry(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 			graph_unlock();
 			goto cache_hit;
@@ -2091,7 +2091,7 @@ cache_hit:
 		}
 		chain_hlocks[chain->base + j] = class - lock_classes;
 	}
-	list_add_tail_rcu(&chain->entry, hash_head);
+	hlist_add_head_rcu(&chain->entry, hash_head);
 	debug_atomic_inc(chain_lookup_misses);
 	inc_chains();
 
@@ -3875,7 +3875,7 @@ void lockdep_reset(void)
 	nr_process_chains = 0;
 	debug_locks = 1;
 	for (i = 0; i < CHAINHASH_SIZE; i++)
-		INIT_LIST_HEAD(chainhash_table + i);
+		INIT_HLIST_HEAD(chainhash_table + i);
 	raw_local_irq_restore(flags);
 }
 
@@ -3894,7 +3894,7 @@ static void zap_class(struct lock_class
 	/*
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
-	list_del_rcu(&class->hash_entry);
+	hlist_del_rcu(&class->hash_entry);
 	list_del_rcu(&class->lock_entry);
 
 	RCU_INIT_POINTER(class->key, NULL);
@@ -3917,7 +3917,7 @@ static inline int within(const void *add
 void lockdep_free_key_range(void *start, unsigned long size)
 {
 	struct lock_class *class;
-	struct list_head *head;
+	struct hlist_head *head;
 	unsigned long flags;
 	int i;
 	int locked;
@@ -3930,9 +3930,7 @@ void lockdep_free_key_range(void *start,
 	 */
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
-		if (list_empty(head))
-			continue;
-		list_for_each_entry_rcu(class, head, hash_entry) {
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
 			if (within(class->key, start, size))
 				zap_class(class);
 			else if (within(class->name, start, size))
@@ -3962,7 +3960,7 @@ void lockdep_free_key_range(void *start,
 void lockdep_reset_lock(struct lockdep_map *lock)
 {
 	struct lock_class *class;
-	struct list_head *head;
+	struct hlist_head *head;
 	unsigned long flags;
 	int i, j;
 	int locked;
@@ -3987,9 +3985,7 @@ void lockdep_reset_lock(struct lockdep_m
 	locked = graph_lock();
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
-		if (list_empty(head))
-			continue;
-		list_for_each_entry_rcu(class, head, hash_entry) {
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
 			int match = 0;
 
 			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
@@ -4027,10 +4023,10 @@ void lockdep_init(void)
 		return;
 
 	for (i = 0; i < CLASSHASH_SIZE; i++)
-		INIT_LIST_HEAD(classhash_table + i);
+		INIT_HLIST_HEAD(classhash_table + i);
 
 	for (i = 0; i < CHAINHASH_SIZE; i++)
-		INIT_LIST_HEAD(chainhash_table + i);
+		INIT_HLIST_HEAD(chainhash_table + i);
 
 	lockdep_initialized = 1;
 }
_

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-09 20:17       ` Andrew Morton
@ 2016-02-29  9:11         ` Ingo Molnar
  2016-02-29  9:25           ` Ingo Molnar
  0 siblings, 1 reply; 14+ messages in thread
From: Ingo Molnar @ 2016-02-29  9:11 UTC (permalink / raw)
  To: Andrew Morton, Sasha Levin
  Cc: aryabinin, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner


* Andrew Morton <akpm@linux-foundation.org> wrote:

> On Tue, 9 Feb 2016 12:12:29 +0100 Ingo Molnar <mingo@kernel.org> wrote:
> 
> > > The conceptual problem is that if some piece of code does spin_lock_init() or 
> > > DEFINE_SPINLOCK(), that lock isn't necessarily initialized yet.
> > 
> > The conceptual problem is that the data structures are not build time initialized 
> > - but the hlist conversion patch solves that problem nicely!
> > 
> > So I'm a happy camper.
> 
> OK, so the below has been in -next for nearly a week, no issues.  We
> should get this into 4.5 to fix the CONFIG_UBSAN_ALIGNMENT issue.

So I think this patch broke liblockdep:

triton:~/tip/tools/lib/lockdep> 

In file included from lockdep.c:2:0:
../../../kernel/locking/lockdep.c: In function ‘look_up_lock_class’:
../../../kernel/locking/lockdep.c:722:2: warning: implicit declaration of function ‘hlist_for_each_entry_rcu’ [-Wimplicit-function-declaration]
  hlist_for_each_entry_rcu(class, hash_head, hash_entry) {

...

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-29  9:11         ` Ingo Molnar
@ 2016-02-29  9:25           ` Ingo Molnar
  2016-02-29  9:30             ` Andrey Ryabinin
  0 siblings, 1 reply; 14+ messages in thread
From: Ingo Molnar @ 2016-02-29  9:25 UTC (permalink / raw)
  To: Andrew Morton, Sasha Levin
  Cc: aryabinin, krinkin.m.u, mingo, peterz, linux-kernel,
	Peter Zijlstra, Thomas Gleixner


* Ingo Molnar <mingo@kernel.org> wrote:

> 
> * Andrew Morton <akpm@linux-foundation.org> wrote:
> 
> > On Tue, 9 Feb 2016 12:12:29 +0100 Ingo Molnar <mingo@kernel.org> wrote:
> > 
> > > > The conceptual problem is that if some piece of code does spin_lock_init() or 
> > > > DEFINE_SPINLOCK(), that lock isn't necessarily initialized yet.
> > > 
> > > The conceptual problem is that the data structures are not build time initialized 
> > > - but the hlist conversion patch solves that problem nicely!
> > > 
> > > So I'm a happy camper.
> > 
> > OK, so the below has been in -next for nearly a week, no issues.  We
> > should get this into 4.5 to fix the CONFIG_UBSAN_ALIGNMENT issue.
> 
> So I think this patch broke liblockdep:
> 
> triton:~/tip/tools/lib/lockdep> 
> 
> In file included from lockdep.c:2:0:
> ../../../kernel/locking/lockdep.c: In function ‘look_up_lock_class’:
> ../../../kernel/locking/lockdep.c:722:2: warning: implicit declaration of function ‘hlist_for_each_entry_rcu’ [-Wimplicit-function-declaration]
>   hlist_for_each_entry_rcu(class, hash_head, hash_entry) {

The patch below fixes it.

Thanks,

	Ingo

 tools/lib/lockdep/lockdep.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/lib/lockdep/lockdep.c b/tools/lib/lockdep/lockdep.c
index f42b7e9aa48f..a0a2e3a266af 100644
--- a/tools/lib/lockdep/lockdep.c
+++ b/tools/lib/lockdep/lockdep.c
@@ -1,2 +1,8 @@
 #include <linux/lockdep.h>
+
+/* Trivial API wrappers, we don't (yet) have RCU in user-space: */
+#define hlist_for_each_entry_rcu	hlist_for_each_entry
+#define hlist_add_head_rcu		hlist_add_head
+#define hlist_del_rcu			hlist_del
+
 #include "../../../kernel/locking/lockdep.c"

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: + kernel-locking-lockdepc-make-lockdep-initialize-itself-on-demand.patch added to -mm tree
  2016-02-29  9:25           ` Ingo Molnar
@ 2016-02-29  9:30             ` Andrey Ryabinin
  0 siblings, 0 replies; 14+ messages in thread
From: Andrey Ryabinin @ 2016-02-29  9:30 UTC (permalink / raw)
  To: Ingo Molnar, Andrew Morton, Sasha Levin
  Cc: krinkin.m.u, mingo, peterz, linux-kernel, Peter Zijlstra,
	Thomas Gleixner



On 02/29/2016 12:25 PM, Ingo Molnar wrote:
> 
> * Ingo Molnar <mingo@kernel.org> wrote:
> 
>>
>> * Andrew Morton <akpm@linux-foundation.org> wrote:
>>
>>> On Tue, 9 Feb 2016 12:12:29 +0100 Ingo Molnar <mingo@kernel.org> wrote:
>>>
>>>>> The conceptual problem is that if some piece of code does spin_lock_init() or 
>>>>> DEFINE_SPINLOCK(), that lock isn't necessarily initialized yet.
>>>>
>>>> The conceptual problem is that the data structures are not build time initialized 
>>>> - but the hlist conversion patch solves that problem nicely!
>>>>
>>>> So I'm a happy camper.
>>>
>>> OK, so the below has been in -next for nearly a week, no issues.  We
>>> should get this into 4.5 to fix the CONFIG_UBSAN_ALIGNMENT issue.
>>
>> So I think this patch broke liblockdep:
>>
>> triton:~/tip/tools/lib/lockdep> 
>>
>> In file included from lockdep.c:2:0:
>> ../../../kernel/locking/lockdep.c: In function ‘look_up_lock_class’:
>> ../../../kernel/locking/lockdep.c:722:2: warning: implicit declaration of function ‘hlist_for_each_entry_rcu’ [-Wimplicit-function-declaration]
>>   hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
> 
> The patch below fixes it.
> 
> Thanks,
> 
> 	Ingo
> 
>  tools/lib/lockdep/lockdep.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/tools/lib/lockdep/lockdep.c b/tools/lib/lockdep/lockdep.c
> index f42b7e9aa48f..a0a2e3a266af 100644
> --- a/tools/lib/lockdep/lockdep.c
> +++ b/tools/lib/lockdep/lockdep.c
> @@ -1,2 +1,8 @@
>  #include <linux/lockdep.h>
> +
> +/* Trivial API wrappers, we don't (yet) have RCU in user-space: */
> +#define hlist_for_each_entry_rcu	hlist_for_each_entry
> +#define hlist_add_head_rcu		hlist_add_head
> +#define hlist_del_rcu			hlist_del
> +

I think this belongs in tools/lib/lockdep/uinclude/linux/kernel.h
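
i.e. something along these lines (sketch only - the wrappers are the same
ones as in your patch, just carried in the liblockdep stub header instead
of lockdep.c):

	/* tools/lib/lockdep/uinclude/linux/kernel.h */

	/* Trivial API wrappers, we don't (yet) have RCU in user-space: */
	#define hlist_for_each_entry_rcu	hlist_for_each_entry
	#define hlist_add_head_rcu		hlist_add_head
	#define hlist_del_rcu			hlist_del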

^ permalink raw reply	[flat|nested] 14+ messages in thread
