* [PATCH v3] net/irda: fix lockdep annotation
@ 2017-01-16 21:10 Dmitry Vyukov
  2017-01-17 20:19 ` David Miller
  0 siblings, 1 reply; 8+ messages in thread
From: Dmitry Vyukov @ 2017-01-16 21:10 UTC (permalink / raw)
  To: davej, samuel, davem; +Cc: glider, andreyknvl, Dmitry Vyukov, netdev

The current annotation uses a global variable as a recursion counter.
The variable is neither atomic nor protected by a mutex, yet it is
mutated by multiple threads. This episodically causes lockdep bug reports:

BUG: looking up invalid subclass: 4294967295
...
_raw_spin_lock_irqsave_nested+0x120/0x180
hashbin_delete+0x4fe/0x750
__irias_delete_object+0xab/0x170
irias_delete_object+0x5f/0xc0
ircomm_tty_detach_cable+0x1d5/0x3f0
...

Make the hashbin_lock_depth variable atomic to prevent bug reports.

As-is, this causes an "unused variable 'depth'" warning when LOCKDEP
is disabled. So also change the raw_spin_lock_irqsave_nested() macro
to not trigger the warning without LOCKDEP, similar to what
raw_spin_lock_nested() already does.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Samuel Ortiz <samuel@sortiz.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Fixes: c7630a4b932af ("[IrDA]: irda lockdep annotation")

---

Changes since v1:
 - Added raw_spin_lock_irqsave_nested() change
   as 0-DAY bot reported compiler warning without LOCKDEP.

Changes since v2:
 - Use ((void)(subclass), (lock)) instead of (void)subclass
   to prevent the unused variable warning. With plain (void)subclass,
   0-DAY complained: "error: void value not ignored as it ought to be".
   My compiler does not produce a warning with W=1 either way,
   but the comma trick is what raw_spin_lock_nested() already uses.
---
 include/linux/spinlock.h |  2 +-
 net/irda/irqueue.c       | 12 +++++++-----
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 47dd0cebd204..beeb5a39bd8b 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -218,7 +218,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 #define raw_spin_lock_irqsave_nested(lock, flags, subclass)		\
 	do {								\
 		typecheck(unsigned long, flags);			\
-		flags = _raw_spin_lock_irqsave(lock);			\
+		flags = _raw_spin_lock_irqsave(((void)(subclass), (lock))); \
 	} while (0)
 #endif
 
diff --git a/net/irda/irqueue.c b/net/irda/irqueue.c
index acbe61c7e683..b9fd74e6ca99 100644
--- a/net/irda/irqueue.c
+++ b/net/irda/irqueue.c
@@ -384,21 +384,23 @@ EXPORT_SYMBOL(hashbin_new);
  *    just supply kfree, which should take care of the job.
  */
 #ifdef CONFIG_LOCKDEP
-static int hashbin_lock_depth = 0;
+static atomic_t hashbin_lock_depth = ATOMIC_INIT(0);
 #endif
 int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 {
 	irda_queue_t* queue;
 	unsigned long flags = 0;
-	int i;
+	int i, depth = 0;
 
 	IRDA_ASSERT(hashbin != NULL, return -1;);
 	IRDA_ASSERT(hashbin->magic == HB_MAGIC, return -1;);
 
 	/* Synchronize */
 	if ( hashbin->hb_type & HB_LOCK ) {
-		spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags,
-					 hashbin_lock_depth++);
+#ifdef CONFIG_LOCKDEP
+		depth = atomic_inc_return(&hashbin_lock_depth) - 1;
+#endif
+		spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags, depth);
 	}
 
 	/*
@@ -423,7 +425,7 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 	if ( hashbin->hb_type & HB_LOCK) {
 		spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
 #ifdef CONFIG_LOCKDEP
-		hashbin_lock_depth--;
+		atomic_dec(&hashbin_lock_depth);
 #endif
 	}
 
-- 
2.11.0.483.g087da7b7c-goog


* Re: [PATCH v3] net/irda: fix lockdep annotation
  2017-01-16 21:10 [PATCH v3] net/irda: fix lockdep annotation Dmitry Vyukov
@ 2017-01-17 20:19 ` David Miller
  2017-01-19 10:05   ` Dmitry Vyukov
  0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2017-01-17 20:19 UTC (permalink / raw)
  To: dvyukov; +Cc: davej, samuel, glider, andreyknvl, netdev

From: Dmitry Vyukov <dvyukov@google.com>
Date: Mon, 16 Jan 2017 22:10:52 +0100

> The current annotation uses a global variable as recursion counter.
> The variable is not atomic nor protected with a mutex, but mutated
> by multiple threads. This causes lockdep bug reports episodically:
> 
> BUG: looking up invalid subclass: 4294967295
> ...
> _raw_spin_lock_irqsave_nested+0x120/0x180
> hashbin_delete+0x4fe/0x750
> __irias_delete_object+0xab/0x170
> irias_delete_object+0x5f/0xc0
> ircomm_tty_detach_cable+0x1d5/0x3f0
> ...
> 
> Make the hashbin_lock_depth variable atomic to prevent bug reports.
> 
> As is this causes "unused variable 'depth'" warning without LOCKDEP.
> So also change raw_spin_lock_irqsave_nested() macro to not cause
> the warning without LOCKDEP. Similar to what raw_spin_lock_nested()
> already does.
> 
> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
> Cc: Dave Jones <davej@redhat.com>
> Cc: Samuel Ortiz <samuel@sortiz.org>
> Cc: David S. Miller <davem@davemloft.net>
> Cc: netdev@vger.kernel.org
> Fixes: c7630a4b932af ("[IrDA]: irda lockdep annotation")

I took a closer look at this, and really this whole lockdep thing is
nothing more than a hack.

The basic problem is that the free_func() passed into hashbin_delete()
can trigger hashbin operations on other hashtables.

This function is destroying a hash table completely, and is doing so
in a context where no new hash table entries can be inserted.  The
last thing we do is kfree() the hashtable so this must be true.

Therefore, it is much better to just kill off all of this lockdep
stuff, and recode the teardown loop into something like:

	if (hashbin->hb_type & HB_LOCK)
		spin_lock_irqsave(&hashbin->hb_lock, flags);
	for (i = 0; i < HASHBIN_SIZE; i++) {
		while (1) {
			queue = dequeue_first((irda_queue_t **) &hashbin->hb_queue[i]);

			if (!queue)
				break;
			if (free_func) {
				if (hashbin->hb_type & HB_LOCK)
					spin_unlock_irqrestore(&hashbin->hb_lock, flags);
				free_func(queue);
				if (hashbin->hb_type & HB_LOCK)
					spin_lock_irqsave(&hashbin->hb_lock, flags);
			}
			
		}
	}
	hashbin->hb_current = NULL;
	hashbin->magic = ~HB_MAGIC;
	if (hashbin->hb_type & HB_LOCK)
		spin_unlock_irqrestore(&hashbin->hb_lock, flags);

At which point the recursive locking becomes impossible.


* Re: [PATCH v3] net/irda: fix lockdep annotation
  2017-01-17 20:19 ` David Miller
@ 2017-01-19 10:05   ` Dmitry Vyukov
  2017-01-19 10:33     ` Dmitry Vyukov
  2017-01-19 16:27     ` David Miller
  0 siblings, 2 replies; 8+ messages in thread
From: Dmitry Vyukov @ 2017-01-19 10:05 UTC (permalink / raw)
  To: David Miller
  Cc: Dave Jones, Samuel Ortiz, Alexander Potapenko, andreyknvl, netdev

On Tue, Jan 17, 2017 at 9:19 PM, David Miller <davem@davemloft.net> wrote:
> From: Dmitry Vyukov <dvyukov@google.com>
> Date: Mon, 16 Jan 2017 22:10:52 +0100
>
>> The current annotation uses a global variable as recursion counter.
>> The variable is not atomic nor protected with a mutex, but mutated
>> by multiple threads. This causes lockdep bug reports episodically:
>>
>> BUG: looking up invalid subclass: 4294967295
>> ...
>> _raw_spin_lock_irqsave_nested+0x120/0x180
>> hashbin_delete+0x4fe/0x750
>> __irias_delete_object+0xab/0x170
>> irias_delete_object+0x5f/0xc0
>> ircomm_tty_detach_cable+0x1d5/0x3f0
>> ...
>>
>> Make the hashbin_lock_depth variable atomic to prevent bug reports.
>>
>> As is this causes "unused variable 'depth'" warning without LOCKDEP.
>> So also change raw_spin_lock_irqsave_nested() macro to not cause
>> the warning without LOCKDEP. Similar to what raw_spin_lock_nested()
>> already does.
>>
>> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
>> Cc: Dave Jones <davej@redhat.com>
>> Cc: Samuel Ortiz <samuel@sortiz.org>
>> Cc: David S. Miller <davem@davemloft.net>
>> Cc: netdev@vger.kernel.org
>> Fixes: c7630a4b932af ("[IrDA]: irda lockdep annotation")
>
> I took a closer look at this, and really this whole lockdep thing is
> more than a hack.
>
> The basic problem is that the free_func() passed into hashbin_delete()
> can trigger hashbin operations on other hashtables.
>
> This function is destroying a hash table completely, and is doing so
> in a context where no new hash table entries can be inserted.  The
> last thing we do is kfree() the hashtable so this must be true.
>
> Therefore, it is much better to just kill off all of this lockdep
> stuff, and recode the teardown loop into something like:
>
>         if (hashbin->hb_type & HB_LOCK)
>                 spin_lock_irqsave(&hashbin->hb_lock, flags);
>         for (i = 0; i < HASHBIN_SIZE; i++) {
>                 while (1) {
>                         queue = dequeue_first((irda_queue_t **) &hashbin->hb_queue[i]);
>
>                         if (!queue)
>                                 break;
>                         if (free_func) {
>                                 if (hashbin->hb_type & HB_LOCK)
>                                         spin_unlock_irqrestore(&hashbin->hb_lock, flags);
>                                 free_func(queue);
>                                 if (hashbin->hb_type & HB_LOCK)
>                                         spin_lock_irqsave(&hashbin->hb_lock, flags);
>                         }
>
>                 }
>         }
>         hashbin->hb_current = NULL;
>         hashbin->magic = ~HB_MAGIC;
>         if (hashbin->hb_type & HB_LOCK)
>                 spin_unlock_irqrestore(&hashbin->hb_lock, flags);
>
> At which point the recursive locking becomes impossible.


Thanks for looking into it! This particular issue bothers my fuzzers
considerably. I agree that removing recursion is better.
So how do we proceed? Will you mail this as a real patch?


* Re: [PATCH v3] net/irda: fix lockdep annotation
  2017-01-19 10:05   ` Dmitry Vyukov
@ 2017-01-19 10:33     ` Dmitry Vyukov
  2017-01-19 16:27     ` David Miller
  1 sibling, 0 replies; 8+ messages in thread
From: Dmitry Vyukov @ 2017-01-19 10:33 UTC (permalink / raw)
  To: David Miller
  Cc: Dave Jones, Samuel Ortiz, Alexander Potapenko, andreyknvl, netdev

On Thu, Jan 19, 2017 at 11:05 AM, Dmitry Vyukov <dvyukov@google.com> wrote:
> On Tue, Jan 17, 2017 at 9:19 PM, David Miller <davem@davemloft.net> wrote:
>> From: Dmitry Vyukov <dvyukov@google.com>
>> Date: Mon, 16 Jan 2017 22:10:52 +0100
>>
>>> The current annotation uses a global variable as recursion counter.
>>> The variable is not atomic nor protected with a mutex, but mutated
>>> by multiple threads. This causes lockdep bug reports episodically:
>>>
>>> BUG: looking up invalid subclass: 4294967295
>>> ...
>>> _raw_spin_lock_irqsave_nested+0x120/0x180
>>> hashbin_delete+0x4fe/0x750
>>> __irias_delete_object+0xab/0x170
>>> irias_delete_object+0x5f/0xc0
>>> ircomm_tty_detach_cable+0x1d5/0x3f0
>>> ...
>>>
>>> Make the hashbin_lock_depth variable atomic to prevent bug reports.
>>>
>>> As is this causes "unused variable 'depth'" warning without LOCKDEP.
>>> So also change raw_spin_lock_irqsave_nested() macro to not cause
>>> the warning without LOCKDEP. Similar to what raw_spin_lock_nested()
>>> already does.
>>>
>>> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
>>> Cc: Dave Jones <davej@redhat.com>
>>> Cc: Samuel Ortiz <samuel@sortiz.org>
>>> Cc: David S. Miller <davem@davemloft.net>
>>> Cc: netdev@vger.kernel.org
>>> Fixes: c7630a4b932af ("[IrDA]: irda lockdep annotation")
>>
>> I took a closer look at this, and really this whole lockdep thing is
>> more than a hack.
>>
>> The basic problem is that the free_func() passed into hashbin_delete()
>> can trigger hashbin operations on other hashtables.
>>
>> This function is destroying a hash table completely, and is doing so
>> in a context where no new hash table entries can be inserted.  The
>> last thing we do is kfree() the hashtable so this must be true.
>>
>> Therefore, it is much better to just kill off all of this lockdep
>> stuff, and recode the teardown loop into something like:
>>
>>         if (hashbin->hb_type & HB_LOCK)
>>                 spin_lock_irqsave(&hashbin->hb_lock, flags);
>>         for (i = 0; i < HASHBIN_SIZE; i++) {
>>                 while (1) {
>>                         queue = dequeue_first((irda_queue_t **) &hashbin->hb_queue[i]);
>>
>>                         if (!queue)
>>                                 break;
>>                         if (free_func) {
>>                                 if (hashbin->hb_type & HB_LOCK)
>>                                         spin_unlock_irqrestore(&hashbin->hb_lock, flags);
>>                                 free_func(queue);
>>                                 if (hashbin->hb_type & HB_LOCK)
>>                                         spin_lock_irqsave(&hashbin->hb_lock, flags);
>>                         }
>>
>>                 }
>>         }
>>         hashbin->hb_current = NULL;
>>         hashbin->magic = ~HB_MAGIC;
>>         if (hashbin->hb_type & HB_LOCK)
>>                 spin_unlock_irqrestore(&hashbin->hb_lock, flags);
>>
>> At which point the recursive locking becomes impossible.
>
>
> Thanks for looking into it! This particular issue bothers my fuzzers
> considerably. I agree that removing recursion is better.
> So do how we proceed? Will you mail this as a real patch?




Meanwhile I applied the following locally:

diff --git a/net/irda/irqueue.c b/net/irda/irqueue.c
index acbe61c7e683..c42d66a9ac8c 100644
--- a/net/irda/irqueue.c
+++ b/net/irda/irqueue.c
@@ -383,9 +383,6 @@ EXPORT_SYMBOL(hashbin_new);
  *    for deallocating this structure if it's complex. If not the user can
  *    just supply kfree, which should take care of the job.
  */
-#ifdef CONFIG_LOCKDEP
-static int hashbin_lock_depth = 0;
-#endif
 int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 {
        irda_queue_t* queue;
@@ -396,22 +393,22 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
        IRDA_ASSERT(hashbin->magic == HB_MAGIC, return -1;);

        /* Synchronize */
-       if ( hashbin->hb_type & HB_LOCK ) {
-               spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags,
-                                        hashbin_lock_depth++);
-       }
+       if ( hashbin->hb_type & HB_LOCK )
+               spin_lock_irqsave(&hashbin->hb_spinlock, flags);

        /*
         *  Free the entries in the hashbin, TODO: use hashbin_clear when
         *  it has been shown to work
         */
-       for (i = 0; i < HASHBIN_SIZE; i ++ ) {
-               queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
-               while (queue ) {
-                       if (free_func)
-                               (*free_func)(queue);
-                       queue = dequeue_first(
-                               (irda_queue_t**) &hashbin->hb_queue[i]);
+       for (i = 0; i < HASHBIN_SIZE; i++) {
+               while ((queue = dequeue_first((irda_queue_t **) &hashbin->hb_queue[i]))) {
+                       if (!free_func)
+                               continue;
+                       if (hashbin->hb_type & HB_LOCK)
+                               spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
+                       free_func(queue);
+                       if (hashbin->hb_type & HB_LOCK)
+                               spin_lock_irqsave(&hashbin->hb_spinlock, flags);
                }
        }

@@ -420,12 +417,8 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
        hashbin->magic = ~HB_MAGIC;

        /* Release lock */
-       if ( hashbin->hb_type & HB_LOCK) {
+       if ( hashbin->hb_type & HB_LOCK)
                spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
-#ifdef CONFIG_LOCKDEP
-               hashbin_lock_depth--;
-#endif
-       }

        /*
         *  Free the hashbin structure



The more I look at this file the stranger it all looks to me... Is it
a queue? hashtable? locked or not? and an iterator at the same time?
And the free_func seems to be NULL only when we know the queue is empty anyway.


* Re: [PATCH v3] net/irda: fix lockdep annotation
  2017-01-19 10:05   ` Dmitry Vyukov
  2017-01-19 10:33     ` Dmitry Vyukov
@ 2017-01-19 16:27     ` David Miller
  2017-01-20 22:53       ` Dmitry Vyukov
  1 sibling, 1 reply; 8+ messages in thread
From: David Miller @ 2017-01-19 16:27 UTC (permalink / raw)
  To: dvyukov; +Cc: davej, samuel, glider, andreyknvl, netdev

From: Dmitry Vyukov <dvyukov@google.com>
Date: Thu, 19 Jan 2017 11:05:36 +0100

> Thanks for looking into it! This particular issue bothers my fuzzers
> considerably. I agree that removing recursion is better.
> So do how we proceed? Will you mail this as a real patch?

Someone needs to test this:

diff --git a/net/irda/irqueue.c b/net/irda/irqueue.c
index acbe61c..160dc89 100644
--- a/net/irda/irqueue.c
+++ b/net/irda/irqueue.c
@@ -383,9 +383,6 @@ EXPORT_SYMBOL(hashbin_new);
  *    for deallocating this structure if it's complex. If not the user can
  *    just supply kfree, which should take care of the job.
  */
-#ifdef CONFIG_LOCKDEP
-static int hashbin_lock_depth = 0;
-#endif
 int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 {
 	irda_queue_t* queue;
@@ -396,22 +393,27 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 	IRDA_ASSERT(hashbin->magic == HB_MAGIC, return -1;);
 
 	/* Synchronize */
-	if ( hashbin->hb_type & HB_LOCK ) {
-		spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags,
-					 hashbin_lock_depth++);
-	}
+	if (hashbin->hb_type & HB_LOCK)
+		spin_lock_irqsave(&hashbin->hb_spinlock, flags);
 
 	/*
 	 *  Free the entries in the hashbin, TODO: use hashbin_clear when
 	 *  it has been shown to work
 	 */
 	for (i = 0; i < HASHBIN_SIZE; i ++ ) {
-		queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
-		while (queue ) {
-			if (free_func)
-				(*free_func)(queue);
-			queue = dequeue_first(
-				(irda_queue_t**) &hashbin->hb_queue[i]);
+		while (1) {
+			queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
+
+			if (!queue)
+				break;
+
+			if (free_func) {
+				if (hashbin->hb_type & HB_LOCK)
+					spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
+				free_func(queue);
+				if (hashbin->hb_type & HB_LOCK)
+					spin_lock_irqsave(&hashbin->hb_spinlock, flags);
+			}
 		}
 	}
 
@@ -420,12 +422,8 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 	hashbin->magic = ~HB_MAGIC;
 
 	/* Release lock */
-	if ( hashbin->hb_type & HB_LOCK) {
+	if (hashbin->hb_type & HB_LOCK)
 		spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
-#ifdef CONFIG_LOCKDEP
-		hashbin_lock_depth--;
-#endif
-	}
 
 	/*
 	 *  Free the hashbin structure


* Re: [PATCH v3] net/irda: fix lockdep annotation
  2017-01-19 16:27     ` David Miller
@ 2017-01-20 22:53       ` Dmitry Vyukov
  2017-02-17 21:13         ` Dmitry Vyukov
  0 siblings, 1 reply; 8+ messages in thread
From: Dmitry Vyukov @ 2017-01-20 22:53 UTC (permalink / raw)
  To: David Miller
  Cc: Dave Jones, Samuel Ortiz, Alexander Potapenko, andreyknvl, netdev

On Thu, Jan 19, 2017 at 5:27 PM, David Miller <davem@davemloft.net> wrote:
> From: Dmitry Vyukov <dvyukov@google.com>
> Date: Thu, 19 Jan 2017 11:05:36 +0100
>
>> Thanks for looking into it! This particular issue bothers my fuzzers
>> considerably. I agree that removing recursion is better.
>> So do how we proceed? Will you mail this as a real patch?
>
> Someone needs to test this:


I've stressed this with the fuzzer for a day.
It gets rid of the lockdep warning, and I did not notice any other
related crashes.

Tested-by: Dmitry Vyukov <dvyukov@google.com>

Thanks!


> diff --git a/net/irda/irqueue.c b/net/irda/irqueue.c
> index acbe61c..160dc89 100644
> --- a/net/irda/irqueue.c
> +++ b/net/irda/irqueue.c
> @@ -383,9 +383,6 @@ EXPORT_SYMBOL(hashbin_new);
>   *    for deallocating this structure if it's complex. If not the user can
>   *    just supply kfree, which should take care of the job.
>   */
> -#ifdef CONFIG_LOCKDEP
> -static int hashbin_lock_depth = 0;
> -#endif
>  int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
>  {
>         irda_queue_t* queue;
> @@ -396,22 +393,27 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
>         IRDA_ASSERT(hashbin->magic == HB_MAGIC, return -1;);
>
>         /* Synchronize */
> -       if ( hashbin->hb_type & HB_LOCK ) {
> -               spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags,
> -                                        hashbin_lock_depth++);
> -       }
> +       if (hashbin->hb_type & HB_LOCK)
> +               spin_lock_irqsave(&hashbin->hb_spinlock, flags);
>
>         /*
>          *  Free the entries in the hashbin, TODO: use hashbin_clear when
>          *  it has been shown to work
>          */
>         for (i = 0; i < HASHBIN_SIZE; i ++ ) {
> -               queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
> -               while (queue ) {
> -                       if (free_func)
> -                               (*free_func)(queue);
> -                       queue = dequeue_first(
> -                               (irda_queue_t**) &hashbin->hb_queue[i]);
> +               while (1) {
> +                       queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
> +
> +                       if (!queue)
> +                               break;
> +
> +                       if (free_func) {
> +                               if (hashbin->hb_type & HB_LOCK)
> +                                       spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
> +                               free_func(queue);
> +                               if (hashbin->hb_type & HB_LOCK)
> +                                       spin_lock_irqsave(&hashbin->hb_spinlock, flags);
> +                       }
>                 }
>         }
>
> @@ -420,12 +422,8 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
>         hashbin->magic = ~HB_MAGIC;
>
>         /* Release lock */
> -       if ( hashbin->hb_type & HB_LOCK) {
> +       if (hashbin->hb_type & HB_LOCK)
>                 spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
> -#ifdef CONFIG_LOCKDEP
> -               hashbin_lock_depth--;
> -#endif
> -       }
>
>         /*
>          *  Free the hashbin structure


* Re: [PATCH v3] net/irda: fix lockdep annotation
  2017-01-20 22:53       ` Dmitry Vyukov
@ 2017-02-17 21:13         ` Dmitry Vyukov
  2017-02-17 21:20           ` David Miller
  0 siblings, 1 reply; 8+ messages in thread
From: Dmitry Vyukov @ 2017-02-17 21:13 UTC (permalink / raw)
  To: David Miller
  Cc: Dave Jones, Samuel Ortiz, Alexander Potapenko, andreyknvl, netdev

On Fri, Jan 20, 2017 at 11:53 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> On Thu, Jan 19, 2017 at 5:27 PM, David Miller <davem@davemloft.net> wrote:
>> From: Dmitry Vyukov <dvyukov@google.com>
>> Date: Thu, 19 Jan 2017 11:05:36 +0100
>>
>>> Thanks for looking into it! This particular issue bothers my fuzzers
>>> considerably. I agree that removing recursion is better.
>>> So do how we proceed? Will you mail this as a real patch?
>>
>> Someone needs to test this:
>
>
> I've stressed this with the fuzzer for a day.
> It gets rid of the lockdep warning, and I did not notice any other
> related crashes.
>
> Tested-by: Dmitry Vyukov <dvyukov@google.com>
>
> Thanks!


Hello David,

Have you merged this into some tree? Can't find it. If not, please
queue this for the next release.
This hurts us badly, but I don't want to turn off lockdep entirely.

Thanks

>> diff --git a/net/irda/irqueue.c b/net/irda/irqueue.c
>> index acbe61c..160dc89 100644
>> --- a/net/irda/irqueue.c
>> +++ b/net/irda/irqueue.c
>> @@ -383,9 +383,6 @@ EXPORT_SYMBOL(hashbin_new);
>>   *    for deallocating this structure if it's complex. If not the user can
>>   *    just supply kfree, which should take care of the job.
>>   */
>> -#ifdef CONFIG_LOCKDEP
>> -static int hashbin_lock_depth = 0;
>> -#endif
>>  int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
>>  {
>>         irda_queue_t* queue;
>> @@ -396,22 +393,27 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
>>         IRDA_ASSERT(hashbin->magic == HB_MAGIC, return -1;);
>>
>>         /* Synchronize */
>> -       if ( hashbin->hb_type & HB_LOCK ) {
>> -               spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags,
>> -                                        hashbin_lock_depth++);
>> -       }
>> +       if (hashbin->hb_type & HB_LOCK)
>> +               spin_lock_irqsave(&hashbin->hb_spinlock, flags);
>>
>>         /*
>>          *  Free the entries in the hashbin, TODO: use hashbin_clear when
>>          *  it has been shown to work
>>          */
>>         for (i = 0; i < HASHBIN_SIZE; i ++ ) {
>> -               queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
>> -               while (queue ) {
>> -                       if (free_func)
>> -                               (*free_func)(queue);
>> -                       queue = dequeue_first(
>> -                               (irda_queue_t**) &hashbin->hb_queue[i]);
>> +               while (1) {
>> +                       queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
>> +
>> +                       if (!queue)
>> +                               break;
>> +
>> +                       if (free_func) {
>> +                               if (hashbin->hb_type & HB_LOCK)
>> +                                       spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
>> +                               free_func(queue);
>> +                               if (hashbin->hb_type & HB_LOCK)
>> +                                       spin_lock_irqsave(&hashbin->hb_spinlock, flags);
>> +                       }
>>                 }
>>         }
>>
>> @@ -420,12 +422,8 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
>>         hashbin->magic = ~HB_MAGIC;
>>
>>         /* Release lock */
>> -       if ( hashbin->hb_type & HB_LOCK) {
>> +       if (hashbin->hb_type & HB_LOCK)
>>                 spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
>> -#ifdef CONFIG_LOCKDEP
>> -               hashbin_lock_depth--;
>> -#endif
>> -       }
>>
>>         /*
>>          *  Free the hashbin structure


* Re: [PATCH v3] net/irda: fix lockdep annotation
  2017-02-17 21:13         ` Dmitry Vyukov
@ 2017-02-17 21:20           ` David Miller
  0 siblings, 0 replies; 8+ messages in thread
From: David Miller @ 2017-02-17 21:20 UTC (permalink / raw)
  To: dvyukov; +Cc: davej, samuel, glider, andreyknvl, netdev

From: Dmitry Vyukov <dvyukov@google.com>
Date: Fri, 17 Feb 2017 22:13:58 +0100

> On Fri, Jan 20, 2017 at 11:53 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>> On Thu, Jan 19, 2017 at 5:27 PM, David Miller <davem@davemloft.net> wrote:
>>> From: Dmitry Vyukov <dvyukov@google.com>
>>> Date: Thu, 19 Jan 2017 11:05:36 +0100
>>>
>>>> Thanks for looking into it! This particular issue bothers my fuzzers
>>>> considerably. I agree that removing recursion is better.
>>>> So do how we proceed? Will you mail this as a real patch?
>>>
>>> Someone needs to test this:
>>
>>
>> I've stressed this with the fuzzer for a day.
>> It gets rid of the lockdep warning, and I did not notice any other
>> related crashes.
>>
>> Tested-by: Dmitry Vyukov <dvyukov@google.com>
>>
>> Thanks!
> 
> 
> Hello David,
> 
> Have you merged this into some tree? Can't find it. If not, please
> queue this for the next release.
> This hurts us badly, but I don't want to turn off lockdep entirely.

Aha, thanks for reminding me.  I just committed the following and will
queue it up for -stable too:

====================
[PATCH] irda: Fix lockdep annotations in hashbin_delete().

A nested lock depth was added to the hashbin_delete() code, but it
doesn't actually work so well and results in tons of lockdep splats.

Fix the code instead to properly drop the lock around the operation
and just keep peeking the head of the hashbin queue.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
 net/irda/irqueue.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/net/irda/irqueue.c b/net/irda/irqueue.c
index acbe61c..160dc89 100644
--- a/net/irda/irqueue.c
+++ b/net/irda/irqueue.c
@@ -383,9 +383,6 @@ EXPORT_SYMBOL(hashbin_new);
  *    for deallocating this structure if it's complex. If not the user can
  *    just supply kfree, which should take care of the job.
  */
-#ifdef CONFIG_LOCKDEP
-static int hashbin_lock_depth = 0;
-#endif
 int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 {
 	irda_queue_t* queue;
@@ -396,22 +393,27 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 	IRDA_ASSERT(hashbin->magic == HB_MAGIC, return -1;);
 
 	/* Synchronize */
-	if ( hashbin->hb_type & HB_LOCK ) {
-		spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags,
-					 hashbin_lock_depth++);
-	}
+	if (hashbin->hb_type & HB_LOCK)
+		spin_lock_irqsave(&hashbin->hb_spinlock, flags);
 
 	/*
 	 *  Free the entries in the hashbin, TODO: use hashbin_clear when
 	 *  it has been shown to work
 	 */
 	for (i = 0; i < HASHBIN_SIZE; i ++ ) {
-		queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
-		while (queue ) {
-			if (free_func)
-				(*free_func)(queue);
-			queue = dequeue_first(
-				(irda_queue_t**) &hashbin->hb_queue[i]);
+		while (1) {
+			queue = dequeue_first((irda_queue_t**) &hashbin->hb_queue[i]);
+
+			if (!queue)
+				break;
+
+			if (free_func) {
+				if (hashbin->hb_type & HB_LOCK)
+					spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
+				free_func(queue);
+				if (hashbin->hb_type & HB_LOCK)
+					spin_lock_irqsave(&hashbin->hb_spinlock, flags);
+			}
 		}
 	}
 
@@ -420,12 +422,8 @@ int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func)
 	hashbin->magic = ~HB_MAGIC;
 
 	/* Release lock */
-	if ( hashbin->hb_type & HB_LOCK) {
+	if (hashbin->hb_type & HB_LOCK)
 		spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
-#ifdef CONFIG_LOCKDEP
-		hashbin_lock_depth--;
-#endif
-	}
 
 	/*
 	 *  Free the hashbin structure
-- 
2.4.11

