* Real deadlock being suppressed in sbitmap
@ 2019-01-14 17:14 Steven Rostedt
  2019-01-14 19:43 ` Jens Axboe
  2019-01-15  3:23 ` Ming Lei
  0 siblings, 2 replies; 8+ messages in thread
From: Steven Rostedt @ 2019-01-14 17:14 UTC (permalink / raw)
  To: Jens Axboe
  Cc: LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Clark Williams, Bart Van Assche,
	Ming Lei

It was brought to my attention (by this creating a splat in the RT tree
too) that this code:

static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
{
	unsigned long mask, val;
	unsigned long __maybe_unused flags;
	bool ret = false;

	/* Silence bogus lockdep warning */
#if defined(CONFIG_LOCKDEP)
	local_irq_save(flags);
#endif
	spin_lock(&sb->map[index].swap_lock);

Commit 58ab5e32e6f ("sbitmap: silence bogus lockdep IRQ warning")
states the following:

    For this case, it's a false positive. The swap_lock is used from process
    context only, when we swap the bits in the word and cleared mask. We
    also end up doing that when we are getting a driver tag, from the
    blk_mq_mark_tag_wait(), and from there we hold the waitqueue lock with
    IRQs disabled. However, this isn't from an actual IRQ, it's still
    process context.

The thing is, lockdep doesn't classify a lock as "irq-safe" based on
whether it happens to be taken with interrupts disabled. It marks a lock
irq-safe when it sees that lock actually acquired in interrupt context.
Further in that commit we have this:

   [  106.097386] fio/1043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
    [  106.098231] 000000004c43fa71
    (&(&sb->map[i].swap_lock)->rlock){+.+.}, at: sbitmap_get+0xd5/0x22c
    [  106.099431]
    [  106.099431] and this task is already holding:
    [  106.100229] 000000007eec8b2f
    (&(&hctx->dispatch_wait_lock)->rlock){....}, at:
    blk_mq_dispatch_rq_list+0x4c1/0xd7c
    [  106.101630] which would create a new lock dependency:
    [  106.102326]  (&(&hctx->dispatch_wait_lock)->rlock){....} ->
    (&(&sb->map[i].swap_lock)->rlock){+.+.}

Saying that you are trying to take the swap_lock while holding the
dispatch_wait_lock.


    [  106.103553] but this new dependency connects a SOFTIRQ-irq-safe lock:
    [  106.104580]  (&sbq->ws[i].wait){..-.}

Which means that there's already a chain of:

 sbq->ws[i].wait -> dispatch_wait_lock

    [  106.104582]
    [  106.104582] ... which became SOFTIRQ-irq-safe at:
    [  106.105751]   _raw_spin_lock_irqsave+0x4b/0x82
    [  106.106284]   __wake_up_common_lock+0x119/0x1b9
    [  106.106825]   sbitmap_queue_wake_up+0x33f/0x383
    [  106.107456]   sbitmap_queue_clear+0x4c/0x9a
    [  106.108046]   __blk_mq_free_request+0x188/0x1d3
    [  106.108581]   blk_mq_free_request+0x23b/0x26b
    [  106.109102]   scsi_end_request+0x345/0x5d7
    [  106.109587]   scsi_io_completion+0x4b5/0x8f0
    [  106.110099]   scsi_finish_command+0x412/0x456
    [  106.110615]   scsi_softirq_done+0x23f/0x29b
    [  106.111115]   blk_done_softirq+0x2a7/0x2e6
    [  106.111608]   __do_softirq+0x360/0x6ad
    [  106.112062]   run_ksoftirqd+0x2f/0x5b
    [  106.112499]   smpboot_thread_fn+0x3a5/0x3db
    [  106.113000]   kthread+0x1d4/0x1e4
    [  106.113457]   ret_from_fork+0x3a/0x50


We see that sbq->ws[i].wait was taken from a softirq context.
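That is what makes a lock "irq-safe" to lockdep: being acquired in
actual interrupt context, not merely being acquired with interrupts
disabled. As a hypothetical sketch of the general pattern (made-up lock
names and functions, not the real block-layer code):

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(a_lock);	/* like sbq->ws[i].wait: taken in irq context */
static DEFINE_SPINLOCK(b_lock);	/* like swap_lock: process context only       */

/* Taking a_lock from an actual interrupt handler is what makes lockdep
 * classify it as irq-safe. */
static irqreturn_t a_irq_handler(int irq, void *data)
{
	spin_lock(&a_lock);
	spin_unlock(&a_lock);
	return IRQ_HANDLED;
}

/* b_lock is normally taken in process context with interrupts enabled,
 * so lockdep classifies it as irq-unsafe. */
static void b_normal_path(void)
{
	spin_lock(&b_lock);
	spin_unlock(&b_lock);
}

/* This creates the dependency a_lock -> b_lock that lockdep warns
 * about: an irq-safe lock now waits on an irq-unsafe one. The fact that
 * interrupts happen to be disabled at this point does not make the
 * report a false positive. */
static void creates_dependency(void)
{
	spin_lock_irq(&a_lock);
	spin_lock(&b_lock);
	spin_unlock(&b_lock);
	spin_unlock_irq(&a_lock);
}

Back to the lockdep report: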



    [  106.131226] Chain exists of:
    [  106.131226]   &sbq->ws[i].wait -->
    &(&hctx->dispatch_wait_lock)->rlock -->
    &(&sb->map[i].swap_lock)->rlock

This is telling us that we now have a chain of:

 sbq->ws[i].wait -> dispatch_wait_lock -> swap_lock

    [  106.131226]
    [  106.132865]  Possible interrupt unsafe locking scenario:
    [  106.132865]
    [  106.133659]        CPU0                    CPU1
    [  106.134194]        ----                    ----
    [  106.134733]   lock(&(&sb->map[i].swap_lock)->rlock);
    [  106.135318]                                local_irq_disable();
    [  106.136014]                                lock(&sbq->ws[i].wait);
    [  106.136747]
    lock(&(&hctx->dispatch_wait_lock)->rlock);
    [  106.137742]   <Interrupt>
    [  106.138110]     lock(&sbq->ws[i].wait);
    [  106.138625]
    [  106.138625]  *** DEADLOCK ***
    [  106.138625]

I need to make this more than just two levels deep to show the real
issue. Here's the scenario:


	CPU0			CPU1			CPU2
	----			----			----
  lock(swap_lock)
			local_irq_disable()
			lock(dispatch_lock);
							local_irq_disable()
							lock(sbq->ws[i].wait)
							lock(dispatch_lock)
			lock(swap_lock)
  <interrupt>
  lock(sbq->ws[i].wait)


DEADLOCK!

In other words, it is not bogus, and there is real potential for a
deadlock here. Please talk with the lockdep maintainers before declaring
a reported deadlock bogus, because lockdep is seldom wrong.

-- Steve


* Re: Real deadlock being suppressed in sbitmap
  2019-01-14 17:14 Real deadlock being suppressed in sbitmap Steven Rostedt
@ 2019-01-14 19:43 ` Jens Axboe
  2019-01-15  3:23 ` Ming Lei
  1 sibling, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2019-01-14 19:43 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Clark Williams, Bart Van Assche,
	Ming Lei

On 1/14/19 10:14 AM, Steven Rostedt wrote:
> It was brought to my attention (by this creating a splat in the RT tree
> too) this code:
> 
> static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
> {
> 	unsigned long mask, val;
> 	unsigned long __maybe_unused flags;
> 	bool ret = false;
> 
> 	/* Silence bogus lockdep warning */
> #if defined(CONFIG_LOCKDEP)
> 	local_irq_save(flags);
> #endif
> 	spin_lock(&sb->map[index].swap_lock);
> 
> Commit 58ab5e32e6f ("sbitmap: silence bogus lockdep IRQ warning")
> states the following:
> 
>     For this case, it's a false positive. The swap_lock is used from process
>     context only, when we swap the bits in the word and cleared mask. We
>     also end up doing that when we are getting a driver tag, from the
>     blk_mq_mark_tag_wait(), and from there we hold the waitqueue lock with
>     IRQs disabled. However, this isn't from an actual IRQ, it's still
>     process context.
> 
> The thing is, lockdep doesn't define a lock as "irq-safe" based on it
> being taken under interrupts disabled or not. It detects when locks are
> used in actual interrupts. Further in that commit we have this:
> 
>    [  106.097386] fio/1043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
>     [  106.098231] 000000004c43fa71
>     (&(&sb->map[i].swap_lock)->rlock){+.+.}, at: sbitmap_get+0xd5/0x22c
>     [  106.099431]
>     [  106.099431] and this task is already holding:
>     [  106.100229] 000000007eec8b2f
>     (&(&hctx->dispatch_wait_lock)->rlock){....}, at:
>     blk_mq_dispatch_rq_list+0x4c1/0xd7c
>     [  106.101630] which would create a new lock dependency:
>     [  106.102326]  (&(&hctx->dispatch_wait_lock)->rlock){....} ->
>     (&(&sb->map[i].swap_lock)->rlock){+.+.}
> 
> Saying that you are trying to take the swap_lock while holding the
> dispatch_wait_lock.
> 
> 
>     [  106.103553] but this new dependency connects a SOFTIRQ-irq-safe lock:
>     [  106.104580]  (&sbq->ws[i].wait){..-.}
> 
> Which means that there's already a chain of:
> 
>  sbq->ws[i].wait -> dispatch_wait_lock
> 
>     [  106.104582]
>     [  106.104582] ... which became SOFTIRQ-irq-safe at:
>     [  106.105751]   _raw_spin_lock_irqsave+0x4b/0x82
>     [  106.106284]   __wake_up_common_lock+0x119/0x1b9
>     [  106.106825]   sbitmap_queue_wake_up+0x33f/0x383
>     [  106.107456]   sbitmap_queue_clear+0x4c/0x9a
>     [  106.108046]   __blk_mq_free_request+0x188/0x1d3
>     [  106.108581]   blk_mq_free_request+0x23b/0x26b
>     [  106.109102]   scsi_end_request+0x345/0x5d7
>     [  106.109587]   scsi_io_completion+0x4b5/0x8f0
>     [  106.110099]   scsi_finish_command+0x412/0x456
>     [  106.110615]   scsi_softirq_done+0x23f/0x29b
>     [  106.111115]   blk_done_softirq+0x2a7/0x2e6
>     [  106.111608]   __do_softirq+0x360/0x6ad
>     [  106.112062]   run_ksoftirqd+0x2f/0x5b
>     [  106.112499]   smpboot_thread_fn+0x3a5/0x3db
>     [  106.113000]   kthread+0x1d4/0x1e4
>     [  106.113457]   ret_from_fork+0x3a/0x50
> 
> 
> We see that sbq->ws[i].wait was taken from a softirq context.
> 
> 
> 
>     [  106.131226] Chain exists of:
>     [  106.131226]   &sbq->ws[i].wait -->
>     &(&hctx->dispatch_wait_lock)->rlock -->
>     &(&sb->map[i].swap_lock)->rlock
> 
> This is telling us that we now have a chain of:
> 
>  sbq->ws[i].wait -> dispatch_wait_lock -> swap_lock
> 
>     [  106.131226]
>     [  106.132865]  Possible interrupt unsafe locking scenario:
>     [  106.132865]
>     [  106.133659]        CPU0                    CPU1
>     [  106.134194]        ----                    ----
>     [  106.134733]   lock(&(&sb->map[i].swap_lock)->rlock);
>     [  106.135318]                                local_irq_disable();
>     [  106.136014]                                lock(&sbq->ws[i].wait);
>     [  106.136747]
>     lock(&(&hctx->dispatch_wait_lock)->rlock);
>     [  106.137742]   <Interrupt>
>     [  106.138110]     lock(&sbq->ws[i].wait);
>     [  106.138625]
>     [  106.138625]  *** DEADLOCK ***
>     [  106.138625]
> 
> I need to make this more than just two levels deep. Here's the issue:
> 
> 
> 	CPU0			CPU1			CPU2
> 	----			----			----
>   lock(swap_lock)
> 			local_irq_disable()
> 			lock(dispatch_lock);
> 							local_irq_disable()
> 							lock(sbq->ws[i].wait)
> 							lock(dispatch_lock)
> 			lock(swap_lock)
>   <interrupt>
>   lock(sbq->ws[i].wait)
> 
> 
> DEADLOCK!
> 
> In other words, it is not bogus, and can be a real potential for a
> deadlock. Please talk with the lockdep maintainers before saying
> there's a bogus deadlock, because lockdep is seldom wrong.

Thanks Steven, your analysis looks good. I got fooled by the fact that
the path where we do grab them both is never in irq/soft-irq context,
but that doesn't change the fact that the wq lock IS grabbed in irq
context.

Patch also looks good, but I see Linus already applied it.

-- 
Jens Axboe



* Re: Real deadlock being suppressed in sbitmap
  2019-01-14 17:14 Real deadlock being suppressed in sbitmap Steven Rostedt
  2019-01-14 19:43 ` Jens Axboe
@ 2019-01-15  3:23 ` Ming Lei
  2019-01-15  3:41   ` Jens Axboe
  2019-01-15  3:50   ` Steven Rostedt
  1 sibling, 2 replies; 8+ messages in thread
From: Ming Lei @ 2019-01-15  3:23 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jens Axboe, LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Clark Williams, Bart Van Assche

Hi Steven,

On Mon, Jan 14, 2019 at 12:14:14PM -0500, Steven Rostedt wrote:
> It was brought to my attention (by this creating a splat in the RT tree
> too) this code:
> 
> static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
> {
> 	unsigned long mask, val;
> 	unsigned long __maybe_unused flags;
> 	bool ret = false;
> 
> 	/* Silence bogus lockdep warning */
> #if defined(CONFIG_LOCKDEP)
> 	local_irq_save(flags);
> #endif
> 	spin_lock(&sb->map[index].swap_lock);
> 
> Commit 58ab5e32e6f ("sbitmap: silence bogus lockdep IRQ warning")
> states the following:
> 
>     For this case, it's a false positive. The swap_lock is used from process
>     context only, when we swap the bits in the word and cleared mask. We
>     also end up doing that when we are getting a driver tag, from the
>     blk_mq_mark_tag_wait(), and from there we hold the waitqueue lock with
>     IRQs disabled. However, this isn't from an actual IRQ, it's still
>     process context.
> 
> The thing is, lockdep doesn't define a lock as "irq-safe" based on it
> being taken under interrupts disabled or not. It detects when locks are
> used in actual interrupts. Further in that commit we have this:
> 
>    [  106.097386] fio/1043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
>     [  106.098231] 000000004c43fa71
>     (&(&sb->map[i].swap_lock)->rlock){+.+.}, at: sbitmap_get+0xd5/0x22c
>     [  106.099431]
>     [  106.099431] and this task is already holding:
>     [  106.100229] 000000007eec8b2f
>     (&(&hctx->dispatch_wait_lock)->rlock){....}, at:
>     blk_mq_dispatch_rq_list+0x4c1/0xd7c
>     [  106.101630] which would create a new lock dependency:
>     [  106.102326]  (&(&hctx->dispatch_wait_lock)->rlock){....} ->
>     (&(&sb->map[i].swap_lock)->rlock){+.+.}
> 
> Saying that you are trying to take the swap_lock while holding the
> dispatch_wait_lock.
> 
> 
>     [  106.103553] but this new dependency connects a SOFTIRQ-irq-safe lock:
>     [  106.104580]  (&sbq->ws[i].wait){..-.}
> 
> Which means that there's already a chain of:
> 
>  sbq->ws[i].wait -> dispatch_wait_lock
> 
>     [  106.104582]
>     [  106.104582] ... which became SOFTIRQ-irq-safe at:
>     [  106.105751]   _raw_spin_lock_irqsave+0x4b/0x82
>     [  106.106284]   __wake_up_common_lock+0x119/0x1b9
>     [  106.106825]   sbitmap_queue_wake_up+0x33f/0x383
>     [  106.107456]   sbitmap_queue_clear+0x4c/0x9a
>     [  106.108046]   __blk_mq_free_request+0x188/0x1d3
>     [  106.108581]   blk_mq_free_request+0x23b/0x26b
>     [  106.109102]   scsi_end_request+0x345/0x5d7
>     [  106.109587]   scsi_io_completion+0x4b5/0x8f0
>     [  106.110099]   scsi_finish_command+0x412/0x456
>     [  106.110615]   scsi_softirq_done+0x23f/0x29b
>     [  106.111115]   blk_done_softirq+0x2a7/0x2e6
>     [  106.111608]   __do_softirq+0x360/0x6ad
>     [  106.112062]   run_ksoftirqd+0x2f/0x5b
>     [  106.112499]   smpboot_thread_fn+0x3a5/0x3db
>     [  106.113000]   kthread+0x1d4/0x1e4
>     [  106.113457]   ret_from_fork+0x3a/0x50
> 
> 
> We see that sbq->ws[i].wait was taken from a softirq context.

Actually, sbq->ws[i].wait is taken from softirq context only in the
single-queue case, see __blk_mq_complete_request(). For multi-queue,
sbq->ws[i].wait is taken from hardirq context.

> 
> 
> 
>     [  106.131226] Chain exists of:
>     [  106.131226]   &sbq->ws[i].wait -->
>     &(&hctx->dispatch_wait_lock)->rlock -->
>     &(&sb->map[i].swap_lock)->rlock
> 
> This is telling us that we now have a chain of:
> 
>  sbq->ws[i].wait -> dispatch_wait_lock -> swap_lock
> 
>     [  106.131226]
>     [  106.132865]  Possible interrupt unsafe locking scenario:
>     [  106.132865]
>     [  106.133659]        CPU0                    CPU1
>     [  106.134194]        ----                    ----
>     [  106.134733]   lock(&(&sb->map[i].swap_lock)->rlock);
>     [  106.135318]                                local_irq_disable();
>     [  106.136014]                                lock(&sbq->ws[i].wait);
>     [  106.136747]
>     lock(&(&hctx->dispatch_wait_lock)->rlock);
>     [  106.137742]   <Interrupt>
>     [  106.138110]     lock(&sbq->ws[i].wait);
>     [  106.138625]
>     [  106.138625]  *** DEADLOCK ***
>     [  106.138625]
> 
> I need to make this more than just two levels deep. Here's the issue:
> 
> 
> 	CPU0			CPU1			CPU2
> 	----			----			----
>   lock(swap_lock)
> 			local_irq_disable()
> 			lock(dispatch_lock);
> 							local_irq_disable()
> 							lock(sbq->ws[i].wait)
> 							lock(dispatch_lock)
> 			lock(swap_lock)
>   <interrupt>
>   lock(sbq->ws[i].wait)

I guess the above 'dispatch_lock' is actually 'dispatch_wait_lock', which
is always acquired after sbq->ws[i].wait is held, so I think the above
description about CPU1/CPU2 may not be possible or correct.

Thinking about the original lockdep log further, it looks like a real deadlock:

    [  106.132865]  Possible interrupt unsafe locking scenario:
    [  106.132865]
    [  106.133659]        CPU0                    CPU1
    [  106.134194]        ----                    ----
    [  106.134733]   lock(&(&sb->map[i].swap_lock)->rlock);
    [  106.135318]                                local_irq_disable();
    [  106.136014]                                lock(&sbq->ws[i].wait);
    [  106.136747]                                lock(&(&hctx->dispatch_wait_lock)->rlock);
    [  106.137742]   <Interrupt>
    [  106.138110]     lock(&sbq->ws[i].wait);

Given that 'swap_lock' can be acquired from blk_mq_dispatch_rq_list() via
blk_mq_get_driver_tag() directly, the above deadlock may well be possible.
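
For reference, the nesting that creates this dependency looks roughly
like the following (reconstructed from the lockdep reports in this
thread, so the exact call sites are approximate):

    blk_mq_dispatch_rq_list()
      blk_mq_mark_tag_wait()
        spin_lock_irq(&sbq->ws[i].wait.lock)       /* waitqueue lock, IRQs off */
        spin_lock(&hctx->dispatch_wait_lock)
        blk_mq_get_driver_tag()
          blk_mq_get_tag()
            __sbitmap_queue_get()
              sbitmap_get()
                sbitmap_deferred_clear()
                  spin_lock(&sb->map[index].swap_lock)  /* IRQ-unsafe */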

It sounds like the correct fix may be the following one, and the irqsave
cost should be fine given that sbitmap_deferred_clear() is only triggered
when a word has run out of free bits.
--

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 65c2d06250a6..24d62d7894cb 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -26,14 +26,11 @@
 static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
 {
 	unsigned long mask, val;
-	unsigned long __maybe_unused flags;
+	unsigned long flags;
 	bool ret = false;
 
 	/* Silence bogus lockdep warning */
-#if defined(CONFIG_LOCKDEP)
-	local_irq_save(flags);
-#endif
-	spin_lock(&sb->map[index].swap_lock);
+	spin_lock_irqsave(&sb->map[index].swap_lock, flags);
 
 	if (!sb->map[index].cleared)
 		goto out_unlock;
@@ -54,10 +51,7 @@ static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
 
 	ret = true;
 out_unlock:
-	spin_unlock(&sb->map[index].swap_lock);
-#if defined(CONFIG_LOCKDEP)
-	local_irq_restore(flags);
-#endif
+	spin_unlock_irqrestore(&sb->map[index].swap_lock, flags);
 	return ret;
 }
 

Thanks,
Ming


* Re: Real deadlock being suppressed in sbitmap
  2019-01-15  3:23 ` Ming Lei
@ 2019-01-15  3:41   ` Jens Axboe
  2019-01-15  3:46     ` Ming Lei
  2019-01-15  3:50   ` Steven Rostedt
  1 sibling, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2019-01-15  3:41 UTC (permalink / raw)
  To: Ming Lei, Steven Rostedt
  Cc: LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Clark Williams, Bart Van Assche

On 1/14/19 8:23 PM, Ming Lei wrote:
> Hi Steven,
> 
> On Mon, Jan 14, 2019 at 12:14:14PM -0500, Steven Rostedt wrote:
>> It was brought to my attention (by this creating a splat in the RT tree
>> too) this code:
>>
>> static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
>> {
>> 	unsigned long mask, val;
>> 	unsigned long __maybe_unused flags;
>> 	bool ret = false;
>>
>> 	/* Silence bogus lockdep warning */
>> #if defined(CONFIG_LOCKDEP)
>> 	local_irq_save(flags);
>> #endif
>> 	spin_lock(&sb->map[index].swap_lock);
>>
>> Commit 58ab5e32e6f ("sbitmap: silence bogus lockdep IRQ warning")
>> states the following:
>>
>>     For this case, it's a false positive. The swap_lock is used from process
>>     context only, when we swap the bits in the word and cleared mask. We
>>     also end up doing that when we are getting a driver tag, from the
>>     blk_mq_mark_tag_wait(), and from there we hold the waitqueue lock with
>>     IRQs disabled. However, this isn't from an actual IRQ, it's still
>>     process context.
>>
>> The thing is, lockdep doesn't define a lock as "irq-safe" based on it
>> being taken under interrupts disabled or not. It detects when locks are
>> used in actual interrupts. Further in that commit we have this:
>>
>>    [  106.097386] fio/1043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
>>     [  106.098231] 000000004c43fa71
>>     (&(&sb->map[i].swap_lock)->rlock){+.+.}, at: sbitmap_get+0xd5/0x22c
>>     [  106.099431]
>>     [  106.099431] and this task is already holding:
>>     [  106.100229] 000000007eec8b2f
>>     (&(&hctx->dispatch_wait_lock)->rlock){....}, at:
>>     blk_mq_dispatch_rq_list+0x4c1/0xd7c
>>     [  106.101630] which would create a new lock dependency:
>>     [  106.102326]  (&(&hctx->dispatch_wait_lock)->rlock){....} ->
>>     (&(&sb->map[i].swap_lock)->rlock){+.+.}
>>
>> Saying that you are trying to take the swap_lock while holding the
>> dispatch_wait_lock.
>>
>>
>>     [  106.103553] but this new dependency connects a SOFTIRQ-irq-safe lock:
>>     [  106.104580]  (&sbq->ws[i].wait){..-.}
>>
>> Which means that there's already a chain of:
>>
>>  sbq->ws[i].wait -> dispatch_wait_lock
>>
>>     [  106.104582]
>>     [  106.104582] ... which became SOFTIRQ-irq-safe at:
>>     [  106.105751]   _raw_spin_lock_irqsave+0x4b/0x82
>>     [  106.106284]   __wake_up_common_lock+0x119/0x1b9
>>     [  106.106825]   sbitmap_queue_wake_up+0x33f/0x383
>>     [  106.107456]   sbitmap_queue_clear+0x4c/0x9a
>>     [  106.108046]   __blk_mq_free_request+0x188/0x1d3
>>     [  106.108581]   blk_mq_free_request+0x23b/0x26b
>>     [  106.109102]   scsi_end_request+0x345/0x5d7
>>     [  106.109587]   scsi_io_completion+0x4b5/0x8f0
>>     [  106.110099]   scsi_finish_command+0x412/0x456
>>     [  106.110615]   scsi_softirq_done+0x23f/0x29b
>>     [  106.111115]   blk_done_softirq+0x2a7/0x2e6
>>     [  106.111608]   __do_softirq+0x360/0x6ad
>>     [  106.112062]   run_ksoftirqd+0x2f/0x5b
>>     [  106.112499]   smpboot_thread_fn+0x3a5/0x3db
>>     [  106.113000]   kthread+0x1d4/0x1e4
>>     [  106.113457]   ret_from_fork+0x3a/0x50
>>
>>
>> We see that sbq->ws[i].wait was taken from a softirq context.
> 
> Actually sbq->ws[i].wait is taken from a softirq context only in case
> of single-queue, see __blk_mq_complete_request(). For multiple queue,
> sbq->ws[i].wait is taken from hardirq context.

That's a good point, but that's just the current implementation; we can't
assume any of those relationships. Any completion can happen from
softirq or hardirq, so the patch is inadequate.

> Sounds the correct fix may be the following one, and the irqsave cost
> should be fine given sbitmap_deferred_clear is only triggered when one
> word is run out of.

Yes, the _bh() variant isn't going to cut it. Can you send this patch
against Linus's master?

-- 
Jens Axboe



* Re: Real deadlock being suppressed in sbitmap
  2019-01-15  3:41   ` Jens Axboe
@ 2019-01-15  3:46     ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2019-01-15  3:46 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Steven Rostedt, LKML, Linus Torvalds, Andrew Morton,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Clark Williams,
	Bart Van Assche

On Mon, Jan 14, 2019 at 08:41:16PM -0700, Jens Axboe wrote:
> On 1/14/19 8:23 PM, Ming Lei wrote:
> > Hi Steven,
> > 
> > On Mon, Jan 14, 2019 at 12:14:14PM -0500, Steven Rostedt wrote:
> >> It was brought to my attention (by this creating a splat in the RT tree
> >> too) this code:
> >>
> >> static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
> >> {
> >> 	unsigned long mask, val;
> >> 	unsigned long __maybe_unused flags;
> >> 	bool ret = false;
> >>
> >> 	/* Silence bogus lockdep warning */
> >> #if defined(CONFIG_LOCKDEP)
> >> 	local_irq_save(flags);
> >> #endif
> >> 	spin_lock(&sb->map[index].swap_lock);
> >>
> >> Commit 58ab5e32e6f ("sbitmap: silence bogus lockdep IRQ warning")
> >> states the following:
> >>
> >>     For this case, it's a false positive. The swap_lock is used from process
> >>     context only, when we swap the bits in the word and cleared mask. We
> >>     also end up doing that when we are getting a driver tag, from the
> >>     blk_mq_mark_tag_wait(), and from there we hold the waitqueue lock with
> >>     IRQs disabled. However, this isn't from an actual IRQ, it's still
> >>     process context.
> >>
> >> The thing is, lockdep doesn't define a lock as "irq-safe" based on it
> >> being taken under interrupts disabled or not. It detects when locks are
> >> used in actual interrupts. Further in that commit we have this:
> >>
> >>    [  106.097386] fio/1043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> >>     [  106.098231] 000000004c43fa71
> >>     (&(&sb->map[i].swap_lock)->rlock){+.+.}, at: sbitmap_get+0xd5/0x22c
> >>     [  106.099431]
> >>     [  106.099431] and this task is already holding:
> >>     [  106.100229] 000000007eec8b2f
> >>     (&(&hctx->dispatch_wait_lock)->rlock){....}, at:
> >>     blk_mq_dispatch_rq_list+0x4c1/0xd7c
> >>     [  106.101630] which would create a new lock dependency:
> >>     [  106.102326]  (&(&hctx->dispatch_wait_lock)->rlock){....} ->
> >>     (&(&sb->map[i].swap_lock)->rlock){+.+.}
> >>
> >> Saying that you are trying to take the swap_lock while holding the
> >> dispatch_wait_lock.
> >>
> >>
> >>     [  106.103553] but this new dependency connects a SOFTIRQ-irq-safe lock:
> >>     [  106.104580]  (&sbq->ws[i].wait){..-.}
> >>
> >> Which means that there's already a chain of:
> >>
> >>  sbq->ws[i].wait -> dispatch_wait_lock
> >>
> >>     [  106.104582]
> >>     [  106.104582] ... which became SOFTIRQ-irq-safe at:
> >>     [  106.105751]   _raw_spin_lock_irqsave+0x4b/0x82
> >>     [  106.106284]   __wake_up_common_lock+0x119/0x1b9
> >>     [  106.106825]   sbitmap_queue_wake_up+0x33f/0x383
> >>     [  106.107456]   sbitmap_queue_clear+0x4c/0x9a
> >>     [  106.108046]   __blk_mq_free_request+0x188/0x1d3
> >>     [  106.108581]   blk_mq_free_request+0x23b/0x26b
> >>     [  106.109102]   scsi_end_request+0x345/0x5d7
> >>     [  106.109587]   scsi_io_completion+0x4b5/0x8f0
> >>     [  106.110099]   scsi_finish_command+0x412/0x456
> >>     [  106.110615]   scsi_softirq_done+0x23f/0x29b
> >>     [  106.111115]   blk_done_softirq+0x2a7/0x2e6
> >>     [  106.111608]   __do_softirq+0x360/0x6ad
> >>     [  106.112062]   run_ksoftirqd+0x2f/0x5b
> >>     [  106.112499]   smpboot_thread_fn+0x3a5/0x3db
> >>     [  106.113000]   kthread+0x1d4/0x1e4
> >>     [  106.113457]   ret_from_fork+0x3a/0x50
> >>
> >>
> >> We see that sbq->ws[i].wait was taken from a softirq context.
> > 
> > Actually sbq->ws[i].wait is taken from a softirq context only in case
> > of single-queue, see __blk_mq_complete_request(). For multiple queue,
> > sbq->ws[i].wait is taken from hardirq context.
> 
> That's a good point, but that's just current implementation, we can't
> assume any of those relationsships. Any completion can happen from
> softirq or hardirq. So the patch is inadequate.
> 
> > Sounds the correct fix may be the following one, and the irqsave cost
> > should be fine given sbitmap_deferred_clear is only triggered when one
> > word is run out of.
> 
> Yes, the _bh() variant isn't going to cut it. Can you send this patch
> against Linus's master?

OK, will post it out soon.

Thanks,
Ming


* Re: Real deadlock being suppressed in sbitmap
  2019-01-15  3:23 ` Ming Lei
  2019-01-15  3:41   ` Jens Axboe
@ 2019-01-15  3:50   ` Steven Rostedt
  2019-01-15  4:14     ` Ming Lei
  1 sibling, 1 reply; 8+ messages in thread
From: Steven Rostedt @ 2019-01-15  3:50 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Clark Williams, Bart Van Assche

On Tue, 15 Jan 2019 11:23:56 +0800
Ming Lei <ming.lei@redhat.com> wrote:

> Given 'swap_lock' can be acquired from blk_mq_dispatch_rq_list() via
> blk_mq_get_driver_tag() directly, the above deadlock may be possible.
> 
> Sounds the correct fix may be the following one, and the irqsave cost
> should be fine given sbitmap_deferred_clear is only triggered when one
> word is run out of.

Since the lockdep splat only showed SOFTIRQ issues, I figured I would
only protect it from that. Linus already accepted my patch; can you run
tests on that kernel with LOCKDEP enabled and see if it triggers with
hardirq issues? If it does, we can most definitely upgrade that to
spin_lock_irqsave(). But I was trying to keep the overhead down, as
that's a bit more heavyweight than a spin_lock_bh().
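
For reference, a minimal sketch of the two variants being discussed
(assuming, as above, that the already-applied patch used the _bh() form;
this is not the actual applied diff):

	/*
	 * _bh variant: only keeps softirqs off the local CPU while
	 * swap_lock is held, so it protects against the blk_done_softirq()
	 * completion path (and its wake-up on sbq->ws[i].wait), but not
	 * against a completion arriving in hardirq context.
	 */
	spin_lock_bh(&sb->map[index].swap_lock);
	/* ... swap the cleared bits ... */
	spin_unlock_bh(&sb->map[index].swap_lock);

	/*
	 * irqsave variant: also keeps hardirqs off, closing the same
	 * window for hardirq completions at a somewhat higher cost.
	 */
	unsigned long flags;

	spin_lock_irqsave(&sb->map[index].swap_lock, flags);
	/* ... swap the cleared bits ... */
	spin_unlock_irqrestore(&sb->map[index].swap_lock, flags);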

-- Steve


* Re: Real deadlock being suppressed in sbitmap
  2019-01-15  3:50   ` Steven Rostedt
@ 2019-01-15  4:14     ` Ming Lei
  2019-01-15  4:25       ` Steven Rostedt
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2019-01-15  4:14 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jens Axboe, LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Clark Williams, Bart Van Assche

On Mon, Jan 14, 2019 at 10:50:17PM -0500, Steven Rostedt wrote:
> On Tue, 15 Jan 2019 11:23:56 +0800
> Ming Lei <ming.lei@redhat.com> wrote:
> 
> > Given 'swap_lock' can be acquired from blk_mq_dispatch_rq_list() via
> > blk_mq_get_driver_tag() directly, the above deadlock may be possible.
> > 
> > Sounds the correct fix may be the following one, and the irqsave cost
> > should be fine given sbitmap_deferred_clear is only triggered when one
> > word is run out of.
> 
> Since the lockdep splat only showed SOFTIRQ issues, I figured I would
> only protect it from that. Linus already accepted my patch, can you run
> tests on that kernel with LOCKDEP enabled and see if it will trigger
> with IRQ issues, then we can most definitely upgrade that to
> spin_lock_irqsave(). But I was trying to keep the overhead down, as
> that's a bit more heavy weight than a spin_lock_bh().

As I mentioned, it should be fine given that it is triggered only after a
word has run out of free bits.

Below is the lockdep warning on the latest Linus tree:

[  107.431033] ------------[ cut here ]------------

[  107.431786] IRQs not enabled as expected
[  107.432047] ================================
[  107.432633] WARNING: CPU: 2 PID: 919 at kernel/softirq.c:169 __local_bh_enable_ip+0x5c/0xe2
[  107.433302] WARNING: inconsistent lock state
[  107.433304] 5.0.0-rc2+ #554 Not tainted
[  107.434513] Modules linked in: null_blk iTCO_wdt iTCO_vendor_support crc32c_intel usb_storage virtio_scsi i2c_i801 i2c_core nvme lpc_ich nvme_core mfd_core qemu_fw_cfg ip_tables
[  107.435124] --------------------------------
[  107.435126] inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
[  107.435679] CPU: 2 PID: 919 Comm: fio Not tainted 5.0.0-rc2+ #554
[  107.438082] fio/917 [HC0[0]:SC0[0]:HE1:SE1] takes:
[  107.438084] 00000000b6dd09e0 (&sbq->ws[i].wait){+.?.}, at: blk_mq_dispatch_rq_list+0x149/0x45d
[  107.438696] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.10.2-2.fc27 04/01/2014
[  107.439599] {IN-SOFTIRQ-W} state was registered at:
[  107.439604]   _raw_spin_lock_irqsave+0x46/0x55
[  107.440481] RIP: 0010:__local_bh_enable_ip+0x5c/0xe2
[  107.441239]   __wake_up_common_lock+0x61/0xd0
[  107.441241]   sbitmap_queue_clear+0x38/0x59
[  107.442468] Code: 00 00 00 75 27 83 b8 a8 0c 00 00 00 75 1e 80 3d f9 15 1d 01 00 75 15 48 c7 c7 d4 43 e5 81 c6 05 e9 15 1d 01 01 e8 fa 91 ff ff <0f> 0b fa 66 0f 1f 44 00 00 e8 54 a8 0e 00 65 8b 05 57 b3 f8 7e 25
[  107.443760]   __blk_mq_free_request+0x7d/0x97
[  107.443764]   scsi_end_request+0x19d/0x2f5
[  107.444462] RSP: 0018:ffffc9000268b848 EFLAGS: 00010086
[  107.445171]   scsi_io_completion+0x290/0x52d
[  107.445173]   blk_done_softirq+0xa3/0xc0
[  107.445880] RAX: 0000000000000000 RBX: 0000000000000201 RCX: 0000000000000007
[  107.446502]   __do_softirq+0x1e7/0x3ff
[  107.446505]   run_ksoftirqd+0x2f/0x3c
[  107.447120] RDX: 0000000000000000 RSI: ffffffff81ea4392 RDI: 00000000ffffffff
[  107.447122] RBP: ffffffff813fc0c7 R08: 0000000000000001 R09: 0000000000000001
[  107.450027]   smpboot_thread_fn+0x1d8/0x1ef
[  107.450030]   kthread+0x115/0x11d
[  107.450656] R10: 0000000000000001 R11: ffffc9000268b6d7 R12: 0000000000000000
[  107.451305]   ret_from_fork+0x3a/0x50
[  107.451307] irq event stamp: 1066
[  107.452050] R13: 0000000000000000 R14: 0000000000000001 R15: ffff888470254c78
[  107.452052] FS:  00007f8eeefd3740(0000) GS:ffff888477a40000(0000) knlGS:0000000000000000
[  107.452665] hardirqs last  enabled at (1063): [<ffffffff8108aa50>] __local_bh_enable_ip+0xc8/0xe2
[  107.452669] hardirqs last disabled at (1064): [<ffffffff81756287>] _raw_spin_lock_irq+0x15/0x45
[  107.453241] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  107.454355] softirqs last  enabled at (1066): [<ffffffff813fc0c7>] sbitmap_get+0xea/0x127
[  107.454357] softirqs last disabled at (1065): [<ffffffff813fc05a>] sbitmap_get+0x7d/0x127
[  107.454898] CR2: 00007f8eeefc1000 CR3: 0000000471fd2003 CR4: 0000000000760ee0
[  107.454902] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  107.455458] 
               other info that might help us debug this:
[  107.455460]  Possible unsafe locking scenario:

[  107.456482] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  107.456483] PKRU: 55555554
[  107.457515]        CPU0
[  107.457517]        ----
[  107.458114] Call Trace:
[  107.458119]  sbitmap_get+0xea/0x127
[  107.458585]   lock(&sbq->ws[i].wait);
[  107.459626]  __sbitmap_queue_get+0x3e/0x73
[  107.460207]   <Interrupt>
[  107.460208]     lock(&sbq->ws[i].wait);
[  107.460695]  blk_mq_get_tag+0xa6/0x2c6
[  107.461796] 
                *** DEADLOCK ***

[  107.461798] 3 locks held by fio/917:
[  107.462954]  ? wait_woken+0x6d/0x6d
[  107.464333]  #0: 00000000e24edc0f (rcu_read_lock){....}, at: hctx_lock+0x1a/0xcb
[  107.465561]  blk_mq_get_driver_tag+0x81/0xdb
[  107.466465]  #1: 00000000b6dd09e0 (&sbq->ws[i].wait){+.?.}, at: blk_mq_dispatch_rq_list+0x149/0x45d
[  107.467645]  blk_mq_dispatch_rq_list+0x1a7/0x45d
[  107.468912]  #2: 00000000b92e5983 (&(&hctx->dispatch_wait_lock)->rlock){+...}, at: blk_mq_dispatch_rq_list+0x15d/0x45d
[  107.469926]  ? _raw_spin_unlock+0x2e/0x40
[  107.471014] 
               stack backtrace:
[  107.471017] CPU: 10 PID: 917 Comm: fio Not tainted 5.0.0-rc2+ #554
[  107.471952]  blk_mq_do_dispatch_sched+0xcc/0xf2
[  107.472877] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.10.2-2.fc27 04/01/2014
[  107.472879] Call Trace:
[  107.473890]  blk_mq_sched_dispatch_requests+0xf7/0x14b
[  107.474303]  dump_stack+0x85/0xbc
[  107.474670]  __blk_mq_run_hw_queue+0xa4/0xcc
[  107.475069]  print_usage_bug+0x264/0x26f
[  107.475430]  __blk_mq_delay_run_hw_queue+0x4f/0x16b
[  107.475922]  ? check_usage_forwards+0x103/0x103
[  107.476455]  blk_mq_run_hw_queue+0xae/0xce
[  107.477027]  mark_lock+0x2e3/0x515
[  107.477030]  mark_held_locks+0x50/0x64
[  107.477418]  blk_mq_flush_plug_list+0x2f0/0x314
[  107.477969]  ? __local_bh_enable_ip+0xc8/0xe2
[  107.478520]  ? generic_make_request+0x32e/0x3d3
[  107.479354]  lockdep_hardirqs_on+0x184/0x1b4
[  107.479885]  blk_flush_plug_list+0xc0/0xe0
[  107.480381]  ? sbitmap_get+0xea/0x127
[  107.481435]  blk_finish_plug+0x25/0x32
[  107.482035]  __local_bh_enable_ip+0xc8/0xe2
[  107.483315]  blkdev_direct_IO+0x33e/0x3fe
[  107.484044]  sbitmap_get+0xea/0x127
[  107.485576]  ? aio_complete+0x3b0/0x3b0
[  107.486143]  __sbitmap_queue_get+0x3e/0x73
[  107.486769]  ? generic_file_read_iter+0x9c/0x116
[  107.487745]  blk_mq_get_tag+0xa6/0x2c6
[  107.488397]  generic_file_read_iter+0x9c/0x116
[  107.489725]  ? wait_woken+0x6d/0x6d
[  107.490086]  aio_read+0xe8/0x17c
[  107.490887]  blk_mq_get_driver_tag+0x81/0xdb
[  107.491372]  ? __lock_acquire+0x5a6/0x622
[  107.492029]  blk_mq_dispatch_rq_list+0x1a7/0x45d
[  107.492604]  ? find_held_lock+0x2b/0x6e
[  107.493375]  ? _raw_spin_unlock+0x2e/0x40
[  107.494024]  ? io_submit_one+0x395/0x908
[  107.494698]  blk_mq_do_dispatch_sched+0xcc/0xf2
[  107.495192]  io_submit_one+0x395/0x908
[  107.495726]  blk_mq_sched_dispatch_requests+0xf7/0x14b
[  107.496386]  ? find_held_lock+0x2b/0x6e
[  107.497002]  __blk_mq_run_hw_queue+0xa4/0xcc
[  107.497669]  ? __se_sys_io_submit+0xdb/0x22a
[  107.498340]  __blk_mq_delay_run_hw_queue+0x4f/0x16b
[  107.498925]  __se_sys_io_submit+0xdb/0x22a
[  107.499510]  blk_mq_run_hw_queue+0xae/0xce
[  107.500057]  ? up_read+0x1c/0x88
[  107.500659]  blk_mq_flush_plug_list+0x2f0/0x314
[  107.501245]  ? do_syscall_64+0x89/0x1bd
[  107.501736]  ? generic_make_request+0x32e/0x3d3
[  107.502297]  ? __se_sys_io_submit+0x22a/0x22a
[  107.502880]  blk_flush_plug_list+0xc0/0xe0
[  107.503562]  do_syscall_64+0x89/0x1bd
[  107.504164]  blk_finish_plug+0x25/0x32
[  107.504798]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[  107.505358]  blkdev_direct_IO+0x33e/0x3fe
[  107.505834] RIP: 0033:0x7f8eee9b6687
[  107.506514]  ? aio_complete+0x3b0/0x3b0
[  107.507094] Code: 00 00 00 49 83 38 00 75 ed 49 83 78 08 00 75 e6 8b 47 0c 39 47 08 75 de 31 c0 c3 0f 1f 84 00 00 00 00 00 b8 d1 00 00 00 0f 05 <c3> 0f 1f 84 00 00 00 00 00 b8 d2 00 00 00 0f 05 c3 0f 1f 84 00 00
[  107.507790]  ? generic_file_read_iter+0x9c/0x116
[  107.508342] RSP: 002b:00007fffc3317788 EFLAGS: 00000206 ORIG_RAX: 00000000000000d1
[  107.508957]  generic_file_read_iter+0x9c/0x116
[  107.508960]  aio_read+0xe8/0x17c
[  107.509537] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8eee9b6687
[  107.509539] RDX: 00000000011a9180 RSI: 0000000000000001 RDI: 00007f8eeefc1000
[  107.510230]  ? __lock_acquire+0x5a6/0x622
[  107.510761] RBP: 0000000000000000 R08: 0000000000000001 R09: 00000000011a2fb0
[  107.510762] R10: 000000000000000c R11: 0000000000000206 R12: 00007f8ecc425138
[  107.511582]  ? find_held_lock+0x2b/0x6e
[  107.512137] R13: 00000000011a91b0 R14: 00000000011a2f28 R15: 00000000011a90b0
[  107.512144] irq event stamp: 1085
[  107.512820]  ? io_submit_one+0x395/0x908
[  107.513436] hardirqs last  enabled at (1083): [<ffffffff8108aa50>] __local_bh_enable_ip+0xc8/0xe2
[  107.513440] hardirqs last disabled at (1084): [<ffffffff81756287>] _raw_spin_lock_irq+0x15/0x45
[  107.514137]  io_submit_one+0x395/0x908
[  107.514721] softirqs last  enabled at (1082): [<ffffffff813fc0c7>] sbitmap_get+0xea/0x127
[  107.514724] softirqs last disabled at (1085): [<ffffffff813fc05a>] sbitmap_get+0x7d/0x127
[  107.515383]  ? find_held_lock+0x2b/0x6e
[  107.515849] ---[ end trace 64dc949ae485cd67 ]---
[  107.545737]  ? __se_sys_io_submit+0xdb/0x22a
[  107.546346]  __se_sys_io_submit+0xdb/0x22a
[  107.546931]  ? up_read+0x1c/0x88
[  107.547398]  ? do_syscall_64+0x89/0x1bd
[  107.547960]  ? __se_sys_io_submit+0x22a/0x22a
[  107.548604]  do_syscall_64+0x89/0x1bd
[  107.549132]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[  107.549849] RIP: 0033:0x7f8eee9b6687
[  107.550360] Code: 00 00 00 49 83 38 00 75 ed 49 83 78 08 00 75 e6 8b 47 0c 39 47 08 75 de 31 c0 c3 0f 1f 84 00 00 00 00 00 b8 d1 00 00 00 0f 05 <c3> 0f 1f 84 00 00 00 00 00 b8 d2 00 00 00 0f 05 c3 0f 1f 84 00 00
[  107.552971] RSP: 002b:00007fffc3317788 EFLAGS: 00000206 ORIG_RAX: 00000000000000d1
[  107.554036] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8eee9b6687
[  107.555043] RDX: 00000000011a9100 RSI: 0000000000000001 RDI: 00007f8eeefc5000
[  107.556071] RBP: 0000000000000000 R08: 0000000000000001 R09: 00000000011a2f30
[  107.557078] R10: 000000000000000c R11: 0000000000000206 R12: 00007f8ecc3f3068
[  107.558078] R13: 00000000011a9130 R14: 00000000011a2ea8 R15: 00000000011a9030


Thanks,
Ming


* Re: Real deadlock being suppressed in sbitmap
  2019-01-15  4:14     ` Ming Lei
@ 2019-01-15  4:25       ` Steven Rostedt
  0 siblings, 0 replies; 8+ messages in thread
From: Steven Rostedt @ 2019-01-15  4:25 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Clark Williams, Bart Van Assche

On Tue, 15 Jan 2019 12:14:27 +0800
Ming Lei <ming.lei@redhat.com> wrote:

> As I mentioned, it should be fine given it is triggered only after one word
> is run out of.
> 
> Follows the lockdep warning on the latest linus tree:

Thanks for following up on this. Yes, this requires the irqsave then.

-- Steve

