
* bug in tag handling in blk-mq?
@ 2018-05-07 14:03 ` Paolo Valente
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Valente @ 2018-05-07 14:03 UTC (permalink / raw)
  To: Mike Galbraith, Jens Axboe, Christoph Hellwig
  Cc: linux-block, Ulf Hansson, LKML, Linus Walleij, Ulf Hansson,
	Oleksandr Natalenko

Hi Jens, Christoph, all,
Mike Galbraith has been experiencing hangs in blk_mq_get_tag, only
with bfq [1].  The symptoms seem to point clearly to a problem in I/O-tag
handling, triggered by bfq because it limits the number of tags for
async and sync write requests (in bfq_limit_depth).

Fortunately, I just happened to find a way to apparently confirm it.
With the following one-liner for block/bfq-iosched.c:

@@ -554,8 +554,7 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
        if (unlikely(bfqd->sb_shift != bt->sb.shift))
                bfq_update_depths(bfqd, bt);
 
-       data->shallow_depth =
-               bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+       data->shallow_depth = 1;
 
        bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
                        __func__, bfqd->wr_busy_queues, op_is_sync(op),

Mike's machine now crashes soon and systematically, while nothing bad
happens on my machines, even with heavy workloads (apart from an
expected throughput drop).

This change simply caps at 1 the maximum possible value for the sum of
the number of async requests and sync write requests.

This email is basically a request for help from knowledgeable people.  To
start, here are my first doubts/questions:
1) Just to be certain: it is not normal for blk-mq to hang when at most
one async request or sync write request can be in flight, right?
2) Do you have any hint as to where I should look to chase this bug?
Of course, the bug may be in bfq, i.e., a somehow unrelated bfq bug
that indirectly causes this hang in blk-mq.  But it is hard for me to
understand how.

Looking forward to some help.

Thanks,
Paolo

[1] https://www.spinics.net/lists/stable/msg215036.html

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-07 14:03 ` Paolo Valente
  (?)
@ 2018-05-07 16:39 ` Jens Axboe
  2018-05-07 18:02     ` Paolo Valente
  -1 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-05-07 16:39 UTC (permalink / raw)
  To: Paolo Valente, Mike Galbraith, Christoph Hellwig
  Cc: linux-block, Ulf Hansson, LKML, Linus Walleij, Oleksandr Natalenko

On 5/7/18 8:03 AM, Paolo Valente wrote:
> Hi Jens, Christoph, all,
> Mike Galbraith has been experiencing hangs, on blk_mq_get_tag, only
> with bfq [1].  Symptoms seem to clearly point to a problem in I/O-tag
> handling, triggered by bfq because it limits the number of tags for
> async and sync write requests (in bfq_limit_depth).
> 
> Fortunately, I just happened to find a way to apparently confirm it.
> With the following one-liner for block/bfq-iosched.c:
> 
> @@ -554,8 +554,7 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
>         if (unlikely(bfqd->sb_shift != bt->sb.shift))
>                 bfq_update_depths(bfqd, bt);
>  
> -       data->shallow_depth =
> -               bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
> +       data->shallow_depth = 1;
>  
>         bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
>                         __func__, bfqd->wr_busy_queues, op_is_sync(op),
> 
> Mike's machine now crashes soon and systematically, while nothing bad
> happens on my machines, even with heavy workloads (apart from an
> expected throughput drop).
> 
> This change simply reduces to 1 the maximum possible value for the sum
> of the number of async requests and of sync write requests.
> 
> This email is basically a request for help to knowledgeable people.  To
> start, here are my first doubts/questions:
> 1) Just to be certain, I guess it is not normal that blk-mq hangs if
> async requests and sync write requests can be at most one, right?
> 2) Do you have any hint to where I could look for, to chase this bug?
> Of course, the bug may be in bfq, i.e, it may be a somehow unrelated
> bfq bug that causes this hang in blk-mq, indirectly.  But it is hard
> for me to understand how.

CC Omar, since he implemented the shallow part. But we'll need some
traces to show where we are hung, probably also the contents of the
/sys/kernel/debug/block/<dev>/ directory. For the crash mentioned, a
trace as well. Otherwise we'll be wasting a lot of time on this.

Is there a reproducer?

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread


* Re: bug in tag handling in blk-mq?
@ 2018-05-07 18:02     ` Paolo Valente
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Valente @ 2018-05-07 18:02 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Mike Galbraith, Christoph Hellwig, linux-block, Ulf Hansson,
	LKML, Linus Walleij, Oleksandr Natalenko



> Il giorno 07 mag 2018, alle ore 18:39, Jens Axboe <axboe@kernel.dk> ha scritto:
> 
> On 5/7/18 8:03 AM, Paolo Valente wrote:
>> Hi Jens, Christoph, all,
>> Mike Galbraith has been experiencing hangs, on blk_mq_get_tag, only
>> with bfq [1].  Symptoms seem to clearly point to a problem in I/O-tag
>> handling, triggered by bfq because it limits the number of tags for
>> async and sync write requests (in bfq_limit_depth).
>> 
>> Fortunately, I just happened to find a way to apparently confirm it.
>> With the following one-liner for block/bfq-iosched.c:
>> 
>> @@ -554,8 +554,7 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
>>        if (unlikely(bfqd->sb_shift != bt->sb.shift))
>>                bfq_update_depths(bfqd, bt);
>> 
>> -       data->shallow_depth =
>> -               bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
>> +       data->shallow_depth = 1;
>> 
>>        bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
>>                        __func__, bfqd->wr_busy_queues, op_is_sync(op),
>> 
>> Mike's machine now crashes soon and systematically, while nothing bad
>> happens on my machines, even with heavy workloads (apart from an
>> expected throughput drop).
>> 
>> This change simply reduces to 1 the maximum possible value for the sum
>> of the number of async requests and of sync write requests.
>> 
>> This email is basically a request for help to knowledgeable people.  To
>> start, here are my first doubts/questions:
>> 1) Just to be certain, I guess it is not normal that blk-mq hangs if
>> async requests and sync write requests can be at most one, right?
>> 2) Do you have any hint to where I could look for, to chase this bug?
>> Of course, the bug may be in bfq, i.e, it may be a somehow unrelated
>> bfq bug that causes this hang in blk-mq, indirectly.  But it is hard
>> for me to understand how.
> 
> CC Omar, since he implemented the shallow part. But we'll need some
> traces to show where we are hung, probably also the contents of the
> /sys/kernel/debug/block/<dev>/ directory. For the crash mentioned, a
> trace as well. Otherwise we'll be wasting a lot of time on this.
> 
> Is there a reproducer?
> 

Ok Mike, I guess it's your turn now, for at least a stack trace.

Thanks,
Paolo

> -- 
> Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread


* Re: bug in tag handling in blk-mq?
@ 2018-05-08  4:51       ` Mike Galbraith
  0 siblings, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-08  4:51 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On Mon, 2018-05-07 at 20:02 +0200, Paolo Valente wrote:
> 
> 
> > Is there a reproducer?

Just building fat config kernels works for me.  It was highly non-
deterministic, but reproduced quickly twice in a row with Paolo's hack.
  
> Ok Mike, I guess it's your turn now, for at least a stack trace.

Sure.  I'm deadlined ATM, but will get to it.

	-Mike

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-08  4:51       ` Mike Galbraith
@ 2018-05-08  8:37         ` Mike Galbraith
  -1 siblings, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-08  8:37 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

[-- Attachment #1: Type: text/plain, Size: 1023 bytes --]

On Tue, 2018-05-08 at 06:51 +0200, Mike Galbraith wrote:
> 
> I'm deadlined ATM, but will get to it.

(Bah, even a zombie can type ccache -C; make -j8 and stare...)

kbuild again hung on the first go (yay), and post-hang data written to
sdd1 survived (kernel source lives in sdb3).  Full ftrace buffer (echo
1 > events/block/enable) available off list if desired.  dmesg.txt.xz
is the dmesg from the post-hang crashdump, attached because it contains
the tail of the trace buffer, so it _might_ be useful.

homer:~ # df|grep sd
/dev/sdb3      959074776 785342824 172741072  82% /
/dev/sdc3      959074776 455464912 502618984  48% /backup
/dev/sdb1         159564      7980    151584   6% /boot/efi
/dev/sdd1      961301832 393334868 519112540  44% /abuild

Kernel is virgin modulo these...

patches/remove_irritating_plus.diff
patches/add-scm-version-to-EXTRAVERSION.patch
patches/block-bfq:-postpone-rq-preparation-to-insert-or-merge.patch
patches/block-bfq:-test.patch  (hang provocation hack from Paolo)

	-Mike

[-- Attachment #2: block_debug.tar.xz --]
[-- Type: application/x-xz-compressed-tar, Size: 2292 bytes --]

[-- Attachment #3: dmesg.xz --]
[-- Type: application/x-xz, Size: 17300 bytes --]

[-- Attachment #4: dmesg.txt.xz --]
[-- Type: application/x-xz, Size: 15824 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread



* Re: bug in tag handling in blk-mq?
@ 2018-05-08 14:55           ` Jens Axboe
  0 siblings, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2018-05-08 14:55 UTC (permalink / raw)
  To: Mike Galbraith, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On 5/8/18 2:37 AM, Mike Galbraith wrote:
> On Tue, 2018-05-08 at 06:51 +0200, Mike Galbraith wrote:
>>
>> I'm deadlined ATM, but will get to it.
> 
> (Bah, even a zombie can type ccache -C; make -j8 and stare...)
> 
> kbuild again hung on the first go (yay), and post hang data written to
> sdd1 survived (kernel source lives in sdb3).  Full ftrace buffer (echo
> 1 > events/block/enable) available off list if desired.  dmesg.txt.xz
> is dmesg from post hang crashdump, attached because it contains the
> tail of trace buffer, so _might_ be useful.
> 
> homer:~ # df|grep sd
> /dev/sdb3      959074776 785342824 172741072  82% /
> /dev/sdc3      959074776 455464912 502618984  48% /backup
> /dev/sdb1         159564      7980    151584   6% /boot/efi
> /dev/sdd1      961301832 393334868 519112540  44% /abuild
> 
> Kernel is virgin modulo these...
> 
> patches/remove_irritating_plus.diff
> patches/add-scm-version-to-EXTRAVERSION.patch
> patches/block-bfq:-postpone-rq-preparation-to-insert-or-merge.patch
> patches/block-bfq:-test.patch  (hang provocation hack from Paolo)

All the block debug files are empty...

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-08 14:55           ` Jens Axboe
@ 2018-05-08 16:42             ` Mike Galbraith
  -1 siblings, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-08 16:42 UTC (permalink / raw)
  To: Jens Axboe, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

[-- Attachment #1: Type: text/plain, Size: 260 bytes --]

On Tue, 2018-05-08 at 08:55 -0600, Jens Axboe wrote:
> 
> All the block debug files are empty...

Sigh.  Take 2, this time cat debug files, having turned block tracing
off before doing anything else (so trace bits in dmesg.txt should end
AT the stall).

	-Mike

[-- Attachment #2: dmesg.xz --]
[-- Type: application/x-xz, Size: 16992 bytes --]

[-- Attachment #3: dmesg.txt.xz --]
[-- Type: application/x-xz, Size: 21656 bytes --]

[-- Attachment #4: block_debug.xz --]
[-- Type: application/x-xz, Size: 1300 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread


* Re: bug in tag handling in blk-mq?
  2018-05-08 16:42             ` Mike Galbraith
  (?)
@ 2018-05-08 20:37             ` Jens Axboe
  2018-05-08 21:19               ` Jens Axboe
  2018-05-09  5:09                 ` Mike Galbraith
  -1 siblings, 2 replies; 31+ messages in thread
From: Jens Axboe @ 2018-05-08 20:37 UTC (permalink / raw)
  To: Mike Galbraith, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On 5/8/18 10:42 AM, Mike Galbraith wrote:
> On Tue, 2018-05-08 at 08:55 -0600, Jens Axboe wrote:
>>
>> All the block debug files are empty...
> 
> Sigh.  Take 2, this time cat debug files, having turned block tracing
> off before doing anything else (so trace bits in dmesg.txt should end
> AT the stall).

OK, that's better. What I see from the traces:

- You have regular IO and some non-fs IO (from scsi_execute()). This mix
  may be key.

- sdd has nothing pending, yet has 6 active waitqueues.

I'm going to see if I can reproduce this. Paolo, what kind of attempts
to reproduce this have you done?

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-08 20:37             ` Jens Axboe
@ 2018-05-08 21:19               ` Jens Axboe
  2018-05-09  1:09                 ` Jens Axboe
  2018-05-09  5:09                 ` Mike Galbraith
  1 sibling, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-05-08 21:19 UTC (permalink / raw)
  To: Mike Galbraith, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On 5/8/18 2:37 PM, Jens Axboe wrote:
> On 5/8/18 10:42 AM, Mike Galbraith wrote:
>> On Tue, 2018-05-08 at 08:55 -0600, Jens Axboe wrote:
>>>
>>> All the block debug files are empty...
>>
>> Sigh.  Take 2, this time cat debug files, having turned block tracing
>> off before doing anything else (so trace bits in dmesg.txt should end
>> AT the stall).
> 
> OK, that's better. What I see from the traces:
> 
> - You have regular IO and some non-fs IO (from scsi_execute()). This mix
>   may be key.
> 
> - sdd has nothing pending, yet has 6 active waitqueues.
> 
> I'm going to see if I can reproduce this. Paolo, what kind of attempts
> to reproduce this have you done?

No luck so far. Out of the patches you referenced, I can only find the
shallow depth change, since that's in the parent of this email. Can
you send those as well?

Perhaps also expand a bit on exactly what you are running. File system,
mount options, etc.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-08 21:19               ` Jens Axboe
@ 2018-05-09  1:09                 ` Jens Axboe
  2018-05-09  4:11                     ` Mike Galbraith
  0 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-05-09  1:09 UTC (permalink / raw)
  To: Mike Galbraith, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On 5/8/18 3:19 PM, Jens Axboe wrote:
> On 5/8/18 2:37 PM, Jens Axboe wrote:
>> On 5/8/18 10:42 AM, Mike Galbraith wrote:
>>> On Tue, 2018-05-08 at 08:55 -0600, Jens Axboe wrote:
>>>>
>>>> All the block debug files are empty...
>>>
>>> Sigh.  Take 2, this time cat debug files, having turned block tracing
>>> off before doing anything else (so trace bits in dmesg.txt should end
>>> AT the stall).
>>
>> OK, that's better. What I see from the traces:
>>
>> - You have regular IO and some non-fs IO (from scsi_execute()). This mix
>>   may be key.
>>
>> - sdd has nothing pending, yet has 6 active waitqueues.
>>
>> I'm going to see if I can reproduce this. Paolo, what kind of attempts
>> to reproduce this have you done?
> 
> No luck so far. Out of the patches you referenced, I can only find the
> shallow depth change, since that's in the parent of this email. Can
> you send those as well?
> 
> Perhaps also expand a bit on exactly what you are running. File system,
> mount options, etc.

Alright, I managed to reproduce it. What I think is happening is that
BFQ is limiting the inflight case to something less than the wake
batch for sbitmap, which can lead to stalls. I don't have time to test
this tonight, but perhaps you can give it a go when you are back at it.
If not, I'll try tomorrow morning.

If this is the issue, I can turn it into a real patch. This is just to
confirm that the issue goes away with the below.

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index e6a9c06ec70c..94ced15b6428 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -272,6 +272,7 @@ EXPORT_SYMBOL_GPL(sbitmap_bitmap_show);
 
 static unsigned int sbq_calc_wake_batch(unsigned int depth)
 {
+#if 0
 	unsigned int wake_batch;
 
 	/*
@@ -284,6 +285,9 @@ static unsigned int sbq_calc_wake_batch(unsigned int depth)
 		wake_batch = max(1U, depth / SBQ_WAIT_QUEUES);
 
 	return wake_batch;
+#else
+	return 1;
+#endif
 }
 
 int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,

-- 
Jens Axboe

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-09  1:09                 ` Jens Axboe
@ 2018-05-09  4:11                     ` Mike Galbraith
  0 siblings, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-09  4:11 UTC (permalink / raw)
  To: Jens Axboe, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On Tue, 2018-05-08 at 19:09 -0600, Jens Axboe wrote:
> 
> Alright, I managed to reproduce it. What I think is happening is that
> BFQ is limiting the inflight case to something less than the wake
> batch for sbitmap, which can lead to stalls. I don't have time to test
> this tonight, but perhaps you can give it a go when you are back at it.
> If not, I'll try tomorrow morning.
> 
> If this is the issue, I can turn it into a real patch. This is just to
> confirm that the issue goes away with the below.

Confirmed.  Impressive high speed bug stomping.

> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index e6a9c06ec70c..94ced15b6428 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -272,6 +272,7 @@ EXPORT_SYMBOL_GPL(sbitmap_bitmap_show);
>  
>  static unsigned int sbq_calc_wake_batch(unsigned int depth)
>  {
> +#if 0
>  	unsigned int wake_batch;
>  
>  	/*
> @@ -284,6 +285,9 @@ static unsigned int sbq_calc_wake_batch(unsigned int depth)
>  		wake_batch = max(1U, depth / SBQ_WAIT_QUEUES);
>  
>  	return wake_batch;
> +#else
> +	return 1;
> +#endif
>  }
>  
>  int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread



* Re: bug in tag handling in blk-mq?
  2018-05-08 20:37             ` Jens Axboe
@ 2018-05-09  5:09                 ` Mike Galbraith
  2018-05-09  5:09                 ` Mike Galbraith
  1 sibling, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-09  5:09 UTC (permalink / raw)
  To: Jens Axboe, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On Tue, 2018-05-08 at 14:37 -0600, Jens Axboe wrote:
> 
> - sdd has nothing pending, yet has 6 active waitqueues.

sdd is where ccache storage lives, so that should have been the only
activity on that drive, as I built source on sdb and was doing nothing
else that utilizes sdd.

	-Mike

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-09  4:11                     ` Mike Galbraith
  (?)
  (?)
@ 2018-05-09 15:18                     ` Jens Axboe
  2018-05-09 16:57                         ` Mike Galbraith
  -1 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-05-09 15:18 UTC (permalink / raw)
  To: Mike Galbraith, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On 5/8/18 10:11 PM, Mike Galbraith wrote:
> On Tue, 2018-05-08 at 19:09 -0600, Jens Axboe wrote:
>>
>> Alright, I managed to reproduce it. What I think is happening is that
>> BFQ is limiting the inflight case to something less than the wake
>> batch for sbitmap, which can lead to stalls. I don't have time to test
>> this tonight, but perhaps you can give it a go when you are back at it.
>> If not, I'll try tomorrow morning.
>>
>> If this is the issue, I can turn it into a real patch. This is just to
>> confirm that the issue goes away with the below.
> 
> Confirmed.  Impressive high speed bug stomping.

Well, that's good news. Can I get you to try this patch? Needs to be
split, but it'll be good to know if this fixes it too (since it's an
ACTUAL attempt at a fix, not just a masking).
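
[Editorial note: to see why a shallow depth below the wake batch stalls, here is a toy model. It is an illustration only, not kernel code, and it simplifies wakeups to "rouse every sleeper on the queue": waiters are only roused after `wake_batch` tags have been freed, but the shallow depth caps how many tags can ever be in flight, and therefore how many frees can occur before every submitter is asleep.]

```python
# Toy model (not kernel code) of sbitmap's batched wakeups, illustrating
# the stall: a wakeup fires only after 'wake_batch' tags are freed, but a
# shallow depth below the batch size caps how many frees can ever happen.
def can_hang(shallow_depth, wake_batch, tasks=8):
    in_flight = min(tasks, shallow_depth)   # tags granted up to the limit
    sleepers = tasks - in_flight            # the rest sleep on a waitqueue
    wait_cnt = wake_batch                   # frees needed before a wakeup
    while in_flight:
        in_flight -= 1                      # a request completes, tag freed
        wait_cnt -= 1
        if wait_cnt == 0:                   # batch complete: wakeup fires
            wait_cnt = wake_batch
            woken = min(sleepers, shallow_depth)
            sleepers -= woken               # simplification: woken tasks
            in_flight += woken              # immediately re-grab tags
    return sleepers > 0                     # nobody left to ever free a tag

print(can_hang(shallow_depth=1, wake_batch=4))   # True  -> stall
print(can_hang(shallow_depth=4, wake_batch=4))   # False -> progress
```

In this model the system hangs exactly when shallow_depth < wake_batch, which matches both the provocation patch (depth 1 crashes quickly) and the fix below (recompute the wake batch whenever the shallow depth shrinks).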


diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index ebc264c87a09..b0dbfd297d20 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -533,19 +533,20 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
  * Limit depths of async I/O and sync writes so as to counter both
  * problems.
  */
-static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+static int bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 {
 	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
 	struct sbitmap_queue *bt;
+	int old_depth;
 
 	if (op_is_sync(op) && !op_is_write(op))
-		return;
+		return 0;
 
 	if (data->flags & BLK_MQ_REQ_RESERVED) {
 		if (unlikely(!tags->nr_reserved_tags)) {
 			WARN_ON_ONCE(1);
-			return;
+			return 0;
 		}
 		bt = &tags->breserved_tags;
 	} else
@@ -554,12 +555,18 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 	if (unlikely(bfqd->sb_shift != bt->sb.shift))
 		bfq_update_depths(bfqd, bt);
 
+	old_depth = data->shallow_depth;
 	data->shallow_depth =
 		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
 
 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
 			__func__, bfqd->wr_busy_queues, op_is_sync(op),
 			data->shallow_depth);
+
+	if (old_depth != data->shallow_depth)
+		return data->shallow_depth;
+
+	return 0;
 }
 
 static struct bfq_queue *
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 25c14c58385c..0c53a254671f 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -16,6 +16,32 @@
 #include "blk-mq-tag.h"
 #include "blk-wbt.h"
 
+void blk_mq_sched_limit_depth(struct elevator_queue *e,
+			      struct blk_mq_alloc_data *data, unsigned int op)
+{
+	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
+	struct sbitmap_queue *bt;
+	int ret;
+
+	/*
+	 * Flush requests are special and go directly to the
+	 * dispatch list.
+	 */
+	if (op_is_flush(op) || !e->type->ops.mq.limit_depth)
+		return;
+
+	ret = e->type->ops.mq.limit_depth(op, data);
+	if (!ret)
+		return;
+
+	if (data->flags & BLK_MQ_REQ_RESERVED)
+		bt = &tags->breserved_tags;
+	else
+		bt = &tags->bitmap_tags;
+
+	sbitmap_queue_shallow_depth(bt, ret);
+}
+
 void blk_mq_sched_free_hctx_data(struct request_queue *q,
 				 void (*exit)(struct blk_mq_hw_ctx *))
 {
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 1e9c9018ace1..6abebc1b9ae0 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -5,6 +5,9 @@
 #include "blk-mq.h"
 #include "blk-mq-tag.h"
 
+void blk_mq_sched_limit_depth(struct elevator_queue *e,
+			      struct blk_mq_alloc_data *data, unsigned int op);
+
 void blk_mq_sched_free_hctx_data(struct request_queue *q,
 				 void (*exit)(struct blk_mq_hw_ctx *));
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4e9d83594cca..1bb7aa40c192 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -357,13 +357,7 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 
 	if (e) {
 		data->flags |= BLK_MQ_REQ_INTERNAL;
-
-		/*
-		 * Flush requests are special and go directly to the
-		 * dispatch list.
-		 */
-		if (!op_is_flush(op) && e->type->ops.mq.limit_depth)
-			e->type->ops.mq.limit_depth(op, data);
+		blk_mq_sched_limit_depth(e, data, op);
 	}
 
 	tag = blk_mq_get_tag(data);
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 564967fafe5f..d2622386c115 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -433,17 +433,23 @@ static void rq_clear_domain_token(struct kyber_queue_data *kqd,
 	}
 }
 
-static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+static int kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 {
+	struct kyber_queue_data *kqd = data->q->elevator->elevator_data;
+
+	if (op_is_sync(op))
+		return 0;
+
 	/*
 	 * We use the scheduler tags as per-hardware queue queueing tokens.
 	 * Async requests can be limited at this stage.
 	 */
-	if (!op_is_sync(op)) {
-		struct kyber_queue_data *kqd = data->q->elevator->elevator_data;
-
+	if (data->shallow_depth != kqd->async_depth) {
 		data->shallow_depth = kqd->async_depth;
+		return data->shallow_depth;
 	}
+
+	return 0;
 }
 
 static void kyber_prepare_request(struct request *rq, struct bio *bio)
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 6d9e230dffd2..b2712f4ca9f1 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -105,7 +105,7 @@ struct elevator_mq_ops {
 	int (*request_merge)(struct request_queue *q, struct request **, struct bio *);
 	void (*request_merged)(struct request_queue *, struct request *, enum elv_merge);
 	void (*requests_merged)(struct request_queue *, struct request *, struct request *);
-	void (*limit_depth)(unsigned int, struct blk_mq_alloc_data *);
+	int (*limit_depth)(unsigned int, struct blk_mq_alloc_data *);
 	void (*prepare_request)(struct request *, struct bio *bio);
 	void (*finish_request)(struct request *);
 	void (*insert_requests)(struct blk_mq_hw_ctx *, struct list_head *, bool);
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 841585f6e5f2..99059789f45f 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -164,6 +164,17 @@ static inline void sbitmap_free(struct sbitmap *sb)
 void sbitmap_resize(struct sbitmap *sb, unsigned int depth);
 
 /**
+ * sbitmap_queue_shallow_depth() - Inform sbitmap about shallow depth changes
+ * @sbq: Bitmap queue in question
+ * @depth: Shallow depth limit
+ *
+ * Due to how sbitmap does batched wakes, if a user of sbitmap updates the
+ * shallow depth, then we might need to update our batched wake counts.
+ *
+ */
+void sbitmap_queue_shallow_depth(struct sbitmap_queue *sbq, unsigned int depth);
+
+/**
  * sbitmap_get() - Try to allocate a free bit from a &struct sbitmap.
  * @sb: Bitmap to allocate from.
  * @alloc_hint: Hint for where to start searching for a free bit.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index e6a9c06ec70c..563ae9d75fb8 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -327,7 +327,8 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_init_node);
 
-void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
+static void sbitmap_queue_update_batch_wake(struct sbitmap_queue *sbq,
+					    unsigned int depth)
 {
 	unsigned int wake_batch = sbq_calc_wake_batch(depth);
 	int i;
@@ -342,6 +343,11 @@ void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
 		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
 			atomic_set(&sbq->ws[i].wait_cnt, 1);
 	}
+}
+
+void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
+{
+	sbitmap_queue_update_batch_wake(sbq, depth);
 	sbitmap_resize(&sbq->sb, depth);
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_resize);
@@ -403,6 +409,15 @@ int __sbitmap_queue_get_shallow(struct sbitmap_queue *sbq,
 }
 EXPORT_SYMBOL_GPL(__sbitmap_queue_get_shallow);
 
+/*
+ * User has limited the shallow depth to 'depth', update batch wake counts
+ */
+void sbitmap_queue_shallow_depth(struct sbitmap_queue *sbq, unsigned int depth)
+{
+	sbitmap_queue_update_batch_wake(sbq, depth);
+}
+EXPORT_SYMBOL_GPL(sbitmap_queue_shallow_depth);
+
 static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
 {
 	int i, wake_index;

-- 
Jens Axboe

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-09 15:18                     ` Jens Axboe
@ 2018-05-09 16:57                         ` Mike Galbraith
  0 siblings, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-09 16:57 UTC (permalink / raw)
  To: Jens Axboe, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On Wed, 2018-05-09 at 09:18 -0600, Jens Axboe wrote:
> On 5/8/18 10:11 PM, Mike Galbraith wrote:
> > On Tue, 2018-05-08 at 19:09 -0600, Jens Axboe wrote:
> >>
> >> Alright, I managed to reproduce it. What I think is happening is that
> >> BFQ is limiting the inflight case to something less than the wake
> >> batch for sbitmap, which can lead to stalls. I don't have time to test
> >> this tonight, but perhaps you can give it a go when you are back at it.
> >> If not, I'll try tomorrow morning.
> >>
> >> If this is the issue, I can turn it into a real patch. This is just to
> >> confirm that the issue goes away with the below.
> > 
> > Confirmed.  Impressive high speed bug stomping.
> 
> Well, that's good news. Can I get you to try this patch?

Sure thing.  The original hang (minus provocation patch) being
annoyingly non-deterministic, this will (hopefully) take a while.

	-Mike

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-09 16:57                         ` Mike Galbraith
  (?)
@ 2018-05-09 17:01                         ` Jens Axboe
  2018-05-09 18:31                             ` Mike Galbraith
  -1 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-05-09 17:01 UTC (permalink / raw)
  To: Mike Galbraith, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On 5/9/18 10:57 AM, Mike Galbraith wrote:
> On Wed, 2018-05-09 at 09:18 -0600, Jens Axboe wrote:
>> On 5/8/18 10:11 PM, Mike Galbraith wrote:
>>> On Tue, 2018-05-08 at 19:09 -0600, Jens Axboe wrote:
>>>>
>>>> Alright, I managed to reproduce it. What I think is happening is that
>>>> BFQ is limiting the inflight case to something less than the wake
>>>> batch for sbitmap, which can lead to stalls. I don't have time to test
>>>> this tonight, but perhaps you can give it a go when you are back at it.
>>>> If not, I'll try tomorrow morning.
>>>>
>>>> If this is the issue, I can turn it into a real patch. This is just to
>>>> confirm that the issue goes away with the below.
>>>
>>> Confirmed.  Impressive high speed bug stomping.
>>
>> Well, that's good news. Can I get you to try this patch?
> 
> Sure thing.  The original hang (minus provocation patch) being
> annoyingly non-deterministic, this will (hopefully) take a while.

You can verify with the provocation patch as well first, if you wish.
Just need to hand-apply since it'll conflict with this patch in
bfq. But it's a trivial resolve.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-09 17:01                         ` Jens Axboe
@ 2018-05-09 18:31                             ` Mike Galbraith
  0 siblings, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-09 18:31 UTC (permalink / raw)
  To: Jens Axboe, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On Wed, 2018-05-09 at 11:01 -0600, Jens Axboe wrote:
> On 5/9/18 10:57 AM, Mike Galbraith wrote:
> 
> >>> Confirmed.  Impressive high speed bug stomping.
> >>
> >> Well, that's good news. Can I get you to try this patch?
> > 
> > Sure thing.  The original hang (minus provocation patch) being
> > annoyingly non-deterministic, this will (hopefully) take a while.
> 
> You can verify with the provocation patch as well first, if you wish.

Done, box still seems fine.

	-Mike

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-09 18:31                             ` Mike Galbraith
  (?)
@ 2018-05-09 19:50                             ` Jens Axboe
  2018-05-10  4:38                                 ` Mike Galbraith
  -1 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-05-09 19:50 UTC (permalink / raw)
  To: Mike Galbraith, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On 5/9/18 12:31 PM, Mike Galbraith wrote:
> On Wed, 2018-05-09 at 11:01 -0600, Jens Axboe wrote:
>> On 5/9/18 10:57 AM, Mike Galbraith wrote:
>>
>>>>> Confirmed.  Impressive high speed bug stomping.
>>>>
>>>> Well, that's good news. Can I get you to try this patch?
>>>
>>> Sure thing.  The original hang (minus provocation patch) being
>>> annoyingly non-deterministic, this will (hopefully) take a while.
>>
>> You can verify with the provocation patch as well first, if you wish.
> 
> Done, box still seems fine.

Omar had some (valid) complaints, can you try this one as well? You
can also find it as a series here:

http://git.kernel.dk/cgit/linux-block/log/?h=bfq-cleanups

I'll repost the series shortly, need to check if it actually builds and
boots.

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index ebc264c87a09..cba6e82153a2 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -487,46 +487,6 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
 }
 
 /*
- * See the comments on bfq_limit_depth for the purpose of
- * the depths set in the function.
- */
-static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
-{
-	bfqd->sb_shift = bt->sb.shift;
-
-	/*
-	 * In-word depths if no bfq_queue is being weight-raised:
-	 * leaving 25% of tags only for sync reads.
-	 *
-	 * In next formulas, right-shift the value
-	 * (1U<<bfqd->sb_shift), instead of computing directly
-	 * (1U<<(bfqd->sb_shift - something)), to be robust against
-	 * any possible value of bfqd->sb_shift, without having to
-	 * limit 'something'.
-	 */
-	/* no more than 50% of tags for async I/O */
-	bfqd->word_depths[0][0] = max((1U<<bfqd->sb_shift)>>1, 1U);
-	/*
-	 * no more than 75% of tags for sync writes (25% extra tags
-	 * w.r.t. async I/O, to prevent async I/O from starving sync
-	 * writes)
-	 */
-	bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
-
-	/*
-	 * In-word depths in case some bfq_queue is being weight-
-	 * raised: leaving ~63% of tags for sync reads. This is the
-	 * highest percentage for which, in our tests, application
-	 * start-up times didn't suffer from any regression due to tag
-	 * shortage.
-	 */
-	/* no more than ~18% of tags for async I/O */
-	bfqd->word_depths[1][0] = max(((1U<<bfqd->sb_shift) * 3)>>4, 1U);
-	/* no more than ~37% of tags for sync writes (~20% extra tags) */
-	bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);
-}
-
-/*
  * Async I/O can easily starve sync I/O (both sync reads and sync
  * writes), by consuming all tags. Similarly, storms of sync writes,
  * such as those that sync(2) may trigger, can starve sync reads.
@@ -535,25 +495,11 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
  */
 static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 {
-	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
-	struct sbitmap_queue *bt;
 
 	if (op_is_sync(op) && !op_is_write(op))
 		return;
 
-	if (data->flags & BLK_MQ_REQ_RESERVED) {
-		if (unlikely(!tags->nr_reserved_tags)) {
-			WARN_ON_ONCE(1);
-			return;
-		}
-		bt = &tags->breserved_tags;
-	} else
-		bt = &tags->bitmap_tags;
-
-	if (unlikely(bfqd->sb_shift != bt->sb.shift))
-		bfq_update_depths(bfqd, bt);
-
 	data->shallow_depth =
 		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
 
@@ -5105,6 +5051,66 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
 	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
 }
 
+/*
+ * See the comments on bfq_limit_depth for the purpose of
+ * the depths set in the function. Return minimum shallow depth we'll use.
+ */
+static unsigned int bfq_update_depths(struct bfq_data *bfqd,
+				      struct sbitmap_queue *bt)
+{
+	unsigned int i, j, min_shallow = UINT_MAX;
+
+	bfqd->sb_shift = bt->sb.shift;
+
+	/*
+	 * In-word depths if no bfq_queue is being weight-raised:
+	 * leaving 25% of tags only for sync reads.
+	 *
+	 * In next formulas, right-shift the value
+	 * (1U<<bfqd->sb_shift), instead of computing directly
+	 * (1U<<(bfqd->sb_shift - something)), to be robust against
+	 * any possible value of bfqd->sb_shift, without having to
+	 * limit 'something'.
+	 */
+	/* no more than 50% of tags for async I/O */
+	bfqd->word_depths[0][0] = max((1U<<bfqd->sb_shift)>>1, 1U);
+	/*
+	 * no more than 75% of tags for sync writes (25% extra tags
+	 * w.r.t. async I/O, to prevent async I/O from starving sync
+	 * writes)
+	 */
+	bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
+
+	/*
+	 * In-word depths in case some bfq_queue is being weight-
+	 * raised: leaving ~63% of tags for sync reads. This is the
+	 * highest percentage for which, in our tests, application
+	 * start-up times didn't suffer from any regression due to tag
+	 * shortage.
+	 */
+	/* no more than ~18% of tags for async I/O */
+	bfqd->word_depths[1][0] = max(((1U<<bfqd->sb_shift) * 3)>>4, 1U);
+	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+	bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < 2; j++)
+			min_shallow = min(min_shallow, bfqd->word_depths[i][j]);
+
+	return min_shallow;
+}
+
+static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index)
+{
+	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
+	struct blk_mq_tags *tags = hctx->sched_tags;
+	unsigned int min_shallow;
+
+	min_shallow = bfq_update_depths(bfqd, &tags->bitmap_tags);
+	sbitmap_queue_shallow_depth(&tags->bitmap_tags, min_shallow);
+	return 0;
+}
+
 static void bfq_exit_queue(struct elevator_queue *e)
 {
 	struct bfq_data *bfqd = e->elevator_data;
@@ -5526,6 +5532,7 @@ static struct elevator_type iosched_bfq_mq = {
 		.requests_merged	= bfq_requests_merged,
 		.request_merged		= bfq_request_merged,
 		.has_work		= bfq_has_work,
+		.init_hctx		= bfq_init_hctx,
 		.init_sched		= bfq_init_queue,
 		.exit_sched		= bfq_exit_queue,
 	},
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4e9d83594cca..64630caaf27e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -360,9 +360,11 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 
 		/*
 		 * Flush requests are special and go directly to the
-		 * dispatch list.
+		 * dispatch list. Don't include reserved tags in the
+		 * limiting, as it isn't useful.
 		 */
-		if (!op_is_flush(op) && e->type->ops.mq.limit_depth)
+		if (!op_is_flush(op) && e->type->ops.mq.limit_depth &&
+		    !(data->flags & BLK_MQ_REQ_RESERVED))
 			e->type->ops.mq.limit_depth(op, data);
 	}
 
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 841585f6e5f2..99059789f45f 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -164,6 +164,17 @@ static inline void sbitmap_free(struct sbitmap *sb)
 void sbitmap_resize(struct sbitmap *sb, unsigned int depth);
 
 /**
+ * sbitmap_queue_shallow_depth() - Inform sbitmap about shallow depth changes
+ * @sbq: Bitmap queue in question
+ * @depth: Shallow depth limit
+ *
+ * Due to how sbitmap does batched wakes, if a user of sbitmap updates the
+ * shallow depth, then we might need to update our batched wake counts.
+ *
+ */
+void sbitmap_queue_shallow_depth(struct sbitmap_queue *sbq, unsigned int depth);
+
+/**
  * sbitmap_get() - Try to allocate a free bit from a &struct sbitmap.
  * @sb: Bitmap to allocate from.
  * @alloc_hint: Hint for where to start searching for a free bit.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index e6a9c06ec70c..a4fb48e4c26b 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -327,7 +327,8 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_init_node);
 
-void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
+static void sbitmap_queue_update_batch_wake(struct sbitmap_queue *sbq,
+					    unsigned int depth)
 {
 	unsigned int wake_batch = sbq_calc_wake_batch(depth);
 	int i;
@@ -342,6 +343,11 @@ void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
 		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
 			atomic_set(&sbq->ws[i].wait_cnt, 1);
 	}
+}
+
+void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
+{
+	sbitmap_queue_update_batch_wake(sbq, depth);
 	sbitmap_resize(&sbq->sb, depth);
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_resize);
@@ -403,6 +409,17 @@ int __sbitmap_queue_get_shallow(struct sbitmap_queue *sbq,
 }
 EXPORT_SYMBOL_GPL(__sbitmap_queue_get_shallow);
 
+/*
+ * User has limited the shallow depth to 'depth', update batch wake counts
+ * if depth is smaller than the sbitmap_queue depth
+ */
+void sbitmap_queue_shallow_depth(struct sbitmap_queue *sbq, unsigned int depth)
+{
+	if (depth < sbq->sb.depth)
+		sbitmap_queue_update_batch_wake(sbq, depth);
+}
+EXPORT_SYMBOL_GPL(sbitmap_queue_shallow_depth);
+
 static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
 {
 	int i, wake_index;

-- 
Jens Axboe
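
[Editorial note: as a quick sanity check of the limits this patch re-installs, the word_depths formulas reduce to the following sketch; sb_shift = 6 (64 tags per sbitmap word) is just an example value.]

```python
# Sketch mirroring the bfq_update_depths() formulas in the patch above;
# not kernel code. Rows: [no weight-raising, weight-raised]; columns:
# [async I/O, sync writes].
def word_depths(sb_shift):
    w = 1 << sb_shift
    return [
        [max(w >> 1, 1),       max((w * 3) >> 2, 1)],  # 50% async, 75% sync writes
        [max((w * 3) >> 4, 1), max((w * 6) >> 4, 1)],  # ~18% async, ~37% sync writes
    ]

depths = word_depths(6)
print(depths)                             # [[32, 48], [12, 24]]
min_shallow = min(min(row) for row in depths)
print(min_shallow)                        # 12, handed to sbitmap_queue_shallow_depth()
```

The minimum over the table is what bfq_init_hctx() passes to sbitmap_queue_shallow_depth(), guaranteeing the wake batch never exceeds the smallest shallow depth bfq will ever request.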

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: bug in tag handling in blk-mq?
  2018-05-09 19:50                             ` Jens Axboe
@ 2018-05-10  4:38                                 ` Mike Galbraith
  0 siblings, 0 replies; 31+ messages in thread
From: Mike Galbraith @ 2018-05-10  4:38 UTC (permalink / raw)
  To: Jens Axboe, Paolo Valente
  Cc: Christoph Hellwig, linux-block, Ulf Hansson, LKML, Linus Walleij,
	Oleksandr Natalenko

On Wed, 2018-05-09 at 13:50 -0600, Jens Axboe wrote:
> On 5/9/18 12:31 PM, Mike Galbraith wrote:
> > On Wed, 2018-05-09 at 11:01 -0600, Jens Axboe wrote:
> >> On 5/9/18 10:57 AM, Mike Galbraith wrote:
> >>
> >>>>> Confirmed.  Impressive high speed bug stomping.
> >>>>
> >>>> Well, that's good news. Can I get you to try this patch?
> >>>
> >>> Sure thing.  The original hang (minus provocation patch) being
> >>> annoyingly non-deterministic, this will (hopefully) take a while.
> >>
> >> You can verify with the provocation patch as well first, if you wish.
> > 
> > Done, box still seems fine.
> 
> Omar had some (valid) complaints, can you try this one as well? You
> can also find it as a series here:
> 
> http://git.kernel.dk/cgit/linux-block/log/?h=bfq-cleanups
> 
> I'll repost the series shortly, need to check if it actually builds and
> boots.

I applied the series (+ provocation), all is well.

	-Mike

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2018-05-10  4:38 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-07 14:03 bug in tag handling in blk-mq? Paolo Valente
2018-05-07 14:03 ` Paolo Valente
2018-05-07 16:39 ` Jens Axboe
2018-05-07 18:02   ` Paolo Valente
2018-05-07 18:02     ` Paolo Valente
2018-05-08  4:51     ` Mike Galbraith
2018-05-08  4:51       ` Mike Galbraith
2018-05-08  8:37       ` Mike Galbraith
2018-05-08  8:37         ` Mike Galbraith
2018-05-08 14:55         ` Jens Axboe
2018-05-08 14:55           ` Jens Axboe
2018-05-08 16:42           ` Mike Galbraith
2018-05-08 16:42             ` Mike Galbraith
2018-05-08 20:37             ` Jens Axboe
2018-05-08 21:19               ` Jens Axboe
2018-05-09  1:09                 ` Jens Axboe
2018-05-09  4:11                   ` Mike Galbraith
2018-05-09  4:11                     ` Mike Galbraith
2018-05-09  5:06                     ` Paolo Valente
2018-05-09  5:06                       ` Paolo Valente
2018-05-09 15:18                     ` Jens Axboe
2018-05-09 16:57                       ` Mike Galbraith
2018-05-09 16:57                         ` Mike Galbraith
2018-05-09 17:01                         ` Jens Axboe
2018-05-09 18:31                           ` Mike Galbraith
2018-05-09 18:31                             ` Mike Galbraith
2018-05-09 19:50                             ` Jens Axboe
2018-05-10  4:38                               ` Mike Galbraith
2018-05-10  4:38                                 ` Mike Galbraith
2018-05-09  5:09               ` Mike Galbraith
2018-05-09  5:09                 ` Mike Galbraith
