* [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing
@ 2020-04-08 15:03 Douglas Anderson
  2020-04-08 15:03 ` [PATCH v4 1/4] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick Douglas Anderson
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Douglas Anderson @ 2020-04-08 15:03 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: paolo.valente, groeck, Gwendal Grignou, linux-scsi, linux-block,
	Ming Lei, sqazi, Douglas Anderson, André Almeida,
	Bart Van Assche, Damien Le Moal, John Garry, Pavel Begunkov,
	Sagi Grimberg, linux-kernel

While doing reboot testing, I found that occasionally my device would
trigger the hung task detector.  Many tasks were stuck waiting for a
blkdev mutex, but at least one task in the system was always sitting
waiting for IO to complete (and holding the blkdev mutex).  One
example of a task that was just waiting for its IO to complete on one
reboot:

 udevd           D    0  2177    306 0x00400209
 Call trace:
  __switch_to+0x15c/0x17c
  __schedule+0x6e0/0x928
  schedule+0x8c/0xbc
  schedule_timeout+0x9c/0xfc
  io_schedule_timeout+0x24/0x48
  do_wait_for_common+0xd0/0x160
  wait_for_completion_io_timeout+0x54/0x74
  blk_execute_rq+0x9c/0xd8
  __scsi_execute+0x104/0x198
  scsi_test_unit_ready+0xa0/0x154
  sd_check_events+0xb4/0x164
  disk_check_events+0x58/0x154
  disk_clear_events+0x74/0x110
  check_disk_change+0x28/0x6c
  sd_open+0x5c/0x130
  __blkdev_get+0x20c/0x3d4
  blkdev_get+0x74/0x170
  blkdev_open+0x94/0xa8
  do_dentry_open+0x268/0x3a0
  vfs_open+0x34/0x40
  path_openat+0x39c/0xdf4
  do_filp_open+0x90/0x10c
  do_sys_open+0x150/0x3c8
  ...

I've reproduced this on two systems: one boots from an internal UFS
disk and one from eMMC.  Each has a card reader attached via USB with
an SD card plugged in.  On the USB-attached SD card is a disk with 12
partitions (a Chrome OS test image), if it matters.  The system
doesn't do much with the USB disk other than probe it (it's plugged
into my system to help me recover).

From digging, I believe that there are two separate but related
issues.  Both issues relate to the SCSI code saying that there is no
budget.
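
For background, here is a minimal sketch of how the budget hooks are
wired up (simplified from the v5.6-era code; the function names are
real but the bodies are paraphrased and elided, so treat this as a
sketch rather than the actual implementation):

 /*
  * blk-mq asks the driver for "budget" before dispatching a request.
  * Only SCSI implements these hooks; its budget is the per-device
  * in-flight limit tracked in sdev->device_busy.
  */
 static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;

 	/* Succeeds only while device_busy is below the queue depth. */
 	return scsi_dev_queue_ready(q, sdev);
 }

 static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
 {
 	struct scsi_device *sdev = hctx->queue->queuedata;

 	atomic_dec(&sdev->device_busy);		/* release budget */
 }

When get_budget() fails, the dispatcher stops and relies on someone
kicking the queue later, once budget frees up.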

I have tested with only one or the other of the two fixes applied and
found that I could still encounter hung tasks.  This deserves a bit of
explanation.  To me, it's fairly obvious that the first fix wouldn't
fix the problems talked about in the second patch.  However, it's less
obvious why the second patch doesn't fix the problems in
blk_mq_dispatch_rq_list().  It turns out that it _almost_ does
(problems become much more rare), but I did manage to get a single
trace where the "kick" scheduled by the second patch happened really
quickly.  The scheduled kick then ran and found nothing to do.  This
happened in parallel to a task running in blk_mq_dispatch_rq_list()
which hadn't gotten around to splicing the list back into
hctx->dispatch.  This is why we need both fixes.

Most of my testing has been atop Chrome OS 5.4's kernel tree which
currently has v5.4.30 merged in.  The Chrome OS 5.4 tree also has a
patch by Salman Qazi, namely ("block: Limit number of items taken from
the I/O scheduler in one go").  Reverting that patch didn't make the
hung tasks go away, so I kept it in for most of my testing.

I have also done some testing on mainline Linux (most on what git
describe calls v5.6-rc7-227-gf3e69428b5e2) even without Salman's
patch.  I found that I could reproduce the problems there and that
traces looked about the same as I saw on the downstream branch.  These
patches were also confirmed to fix the problems on mainline.

Chrome OS is currently set up to use the BFQ scheduler and I found that
I couldn't reproduce the problems without BFQ.  As discussed in the
second patch this is believed to be because BFQ sometimes returns
"true" from has_work() but then NULL from dispatch_request().

I'll insert my usual caveat that I'm sending patches to code that I
know very little about.  If I'm making a total bozo patch here, please
help me figure out how I should fix the problems I found in a better
way.

If you want to see a totally ridiculous amount of chatter where I
stumbled around a whole bunch trying to figure out what was wrong and
how to fix it, feel free to read <https://crbug.com/1061950>.  I
promise it will make your eyes glaze over right away if this cover
letter didn't already do that.  Specifically, comment 79 in that bug
includes a link to my ugly prototype of making BFQ's has_work() more
exact (I only managed it by actually defining _both_ an exact and
inexact function to avoid circular locking problems when it was called
directly from blk_mq_hctx_has_pending()).  Comment 79 also has more
thoughts about alternatives considered.

I don't know whether the bugs these patches fix are regressions or
have always been present.  As noted above, I could only reproduce the
problem with BFQ enabled, which makes it nearly impossible to bisect
very far back.  I haven't listed
any "Fixes" tags here, but if someone felt it was appropriate to
backport this to some stable trees that seems like it'd be nice.
Presumably at least 5.4 stable would make sense.

Thanks to Salman Qazi, Paolo Valente, and Guenter Roeck who spent a
bunch of time helping me trawl through some of this code and reviewing
early versions of this patch.

Changes in v4:
- Only kick in blk_mq_do_dispatch_ctx() / blk_mq_do_dispatch_sched().

Changes in v3:
- Note why blk_mq_dispatch_rq_list() change is needed.
- ("blk-mq: Add blk_mq_delay_run_hw_queues() API call") new for v3
- Always kick when putting the budget.
- Delay blk_mq_do_dispatch_sched() kick by 3 ms for inexact has_work().
- Totally rewrote commit message.
- ("Revert "scsi: core: run queue...") new for v3.

Changes in v2:
- Replace ("scsi: core: Fix stall...") w/ ("blk-mq: Rerun dispatch...")

Douglas Anderson (4):
  blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick
  blk-mq: Add blk_mq_delay_run_hw_queues() API call
  blk-mq: Rerun dispatching in the case of budget contention
  Revert "scsi: core: run queue if SCSI device queue isn't ready and
    queue is idle"

 block/blk-mq-sched.c    | 18 ++++++++++++++++++
 block/blk-mq.c          | 30 +++++++++++++++++++++++++++---
 drivers/scsi/scsi_lib.c |  7 +------
 include/linux/blk-mq.h  |  1 +
 4 files changed, 47 insertions(+), 9 deletions(-)

-- 
2.26.0.292.g33ef6b2f38-goog



* [PATCH v4 1/4] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick
  2020-04-08 15:03 [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
@ 2020-04-08 15:03 ` Douglas Anderson
  2020-04-08 15:04 ` [PATCH v4 2/4] blk-mq: Add blk_mq_delay_run_hw_queues() API call Douglas Anderson
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Douglas Anderson @ 2020-04-08 15:03 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: paolo.valente, groeck, Gwendal Grignou, linux-scsi, linux-block,
	Ming Lei, sqazi, Douglas Anderson, linux-kernel

In blk_mq_dispatch_rq_list(), if blk_mq_sched_needs_restart() returns
true and the driver returns BLK_STS_RESOURCE then we'll kick the
queue.  However, there's another case where we might need to kick it.
If we are unable to get budget we can be in much the same state as
when the driver returns BLK_STS_RESOURCE, so we should treat it the
same.

It should be noted that even if we add a whole bunch of extra kicking
to the queue in other patches, this patch is still important.
Specifically, any kicking that happened before we re-spliced leftover
requests into 'hctx->dispatch' wouldn't have found any work, so we
really need to make sure we kick ourselves after we've done the
splicing.
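
For reference, the splice in question looks roughly like this
(paraphrased from the v5.6-era blk_mq_dispatch_rq_list(); heavily
elided, not the full function):

 if (!list_empty(list)) {
 	bool needs_restart;

 	spin_lock(&hctx->lock);
 	list_splice_tail_init(list, &hctx->dispatch);
 	spin_unlock(&hctx->lock);

 	/*
 	 * Only after the splice above can a queue run actually find the
 	 * leftover requests, which is why the kick added below (and not
 	 * some earlier kick) is the one that matters.
 	 */
 	needs_restart = blk_mq_sched_needs_restart(hctx);
 	...
 }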

Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
---

Changes in v4: None
Changes in v3:
- Note why blk_mq_dispatch_rq_list() change is needed.

Changes in v2: None

 block/blk-mq.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d92088dec6c3..2cd8d2b49ff4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1189,6 +1189,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 	bool no_tag = false;
 	int errors, queued;
 	blk_status_t ret = BLK_STS_OK;
+	bool no_budget_avail = false;
 
 	if (list_empty(list))
 		return false;
@@ -1205,8 +1206,10 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		rq = list_first_entry(list, struct request, queuelist);
 
 		hctx = rq->mq_hctx;
-		if (!got_budget && !blk_mq_get_dispatch_budget(hctx))
+		if (!got_budget && !blk_mq_get_dispatch_budget(hctx)) {
+			no_budget_avail = true;
 			break;
+		}
 
 		if (!blk_mq_get_driver_tag(rq)) {
 			/*
@@ -1311,13 +1314,15 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		 *
 		 * If driver returns BLK_STS_RESOURCE and SCHED_RESTART
 		 * bit is set, run queue after a delay to avoid IO stalls
-		 * that could otherwise occur if the queue is idle.
+		 * that could otherwise occur if the queue is idle.  We'll do
+		 * similar if we couldn't get budget and SCHED_RESTART is set.
 		 */
 		needs_restart = blk_mq_sched_needs_restart(hctx);
 		if (!needs_restart ||
 		    (no_tag && list_empty_careful(&hctx->dispatch_wait.entry)))
 			blk_mq_run_hw_queue(hctx, true);
-		else if (needs_restart && (ret == BLK_STS_RESOURCE))
+		else if (needs_restart && (ret == BLK_STS_RESOURCE ||
+					   no_budget_avail))
 			blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
 
 		blk_mq_update_dispatch_busy(hctx, true);
-- 
2.26.0.292.g33ef6b2f38-goog



* [PATCH v4 2/4] blk-mq: Add blk_mq_delay_run_hw_queues() API call
  2020-04-08 15:03 [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
  2020-04-08 15:03 ` [PATCH v4 1/4] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick Douglas Anderson
@ 2020-04-08 15:04 ` Douglas Anderson
  2020-04-09  1:13   ` Ming Lei
  2020-04-08 15:04 ` [PATCH v4 3/4] blk-mq: Rerun dispatching in the case of budget contention Douglas Anderson
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Douglas Anderson @ 2020-04-08 15:04 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: paolo.valente, groeck, Gwendal Grignou, linux-scsi, linux-block,
	Ming Lei, sqazi, Douglas Anderson, André Almeida,
	Bart Van Assche, Damien Le Moal, John Garry, Pavel Begunkov,
	Sagi Grimberg, linux-kernel

We have:
* blk_mq_run_hw_queue()
* blk_mq_delay_run_hw_queue()
* blk_mq_run_hw_queues()

...but not blk_mq_delay_run_hw_queues(), presumably because nobody
needed it before now.  Since we need it for a later patch in this
series, add it.
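
Usage mirrors blk_mq_run_hw_queues().  As an example of the intended
call pattern, the next patch in this series invokes it from
blk-mq-sched.c like so (BLK_MQ_BUDGET_DELAY is a 3 ms constant that
patch defines):

 	blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);	/* re-run all hctxs after ~3 ms */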

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---

Changes in v4: None
Changes in v3:
- ("blk-mq: Add blk_mq_delay_run_hw_queues() API call") new for v3

Changes in v2: None

 block/blk-mq.c         | 19 +++++++++++++++++++
 include/linux/blk-mq.h |  1 +
 2 files changed, 20 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2cd8d2b49ff4..ea0cd970a3ff 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1537,6 +1537,25 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
 }
 EXPORT_SYMBOL(blk_mq_run_hw_queues);
 
+/**
+ * blk_mq_delay_run_hw_queues - Run all hardware queues asynchronously.
+ * @q: Pointer to the request queue to run.
+ * @msecs: Milliseconds of delay to wait before running the queues.
+ */
+void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
+{
+	struct blk_mq_hw_ctx *hctx;
+	int i;
+
+	queue_for_each_hw_ctx(q, hctx, i) {
+		if (blk_mq_hctx_stopped(hctx))
+			continue;
+
+		blk_mq_delay_run_hw_queue(hctx, msecs);
+	}
+}
+EXPORT_SYMBOL(blk_mq_delay_run_hw_queues);
+
 /**
  * blk_mq_queue_stopped() - check whether one or more hctxs have been stopped
  * @q: request queue.
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 11cfd6470b1a..405f8c196517 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -503,6 +503,7 @@ void blk_mq_unquiesce_queue(struct request_queue *q);
 void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs);
 void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async);
 void blk_mq_run_hw_queues(struct request_queue *q, bool async);
+void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs);
 void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
 		busy_tag_iter_fn *fn, void *priv);
 void blk_mq_tagset_wait_completed_request(struct blk_mq_tag_set *tagset);
-- 
2.26.0.292.g33ef6b2f38-goog



* [PATCH v4 3/4] blk-mq: Rerun dispatching in the case of budget contention
  2020-04-08 15:03 [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
  2020-04-08 15:03 ` [PATCH v4 1/4] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick Douglas Anderson
  2020-04-08 15:04 ` [PATCH v4 2/4] blk-mq: Add blk_mq_delay_run_hw_queues() API call Douglas Anderson
@ 2020-04-08 15:04 ` Douglas Anderson
  2020-04-09  1:13   ` Ming Lei
  2020-04-08 15:04 ` [PATCH v4 4/4] Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle" Douglas Anderson
  2020-04-20 14:45 ` [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Doug Anderson
  4 siblings, 1 reply; 12+ messages in thread
From: Douglas Anderson @ 2020-04-08 15:04 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: paolo.valente, groeck, Gwendal Grignou, linux-scsi, linux-block,
	Ming Lei, sqazi, Douglas Anderson, linux-kernel

Whenever a thread running blk-mq code tries to get budget and fails,
it immediately stops doing work and assumes that, when budget is freed
up, the queues will be kicked and whatever work the thread was trying
to do will be tried again.

One path where budget is freed and queues are kicked in the normal
case can be seen in scsi_finish_command().  Specifically:
- scsi_finish_command()
  - scsi_device_unbusy()
    - # Decrement "device_busy", AKA release budget
  - scsi_io_completion()
    - scsi_end_request()
      - blk_mq_run_hw_queues()

The above is all well and good.  The problem comes up when a thread
claims the budget but then releases it without actually dispatching
any work.  Since we didn't schedule any work we'll never run the path
of finishing work / kicking the queues.

This isn't actually a problem very often, which is why the issue has
existed for a while without anyone noticing.  Specifically, we only get
into this situation when we unexpectedly find that we aren't going to
do any work.  Code that later receives new work kicks the queues.  All
good, right?

The problem shows up, however, if timing is just wrong and we hit a
race.  To see this race let's think about the case where we only have
a budget of 1 (only one thread can hold budget).  Now imagine that a
thread got budget and then decided not to dispatch work.  It's about
to call put_budget() but then the thread gets context switched out for
a long, long time.  While in this state, any and all kicks of the
queue (like when we receive new work) will be no-ops because
nobody can get budget.  Finally the thread holding budget gets to run
again and returns.  All the normal kicks will have been no-ops and we
have an I/O stall.

As you can see from the above, you need just the right timing to see
the race.  To start with, it only happens if we thought we had work,
actually managed to get the budget, but then turned out not to have
any work after all.  That's pretty rare.  Even then, there's
usually a very small amount of time between realizing that there's no
work and putting the budget.  During this small amount of time new
work has to come in and the queue kick has to make it all the way to
trying to get the budget and fail.  It's pretty unlikely.

One case where this could have failed is illustrated by an example of
threads running blk_mq_do_dispatch_sched():

* Threads A and B both run has_work() at the same time with the same
  "hctx".  Imagine has_work() is exact.  There's no lock, so it's OK
  if Thread A and B both get back true.
* Thread B gets interrupted for a long time right after it decides
  that there is work.  Maybe its CPU gets an interrupt and the
  interrupt handler is slow.
* Thread A runs, gets budget, dispatches work.
* Thread A's work finishes and budget is released.
* Thread B finally runs again and gets budget.
* Since Thread A already took care of the work and no new work has
  come in, Thread B will get NULL from dispatch_request().  I believe
  this is specifically why dispatch_request() is allowed to return
  NULL in the first place, even if has_work() must be exact.
* Thread B will now be holding the budget and is about to call
  put_budget(), but hasn't called it yet.
* Thread B gets interrupted for a long time (again).  Dang interrupts.
* Now Thread C (maybe with a different "hctx" but the same queue)
  comes along and runs blk_mq_do_dispatch_sched().
* Thread C won't do anything because it can't get budget.
* Finally Thread B will run again and put the budget without kicking
  any queues.

Even though the example above is with blk_mq_do_dispatch_sched() I
believe the race is possible any time someone is holding budget but
doesn't do work.
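
To make the window concrete, here is a simplified sketch of the
pre-patch blk_mq_do_dispatch_sched() loop (paraphrased and heavily
elided; see the diff below for the real code):

 static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	struct elevator_queue *e = q->elevator;
 	LIST_HEAD(rq_list);

 	do {
 		struct request *rq;

 		if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
 			break;
 		if (!blk_mq_get_dispatch_budget(hctx))
 			break;
 		rq = e->type->ops.dispatch_request(hctx);
 		if (!rq) {
 			blk_mq_put_dispatch_budget(hctx);
 			/*
 			 * RACE WINDOW: while this thread held the budget,
 			 * every kick of the queue failed to get budget;
 			 * after the put, nothing re-kicks the queue.
 			 */
 			break;
 		}
 		list_add(&rq->queuelist, &rq_list);
 	} while (blk_mq_dispatch_rq_list(q, &rq_list, true));
 }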

Unfortunately, the unlikely becomes much more likely if you happen to
be using the BFQ I/O scheduler.  BFQ, by design, sometimes returns "true"
for has_work() but then NULL for dispatch_request() and stays in this
state for a while (currently up to 9 ms).  Suddenly you only need one
race to hit, not two races in a row.  With my current setup this is
easy to reproduce in reboot tests and traces have actually shown that
we hit a race similar to the one described above.

Note that we only need to fix blk_mq_do_dispatch_sched() and
blk_mq_do_dispatch_ctx() and not the other places that put budget.  In
other cases we know that we have work to do on at least one "hctx" and
code already exists to kick that "hctx"'s queue.  When that work
finally finishes all the queues will be kicked using the normal flow.

One last note is that (at least in the SCSI case) budget is shared by
all "hctx"s that have the same queue.  Thus we need to make sure to
kick the whole queue, not just re-run dispatching on a single "hctx".

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---

Changes in v4:
- Only kick in blk_mq_do_dispatch_ctx() / blk_mq_do_dispatch_sched().

Changes in v3:
- Always kick when putting the budget.
- Delay blk_mq_do_dispatch_sched() kick by 3 ms for inexact has_work().
- Totally rewrote commit message.

Changes in v2:
- Replace ("scsi: core: Fix stall...") w/ ("blk-mq: Rerun dispatch...")

 block/blk-mq-sched.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 74cedea56034..eca81bd4010c 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -80,6 +80,8 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
 	blk_mq_run_hw_queue(hctx, true);
 }
 
+#define BLK_MQ_BUDGET_DELAY	3		/* ms units */
+
 /*
  * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
  * its queue by itself in its completion handler, so we don't need to
@@ -103,6 +105,14 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 		rq = e->type->ops.dispatch_request(hctx);
 		if (!rq) {
 			blk_mq_put_dispatch_budget(hctx);
+			/*
+			 * We're releasing without dispatching. Holding the
+			 * budget could have blocked any "hctx"s with the
+			 * same queue and if we didn't dispatch then there's
+			 * no guarantee anyone will kick the queue.  Kick it
+			 * ourselves.
+			 */
+			blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
 			break;
 		}
 
@@ -149,6 +159,14 @@ static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
 		rq = blk_mq_dequeue_from_ctx(hctx, ctx);
 		if (!rq) {
 			blk_mq_put_dispatch_budget(hctx);
+			/*
+			 * We're releasing without dispatching. Holding the
+			 * budget could have blocked any "hctx"s with the
+			 * same queue and if we didn't dispatch then there's
+			 * no guarantee anyone will kick the queue.  Kick it
+			 * ourselves.
+			 */
+			blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
 			break;
 		}
 
-- 
2.26.0.292.g33ef6b2f38-goog



* [PATCH v4 4/4] Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle"
  2020-04-08 15:03 [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
                   ` (2 preceding siblings ...)
  2020-04-08 15:04 ` [PATCH v4 3/4] blk-mq: Rerun dispatching in the case of budget contention Douglas Anderson
@ 2020-04-08 15:04 ` Douglas Anderson
  2020-04-09  1:24   ` Ming Lei
  2020-04-13 17:51   ` Martin K. Petersen
  2020-04-20 14:45 ` [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Doug Anderson
  4 siblings, 2 replies; 12+ messages in thread
From: Douglas Anderson @ 2020-04-08 15:04 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: paolo.valente, groeck, Gwendal Grignou, linux-scsi, linux-block,
	Ming Lei, sqazi, Douglas Anderson, linux-kernel

This reverts commit 7e70aa789d4a0c89dbfbd2c8a974a4df717475ec.

Now that we have the patches ("blk-mq: In blk_mq_dispatch_rq_list()
"no budget" is a reason to kick") and ("blk-mq: Rerun dispatching in
the case of budget contention") we should no longer need the fix in
the SCSI code.  Revert it, resolving conflicts with other patches that
have touched this code.

With this revert (and the two new patches) I can run the script that
was in commit 7e70aa789d4a ("scsi: core: run queue if SCSI device
queue isn't ready and queue is idle") in a loop with no failure.  If I
do this revert without the two new patches I can easily get a failure.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---
I don't know for sure that we can revert this patch, but at the very
least the original test case now passes.  If there is any question
about this, we can just drop this patch.

Changes in v4: None
Changes in v3:
- ("Revert "scsi: core: run queue...") new for v3.

Changes in v2: None

 drivers/scsi/scsi_lib.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 610ee41fa54c..b9e28ea76cc0 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1632,12 +1632,7 @@ static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;
 
-	if (scsi_dev_queue_ready(q, sdev))
-		return true;
-
-	if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev))
-		blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY);
-	return false;
+	return scsi_dev_queue_ready(q, sdev);
 }
 
 static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
-- 
2.26.0.292.g33ef6b2f38-goog



* Re: [PATCH v4 2/4] blk-mq: Add blk_mq_delay_run_hw_queues() API call
  2020-04-08 15:04 ` [PATCH v4 2/4] blk-mq: Add blk_mq_delay_run_hw_queues() API call Douglas Anderson
@ 2020-04-09  1:13   ` Ming Lei
  0 siblings, 0 replies; 12+ messages in thread
From: Ming Lei @ 2020-04-09  1:13 UTC (permalink / raw)
  To: Douglas Anderson
  Cc: axboe, jejb, martin.petersen, paolo.valente, groeck,
	Gwendal Grignou, linux-scsi, linux-block, sqazi,
	André Almeida, Bart Van Assche, Damien Le Moal, John Garry,
	Pavel Begunkov, Sagi Grimberg, linux-kernel

On Wed, Apr 08, 2020 at 08:04:00AM -0700, Douglas Anderson wrote:
> [...]

Reviewed-by: Ming Lei <ming.lei@redhat.com>

-- 
Ming



* Re: [PATCH v4 3/4] blk-mq: Rerun dispatching in the case of budget contention
  2020-04-08 15:04 ` [PATCH v4 3/4] blk-mq: Rerun dispatching in the case of budget contention Douglas Anderson
@ 2020-04-09  1:13   ` Ming Lei
  0 siblings, 0 replies; 12+ messages in thread
From: Ming Lei @ 2020-04-09  1:13 UTC (permalink / raw)
  To: Douglas Anderson
  Cc: axboe, jejb, martin.petersen, paolo.valente, groeck,
	Gwendal Grignou, linux-scsi, linux-block, sqazi, linux-kernel

On Wed, Apr 08, 2020 at 08:04:01AM -0700, Douglas Anderson wrote:
> [...]

Reviewed-by: Ming Lei <ming.lei@redhat.com>

-- 
Ming



* Re: [PATCH v4 4/4] Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle"
  2020-04-08 15:04 ` [PATCH v4 4/4] Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle" Douglas Anderson
@ 2020-04-09  1:24   ` Ming Lei
  2020-04-13 17:51   ` Martin K. Petersen
  1 sibling, 0 replies; 12+ messages in thread
From: Ming Lei @ 2020-04-09  1:24 UTC (permalink / raw)
  To: Douglas Anderson
  Cc: axboe, jejb, martin.petersen, paolo.valente, groeck,
	Gwendal Grignou, linux-scsi, linux-block, sqazi, linux-kernel

On Wed, Apr 08, 2020 at 08:04:02AM -0700, Douglas Anderson wrote:
> [...]
> I don't know for sure that we can revert this patch, but at the very
> least the original test case now passes.  If there is any question
> about this, we can just drop this patch.

I think it is safe to revert this patch.

This patch should have been a workaround for one race in the case of
dispatch from hctx->dispatch.  When dispatching from the scheduler
queue, any IO completion will re-run all hctxs, so there is no need
for the trick.

Reviewed-by: Ming Lei <ming.lei@redhat.com>


thanks,
Ming



* Re: [PATCH v4 4/4] Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle"
  2020-04-08 15:04 ` [PATCH v4 4/4] Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle" Douglas Anderson
  2020-04-09  1:24   ` Ming Lei
@ 2020-04-13 17:51   ` Martin K. Petersen
  1 sibling, 0 replies; 12+ messages in thread
From: Martin K. Petersen @ 2020-04-13 17:51 UTC (permalink / raw)
  To: Douglas Anderson
  Cc: axboe, jejb, martin.petersen, paolo.valente, groeck,
	Gwendal Grignou, linux-scsi, linux-block, Ming Lei, sqazi,
	linux-kernel


Douglas,

> This reverts commit 7e70aa789d4a0c89dbfbd2c8a974a4df717475ec.
> [...]

Looks good to me, never really liked the original commit.

Acked-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


* Re: [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing
  2020-04-08 15:03 [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
                   ` (3 preceding siblings ...)
  2020-04-08 15:04 ` [PATCH v4 4/4] Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle" Douglas Anderson
@ 2020-04-20 14:45 ` Doug Anderson
  2020-04-20 15:49   ` Jens Axboe
  4 siblings, 1 reply; 12+ messages in thread
From: Doug Anderson @ 2020-04-20 14:45 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Paolo Valente, Guenter Roeck, Gwendal Grignou, linux-scsi,
	linux-block, Ming Lei, Salman Qazi, André Almeida,
	Bart Van Assche, Damien Le Moal, John Garry, Pavel Begunkov,
	Sagi Grimberg, LKML, Martin K. Petersen, James E.J. Bottomley

Hi Jens,

On Wed, Apr 8, 2020 at 8:35 AM Douglas Anderson <dianders@chromium.org> wrote:
> [...]

Is there anything blocking this series from landing?  All has been
quiet for a while.  All the patches have Ming's review and the SCSI
patch has Martin's Ack.  This seems like a great time to get it into
linux-next so it can get a whole bunch of testing before the next
merge window.

Thanks!


-Doug


* Re: [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing
  2020-04-20 14:45 ` [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing Doug Anderson
@ 2020-04-20 15:49   ` Jens Axboe
  2020-04-20 16:31     ` Doug Anderson
  0 siblings, 1 reply; 12+ messages in thread
From: Jens Axboe @ 2020-04-20 15:49 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Paolo Valente, Guenter Roeck, Gwendal Grignou, linux-scsi,
	linux-block, Ming Lei, Salman Qazi, André Almeida,
	Bart Van Assche, Damien Le Moal, John Garry, Pavel Begunkov,
	Sagi Grimberg, LKML, Martin K. Petersen, James E.J. Bottomley

On 4/20/20 8:45 AM, Doug Anderson wrote:
> Hi Jens,
> 
> On Wed, Apr 8, 2020 at 8:35 AM Douglas Anderson <dianders@chromium.org> wrote:
>> [...]
> 
> Is there anything blocking this series from landing?  All has been
> quiet for a while.  All the patches have Ming's review and the SCSI
> patch has Martin's Ack.  This seems like a great time to get it into
> linux-next so it can get a whole bunch of testing before the next
> merge window.

Current series doesn't apply - can you resend it?

-- 
Jens Axboe



* Re: [PATCH v4 0/4] blk-mq: Fix two causes of IO stalls found in reboot testing
  2020-04-20 15:49   ` Jens Axboe
@ 2020-04-20 16:31     ` Doug Anderson
  0 siblings, 0 replies; 12+ messages in thread
From: Doug Anderson @ 2020-04-20 16:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Paolo Valente, Guenter Roeck, Gwendal Grignou, linux-scsi,
	linux-block, Ming Lei, Salman Qazi, André Almeida,
	Bart Van Assche, Damien Le Moal, John Garry, Pavel Begunkov,
	Sagi Grimberg, LKML, Martin K. Petersen, James E.J. Bottomley

Hi,

On Mon, Apr 20, 2020 at 8:49 AM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 4/20/20 8:45 AM, Doug Anderson wrote:
> > [...]
>
> Current series doesn't apply - can you resend it?

Of course.  I've sent v5 based on 'linux_blk/block-5.7' and brought
the collected tags forward.  The conflict I found was with
5fe56de799ad ("blk-mq: Put driver tag in blk_mq_dispatch_rq_list()
when no budget").  I built and booted with my rebased series but I
didn't run a stress test since the resolution was easy.

Please yell if there's anything else you need from me.

Thanks!

-Doug

