* [PATCH 0/2] blk-mq: Fix two causes of IO stalls found in reboot testing
@ 2020-03-30 14:49 Douglas Anderson
  2020-03-30 14:49 ` [PATCH 1/2] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick Douglas Anderson
  2020-03-30 14:49 ` [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time Douglas Anderson
  0 siblings, 2 replies; 15+ messages in thread
From: Douglas Anderson @ 2020-03-30 14:49 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: linux-block, groeck, paolo.valente, linux-scsi, sqazi,
	Douglas Anderson, linux-kernel

While doing reboot testing, I found that occasionally my device would
trigger the hung task detector.  Many tasks were stuck waiting for a
blkdev mutex, but at least one task in the system was always sitting
waiting for IO to complete (and holding the blkdev mutex).  One
example of a task that was just waiting for its IO to complete on one
reboot:

 udevd           D    0  2177    306 0x00400209
 Call trace:
  __switch_to+0x15c/0x17c
  __schedule+0x6e0/0x928
  schedule+0x8c/0xbc
  schedule_timeout+0x9c/0xfc
  io_schedule_timeout+0x24/0x48
  do_wait_for_common+0xd0/0x160
  wait_for_completion_io_timeout+0x54/0x74
  blk_execute_rq+0x9c/0xd8
  __scsi_execute+0x104/0x198
  scsi_test_unit_ready+0xa0/0x154
  sd_check_events+0xb4/0x164
  disk_check_events+0x58/0x154
  disk_clear_events+0x74/0x110
  check_disk_change+0x28/0x6c
  sd_open+0x5c/0x130
  __blkdev_get+0x20c/0x3d4
  blkdev_get+0x74/0x170
  blkdev_open+0x94/0xa8
  do_dentry_open+0x268/0x3a0
  vfs_open+0x34/0x40
  path_openat+0x39c/0xdf4
  do_filp_open+0x90/0x10c
  do_sys_open+0x150/0x3c8
  ...

I've reproduced this on two systems: one boots from an internal UFS
disk and one from eMMC.  Each has a card reader attached via USB with
an SD card plugged in.  On the USB-attached SD card is a disk with 12
partitions (a Chrome OS test image), if it matters.  The system
doesn't do much with the USB disk other than probe it (it's plugged
into my system to help me recover).

From digging, I believe that there are two separate but related
issues.  Both issues relate to the SCSI code saying that there is no
budget.  In one case it seems clear that the blk-mq code should have
restarted itself.  In another case it seems that we have to make the
SCSI code kick the queues.

I have done testing with only one or the other of the two patches in
this series and found that I could still encounter hung tasks if only
one of the two patches was applied.  This deserves a bit of
explanation.  To me, it's fairly obvious that the blk-mq patch wouldn't
fix the problems talked about in the scsi patch.  However, it's less
obvious why the scsi patch doesn't fix the problems in
blk_mq_dispatch_rq_list().  It turns out that it _almost_ does
(problems become much more rare), but I did manage to get a single
trace where the "kick" scheduled by the scsi fix happened really
quickly.  The scheduled kick then ran and found nothing to do.  This
happened in parallel to a task running in blk_mq_dispatch_rq_list()
which hadn't gotten around to splicing the list back into
hctx->dispatch.  This is why we need both fixes or a heavier hammer
where we always kick whenever two threads request budget at the same
time.
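
Pictorially (in the same style as the diagram in the scsi patch), the
trace where things still hung looked roughly like this:

 Task in blk_mq_dispatch_rq_list()  Kick scheduled by the scsi fix
 =================================  =================================
 runs out of budget and breaks
 out of the dispatch loop
                                    runs, finds hctx->dispatch still
                                    empty, so nothing to do
 splices the unserved requests
 back into hctx->dispatch
 returns without re-running the
 queue (the case patch 1 fixes)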

Most of my testing has been atop Chrome OS 5.4's kernel tree which
currently has v5.4.27 merged in.  The Chrome OS 5.4 tree also has a
patch by Salman Qazi, namely ("block: Limit number of items taken from
the I/O scheduler in one go").  Reverting that patch didn't make the
hung tasks go away, so I kept it in for most of my testing.

I have also done some testing on mainline Linux (git describe says I'm
on v5.6-rc7-227-gf3e69428b5e2) even without Salman's patch.  I found
that I could reproduce the problems there and that traces looked about
the same as I saw on the downstream branch.  These patches were also
confirmed to fix the problems on mainline.

Chrome OS is currently set up to use the BFQ scheduler and I found that
I couldn't reproduce the problems without BFQ.  It's possible that
other schedulers simply never trip the code sequences I ran into or
it's possible that the timing was simply different.  One important
note is that to reproduce the problems the I/O scheduler must return
"true" from has_work() but then return NULL from dispatch_request().
In any case the problems I found do seem to be real problems
and theoretically should be possible with other schedulers.

I'll insert my usual caveat that I'm sending patches to code that I
know very little about.  If I'm making a total bozo patch here, please
help me figure out how I should fix the problems I found in a better
way.

If you want to see a total ridiculous amount of chatter where I
stumbled around a whole bunch trying to figure out what was wrong and
how to fix it, feel free to read <https://crbug.com/1061950>.  I
promise it will make your eyes glaze over right away if this cover
letter didn't already do that.

I don't know whether the problems these patches fix are a regression
or have always been there.  As per above I could only reproduce with
BFQ enabled, which makes it nearly impossible to test very far back.
I haven't listed
any "Fixes" tags here, but if someone felt it was appropriate to
backport this to some stable trees that seems like it'd be nice.
Presumably at least 5.4 stable would make sense.

Thanks to Salman Qazi, Paolo Valente, and Guenter Roeck who spent a
bunch of time helping me trawl through some of this code and reviewing
early versions of this patch.


Douglas Anderson (2):
  blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick
  scsi: core: Fix stall if two threads request budget at the same time

 block/blk-mq.c             | 11 ++++++++---
 drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
 drivers/scsi/scsi_scan.c   |  1 +
 include/scsi/scsi_device.h |  2 ++
 4 files changed, 35 insertions(+), 6 deletions(-)

-- 
2.26.0.rc2.310.g2932bb562d-goog



* [PATCH 1/2] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick
  2020-03-30 14:49 [PATCH 0/2] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
@ 2020-03-30 14:49 ` Douglas Anderson
  2020-03-30 14:49 ` [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time Douglas Anderson
  1 sibling, 0 replies; 15+ messages in thread
From: Douglas Anderson @ 2020-03-30 14:49 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: linux-block, groeck, paolo.valente, linux-scsi, sqazi,
	Douglas Anderson, linux-kernel

In blk_mq_dispatch_rq_list(), if blk_mq_sched_needs_restart() returns
true and the driver returns BLK_STS_RESOURCE then we'll kick the
queue.  However, there's another case where we might need to kick it.
If we are unable to get budget we can be in much the same state as
when the driver returns BLK_STS_RESOURCE, so we should treat it the
same.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---

 block/blk-mq.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d92088dec6c3..2cd8d2b49ff4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1189,6 +1189,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 	bool no_tag = false;
 	int errors, queued;
 	blk_status_t ret = BLK_STS_OK;
+	bool no_budget_avail = false;
 
 	if (list_empty(list))
 		return false;
@@ -1205,8 +1206,10 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		rq = list_first_entry(list, struct request, queuelist);
 
 		hctx = rq->mq_hctx;
-		if (!got_budget && !blk_mq_get_dispatch_budget(hctx))
+		if (!got_budget && !blk_mq_get_dispatch_budget(hctx)) {
+			no_budget_avail = true;
 			break;
+		}
 
 		if (!blk_mq_get_driver_tag(rq)) {
 			/*
@@ -1311,13 +1314,15 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		 *
 		 * If driver returns BLK_STS_RESOURCE and SCHED_RESTART
 		 * bit is set, run queue after a delay to avoid IO stalls
-		 * that could otherwise occur if the queue is idle.
+		 * that could otherwise occur if the queue is idle.  We'll do
+		 * similar if we couldn't get budget and SCHED_RESTART is set.
 		 */
 		needs_restart = blk_mq_sched_needs_restart(hctx);
 		if (!needs_restart ||
 		    (no_tag && list_empty_careful(&hctx->dispatch_wait.entry)))
 			blk_mq_run_hw_queue(hctx, true);
-		else if (needs_restart && (ret == BLK_STS_RESOURCE))
+		else if (needs_restart && (ret == BLK_STS_RESOURCE ||
+					   no_budget_avail))
 			blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
 
 		blk_mq_update_dispatch_busy(hctx, true);
-- 
2.26.0.rc2.310.g2932bb562d-goog



* [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-30 14:49 [PATCH 0/2] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
  2020-03-30 14:49 ` [PATCH 1/2] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick Douglas Anderson
@ 2020-03-30 14:49 ` Douglas Anderson
  2020-03-31  1:41   ` Ming Lei
  1 sibling, 1 reply; 15+ messages in thread
From: Douglas Anderson @ 2020-03-30 14:49 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen
  Cc: linux-block, groeck, paolo.valente, linux-scsi, sqazi,
	Douglas Anderson, linux-kernel

It is possible for two threads to be running
blk_mq_do_dispatch_sched() at the same time with the same "hctx".
This is because there can be more than one caller to
__blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
prevent more than one thread from entering.
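
For reference, the dispatch loop in blk_mq_do_dispatch_sched() looks
roughly like this (simplified here; see block/blk-mq-sched.c for the
real code):

  do {
          struct request *rq;

          if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
                  break;

          /* This is where two threads can race for the budget. */
          if (!blk_mq_get_dispatch_budget(hctx))
                  break;

          rq = e->type->ops.dispatch_request(hctx);
          if (!rq) {
                  /* Budget is given back, but nothing kicks the queue. */
                  blk_mq_put_dispatch_budget(hctx);
                  break;
          }

          list_add(&rq->queuelist, &rq_list);
  } while (blk_mq_dispatch_rq_list(q, &rq_list, true));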

If more than one thread is running blk_mq_do_dispatch_sched() at the
same time with the same "hctx", they may have contention acquiring
budget.  The blk_mq_get_dispatch_budget() can eventually translate
into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
uncommon) then only one of the two threads will be the one to
increment "device_busy" to 1 and get the budget.

The losing thread will break out of blk_mq_do_dispatch_sched() and
will stop dispatching requests.  The assumption is that when more
budget is available later (when existing transactions finish) the
queue will be kicked again, perhaps in scsi_end_request().

The winning thread now has budget and can go on to call
dispatch_request().  If dispatch_request() returns NULL here then we
have a potential problem.  Specifically we'll now call
blk_mq_put_dispatch_budget() which translates into
scsi_mq_put_budget().  That will mark the device as no longer busy but
doesn't do anything to kick the queue.  This violates the assumption
that the queue would be kicked when more budget was available.

Pictorially:

Thread A                          Thread B
================================= ==================================
blk_mq_get_dispatch_budget() => 1
dispatch_request() => NULL
                                  blk_mq_get_dispatch_budget() => 0
                                  // because Thread A marked
                                  // "device_busy" in scsi_device
blk_mq_put_dispatch_budget()

The above case was observed in reboot tests and caused a task to hang
forever waiting for IO to complete.  Traces showed that in fact two
tasks were running blk_mq_do_dispatch_sched() at the same time with
the same "hctx".  The task that got the budget did in fact see
dispatch_request() return NULL.  Both tasks returned and the system
went on for several minutes (until the hung task delay kicked in)
without the given "hctx" showing up again in traces.

Let's attempt to fix this problem by detecting budget contention.  If
we're in the SCSI code's put_budget() function and we saw that someone
else might have wanted the budget we got then we'll kick the queue.

The mechanism of kicking due to budget contention has the potential to
overcompensate and kick the queue more than strictly necessary, but it
shouldn't hurt.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---

 drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
 drivers/scsi/scsi_scan.c   |  1 +
 include/scsi/scsi_device.h |  2 ++
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 610ee41fa54c..0530da909995 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
 	rcu_read_unlock();
 }
 
+static void scsi_device_dec_busy(struct scsi_device *sdev)
+{
+	bool was_contention;
+	unsigned long flags;
+
+	spin_lock_irqsave(&sdev->budget_lock, flags);
+	atomic_dec(&sdev->device_busy);
+	was_contention = sdev->budget_contention;
+	sdev->budget_contention = false;
+	spin_unlock_irqrestore(&sdev->budget_lock, flags);
+
+	if (was_contention)
+		blk_mq_run_hw_queues(sdev->request_queue, true);
+}
+
 void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
 {
 	struct Scsi_Host *shost = sdev->host;
@@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
 	if (starget->can_queue > 0)
 		atomic_dec(&starget->target_busy);
 
-	atomic_dec(&sdev->device_busy);
+	scsi_device_dec_busy(sdev);
 }
 
 static void scsi_kick_queue(struct request_queue *q)
@@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;
 
-	atomic_dec(&sdev->device_busy);
+	scsi_device_dec_busy(sdev);
 }
 
 static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;
+	unsigned long flags;
 
-	if (scsi_dev_queue_ready(q, sdev))
+	spin_lock_irqsave(&sdev->budget_lock, flags);
+	if (scsi_dev_queue_ready(q, sdev)) {
+		spin_unlock_irqrestore(&sdev->budget_lock, flags);
 		return true;
+	}
+	sdev->budget_contention = true;
+	spin_unlock_irqrestore(&sdev->budget_lock, flags);
 
 	if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev))
 		blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY);
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 058079f915f1..72f7b6faed9b 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -240,6 +240,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
 	INIT_LIST_HEAD(&sdev->starved_entry);
 	INIT_LIST_HEAD(&sdev->event_list);
 	spin_lock_init(&sdev->list_lock);
+	spin_lock_init(&sdev->budget_lock);
 	mutex_init(&sdev->inquiry_mutex);
 	INIT_WORK(&sdev->event_work, scsi_evt_thread);
 	INIT_WORK(&sdev->requeue_work, scsi_requeue_run_queue);
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index f8312a3e5b42..3c5e0f0c8a91 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -106,6 +106,8 @@ struct scsi_device {
 	struct list_head    siblings;   /* list of all devices on this host */
 	struct list_head    same_target_siblings; /* just the devices sharing same target id */
 
+	spinlock_t budget_lock;		/* For device_busy and budget_contention */
+	bool budget_contention;		/* Someone couldn't get budget */
 	atomic_t device_busy;		/* commands actually active on LLDD */
 	atomic_t device_blocked;	/* Device returned QUEUE_FULL. */
 
-- 
2.26.0.rc2.310.g2932bb562d-goog



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-30 14:49 ` [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time Douglas Anderson
@ 2020-03-31  1:41   ` Ming Lei
  2020-03-31  2:15     ` Doug Anderson
  2020-03-31 18:07     ` Paolo Valente
  0 siblings, 2 replies; 15+ messages in thread
From: Ming Lei @ 2020-03-31  1:41 UTC (permalink / raw)
  To: Douglas Anderson
  Cc: axboe, jejb, martin.petersen, linux-block, groeck, paolo.valente,
	linux-scsi, sqazi, linux-kernel

On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> It is possible for two threads to be running
> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> This is because there can be more than one caller to
> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> prevent more than one thread from entering.
> 
> If more than one thread is running blk_mq_do_dispatch_sched() at the
> same time with the same "hctx", they may have contention acquiring
> budget.  The blk_mq_get_dispatch_budget() can eventually translate
> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
> uncommon) then only one of the two threads will be the one to
> increment "device_busy" to 1 and get the budget.
> 
> The losing thread will break out of blk_mq_do_dispatch_sched() and
> will stop dispatching requests.  The assumption is that when more
> budget is available later (when existing transactions finish) the
> queue will be kicked again, perhaps in scsi_end_request().
> 
> The winning thread now has budget and can go on to call
> dispatch_request().  If dispatch_request() returns NULL here then we
> have a potential problem.  Specifically we'll now call

I guess this problem should be BFQ specific. Now there is definitely
requests in BFQ queue wrt. this hctx. However, looks this request is
only available from another loser thread, and it won't be retrieved in
the winning thread via e->type->ops.dispatch_request().

Just wondering why BFQ is implemented in this way?

> blk_mq_put_dispatch_budget() which translates into
> scsi_mq_put_budget().  That will mark the device as no longer busy but
> doesn't do anything to kick the queue.  This violates the assumption
> that the queue would be kicked when more budget was available.
> 
> Pictorially:
> 
> Thread A                          Thread B
> ================================= ==================================
> blk_mq_get_dispatch_budget() => 1
> dispatch_request() => NULL
>                                   blk_mq_get_dispatch_budget() => 0
>                                   // because Thread A marked
>                                   // "device_busy" in scsi_device
> blk_mq_put_dispatch_budget()
> 
> The above case was observed in reboot tests and caused a task to hang
> forever waiting for IO to complete.  Traces showed that in fact two
> tasks were running blk_mq_do_dispatch_sched() at the same time with
> the same "hctx".  The task that got the budget did in fact see
> dispatch_request() return NULL.  Both tasks returned and the system
> went on for several minutes (until the hung task delay kicked in)
> without the given "hctx" showing up again in traces.
> 
> Let's attempt to fix this problem by detecting budget contention.  If
> we're in the SCSI code's put_budget() function and we saw that someone
> else might have wanted the budget we got then we'll kick the queue.
> 
> The mechanism of kicking due to budget contention has the potential to
> overcompensate and kick the queue more than strictly necessary, but it
> shouldn't hurt.
> 
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---
> 
>  drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
>  drivers/scsi/scsi_scan.c   |  1 +
>  include/scsi/scsi_device.h |  2 ++
>  3 files changed, 27 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 610ee41fa54c..0530da909995 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
>  	rcu_read_unlock();
>  }
>  
> +static void scsi_device_dec_busy(struct scsi_device *sdev)
> +{
> +	bool was_contention;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&sdev->budget_lock, flags);
> +	atomic_dec(&sdev->device_busy);
> +	was_contention = sdev->budget_contention;
> +	sdev->budget_contention = false;
> +	spin_unlock_irqrestore(&sdev->budget_lock, flags);
> +
> +	if (was_contention)
> +		blk_mq_run_hw_queues(sdev->request_queue, true);
> +}
> +
>  void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
>  {
>  	struct Scsi_Host *shost = sdev->host;
> @@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
>  	if (starget->can_queue > 0)
>  		atomic_dec(&starget->target_busy);
>  
> -	atomic_dec(&sdev->device_busy);
> +	scsi_device_dec_busy(sdev);
>  }
>  
>  static void scsi_kick_queue(struct request_queue *q)
> @@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
>  	struct request_queue *q = hctx->queue;
>  	struct scsi_device *sdev = q->queuedata;
>  
> -	atomic_dec(&sdev->device_busy);
> +	scsi_device_dec_busy(sdev);
>  }
>  
>  static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
>  {
>  	struct request_queue *q = hctx->queue;
>  	struct scsi_device *sdev = q->queuedata;
> +	unsigned long flags;
>  
> -	if (scsi_dev_queue_ready(q, sdev))
> +	spin_lock_irqsave(&sdev->budget_lock, flags);
> +	if (scsi_dev_queue_ready(q, sdev)) {
> +		spin_unlock_irqrestore(&sdev->budget_lock, flags);
>  		return true;
> +	}
> +	sdev->budget_contention = true;
> +	spin_unlock_irqrestore(&sdev->budget_lock, flags);

No, it really hurts performance by adding one per-sdev spinlock in fast path,
and we actually tried to kill the atomic variable of 'sdev->device_busy'
for high performance HBA.

Thanks,
Ming



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31  1:41   ` Ming Lei
@ 2020-03-31  2:15     ` Doug Anderson
  2020-03-31  2:58       ` Ming Lei
  2020-03-31 18:07     ` Paolo Valente
  1 sibling, 1 reply; 15+ messages in thread
From: Doug Anderson @ 2020-03-31  2:15 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, James E.J. Bottomley, Martin K. Petersen,
	linux-block, Guenter Roeck, Paolo Valente, linux-scsi,
	Salman Qazi, LKML

Hi,

On Mon, Mar 30, 2020 at 6:41 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> > It is possible for two threads to be running
> > blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> > This is because there can be more than one caller to
> > __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> > prevent more than one thread from entering.
> >
> > If more than one thread is running blk_mq_do_dispatch_sched() at the
> > same time with the same "hctx", they may have contention acquiring
> > budget.  The blk_mq_get_dispatch_budget() can eventually translate
> > into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
> > uncommon) then only one of the two threads will be the one to
> > increment "device_busy" to 1 and get the budget.
> >
> > The losing thread will break out of blk_mq_do_dispatch_sched() and
> > will stop dispatching requests.  The assumption is that when more
> > budget is available later (when existing transactions finish) the
> > queue will be kicked again, perhaps in scsi_end_request().
> >
> > The winning thread now has budget and can go on to call
> > dispatch_request().  If dispatch_request() returns NULL here then we
> > have a potential problem.  Specifically we'll now call
>
> I guess this problem should be BFQ specific. Now there is definitely
> requests in BFQ queue wrt. this hctx. However, looks this request is
> only available from another loser thread, and it won't be retrieved in
> the winning thread via e->type->ops.dispatch_request().
>
> Just wondering why BFQ is implemented in this way?

Paolo can maybe comment why.

...but even if BFQ wanted to try to change this, I think it's
impossible to fully close the race.  There is no locking between the
call to has_work() and dispatch_request() and there can be two (or
more) threads running the code at the same time.  Without some type of
locking I think it will always be possible for dispatch_request() to
return NULL.  Are we OK with code that works most of the time but
still has a race?  ...or did I misunderstand how this all works?


> > blk_mq_put_dispatch_budget() which translates into
> > scsi_mq_put_budget().  That will mark the device as no longer busy but
> > doesn't do anything to kick the queue.  This violates the assumption
> > that the queue would be kicked when more budget was available.
> >
> > Pictorially:
> >
> > Thread A                          Thread B
> > ================================= ==================================
> > blk_mq_get_dispatch_budget() => 1
> > dispatch_request() => NULL
> >                                   blk_mq_get_dispatch_budget() => 0
> >                                   // because Thread A marked
> >                                   // "device_busy" in scsi_device
> > blk_mq_put_dispatch_budget()
> >
> > The above case was observed in reboot tests and caused a task to hang
> > forever waiting for IO to complete.  Traces showed that in fact two
> > tasks were running blk_mq_do_dispatch_sched() at the same time with
> > the same "hctx".  The task that got the budget did in fact see
> > dispatch_request() return NULL.  Both tasks returned and the system
> > went on for several minutes (until the hung task delay kicked in)
> > without the given "hctx" showing up again in traces.
> >
> > Let's attempt to fix this problem by detecting budget contention.  If
> > we're in the SCSI code's put_budget() function and we saw that someone
> > else might have wanted the budget we got then we'll kick the queue.
> >
> > The mechanism of kicking due to budget contention has the potential to
> > overcompensate and kick the queue more than strictly necessary, but it
> > shouldn't hurt.
> >
> > Signed-off-by: Douglas Anderson <dianders@chromium.org>
> > ---
> >
> >  drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
> >  drivers/scsi/scsi_scan.c   |  1 +
> >  include/scsi/scsi_device.h |  2 ++
> >  3 files changed, 27 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index 610ee41fa54c..0530da909995 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
> >       rcu_read_unlock();
> >  }
> >
> > +static void scsi_device_dec_busy(struct scsi_device *sdev)
> > +{
> > +     bool was_contention;
> > +     unsigned long flags;
> > +
> > +     spin_lock_irqsave(&sdev->budget_lock, flags);
> > +     atomic_dec(&sdev->device_busy);
> > +     was_contention = sdev->budget_contention;
> > +     sdev->budget_contention = false;
> > +     spin_unlock_irqrestore(&sdev->budget_lock, flags);
> > +
> > +     if (was_contention)
> > +             blk_mq_run_hw_queues(sdev->request_queue, true);
> > +}
> > +
> >  void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> >  {
> >       struct Scsi_Host *shost = sdev->host;
> > @@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> >       if (starget->can_queue > 0)
> >               atomic_dec(&starget->target_busy);
> >
> > -     atomic_dec(&sdev->device_busy);
> > +     scsi_device_dec_busy(sdev);
> >  }
> >
> >  static void scsi_kick_queue(struct request_queue *q)
> > @@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
> >       struct request_queue *q = hctx->queue;
> >       struct scsi_device *sdev = q->queuedata;
> >
> > -     atomic_dec(&sdev->device_busy);
> > +     scsi_device_dec_busy(sdev);
> >  }
> >
> >  static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
> >  {
> >       struct request_queue *q = hctx->queue;
> >       struct scsi_device *sdev = q->queuedata;
> > +     unsigned long flags;
> >
> > -     if (scsi_dev_queue_ready(q, sdev))
> > +     spin_lock_irqsave(&sdev->budget_lock, flags);
> > +     if (scsi_dev_queue_ready(q, sdev)) {
> > +             spin_unlock_irqrestore(&sdev->budget_lock, flags);
> >               return true;
> > +     }
> > +     sdev->budget_contention = true;
> > +     spin_unlock_irqrestore(&sdev->budget_lock, flags);
>
> No, it really hurts performance by adding one per-sdev spinlock in fast path,
> and we actually tried to kill the atomic variable of 'sdev->device_busy'
> for high performance HBA.

It might be slow, but correctness trumps speed, right?  I tried to do
this with a 2nd atomic and without the spinlock but I kept having a
hole one way or the other.  I ended up just trying to keep the
spinlock section as small as possible.

If you know of a way to get rid of the spinlock that still makes the
code correct, I'd be super interested!  :-)  I certainly won't claim
that it's impossible to do, only that I didn't manage to come up with
a way.

-Doug


* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31  2:15     ` Doug Anderson
@ 2020-03-31  2:58       ` Ming Lei
  2020-03-31  4:05         ` Doug Anderson
  0 siblings, 1 reply; 15+ messages in thread
From: Ming Lei @ 2020-03-31  2:58 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Jens Axboe, James E.J. Bottomley, Martin K. Petersen,
	linux-block, Guenter Roeck, Paolo Valente, linux-scsi,
	Salman Qazi, LKML

On Mon, Mar 30, 2020 at 07:15:54PM -0700, Doug Anderson wrote:
> Hi,
> 
> On Mon, Mar 30, 2020 at 6:41 PM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> > > It is possible for two threads to be running
> > > blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> > > This is because there can be more than one caller to
> > > __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> > > prevent more than one thread from entering.
> > >
> > > If more than one thread is running blk_mq_do_dispatch_sched() at the
> > > same time with the same "hctx", they may have contention acquiring
> > > budget.  The blk_mq_get_dispatch_budget() can eventually translate
> > > into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
> > > uncommon) then only one of the two threads will be the one to
> > > increment "device_busy" to 1 and get the budget.
> > >
> > > The losing thread will break out of blk_mq_do_dispatch_sched() and
> > > will stop dispatching requests.  The assumption is that when more
> > > budget is available later (when existing transactions finish) the
> > > queue will be kicked again, perhaps in scsi_end_request().
> > >
> > > The winning thread now has budget and can go on to call
> > > dispatch_request().  If dispatch_request() returns NULL here then we
> > > have a potential problem.  Specifically we'll now call
> >
> > I guess this problem should be BFQ specific. Now there is definitely
> > requests in BFQ queue wrt. this hctx. However, looks this request is
> > only available from another loser thread, and it won't be retrieved in
> > the winning thread via e->type->ops.dispatch_request().
> >
> > Just wondering why BFQ is implemented in this way?
> 
> Paolo can maybe comment why.
> 
> ...but even if BFQ wanted to try to change this, I think it's
> impossible to fully close the race.  There is no locking between the
> call to has_work() and dispatch_request() and there can be two (or
> more) threads running the code at the same time.  Without some type of
> locking I think it will always be possible for dispatch_request() to
> return NULL.  Are we OK with code that works most of the time but
> still has a race?  ...or did I misunderstand how this all works?

Wrt. dispatching requests from hctx->dispatch, there is really one
race given scsi's run queue from scsi_end_request() may not see
that request. Looks that is what the patch 1 is addressing.

However, for this issue, there isn't race, given when we get budget,
the request isn't dequeued from BFQ yet. If budget is assigned
successfully, either the request is dispatched to LLD successfully,
or STS_RESOURCE is triggered, or running out of driver tag, run queue
is guaranteed to be started for handling another dispatch path 
which running out of budget.

That is why I raise the question why BFQ dispatches request in this way.

> 
> 
> > > blk_mq_put_dispatch_budget() which translates into
> > > scsi_mq_put_budget().  That will mark the device as no longer busy but
> > > doesn't do anything to kick the queue.  This violates the assumption
> > > that the queue would be kicked when more budget was available.
> > >
> > > Pictorially:
> > >
> > > Thread A                          Thread B
> > > ================================= ==================================
> > > blk_mq_get_dispatch_budget() => 1
> > > dispatch_request() => NULL
> > >                                   blk_mq_get_dispatch_budget() => 0
> > >                                   // because Thread A marked
> > >                                   // "device_busy" in scsi_device
> > > blk_mq_put_dispatch_budget()
> > >
> > > The above case was observed in reboot tests and caused a task to hang
> > > forever waiting for IO to complete.  Traces showed that in fact two
> > > tasks were running blk_mq_do_dispatch_sched() at the same time with
> > > the same "hctx".  The task that got the budget did in fact see
> > > dispatch_request() return NULL.  Both tasks returned and the system
> > > went on for several minutes (until the hung task delay kicked in)
> > > without the given "hctx" showing up again in traces.
> > >
> > > Let's attempt to fix this problem by detecting budget contention.  If
> > > we're in the SCSI code's put_budget() function and we saw that someone
> > > else might have wanted the budget we got then we'll kick the queue.
> > >
> > > The mechanism of kicking due to budget contention has the potential to
> > > overcompensate and kick the queue more than strictly necessary, but it
> > > shouldn't hurt.
> > >
> > > Signed-off-by: Douglas Anderson <dianders@chromium.org>
> > > ---
> > >
> > >  drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
> > >  drivers/scsi/scsi_scan.c   |  1 +
> > >  include/scsi/scsi_device.h |  2 ++
> > >  3 files changed, 27 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > > index 610ee41fa54c..0530da909995 100644
> > > --- a/drivers/scsi/scsi_lib.c
> > > +++ b/drivers/scsi/scsi_lib.c
> > > @@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
> > >       rcu_read_unlock();
> > >  }
> > >
> > > +static void scsi_device_dec_busy(struct scsi_device *sdev)
> > > +{
> > > +     bool was_contention;
> > > +     unsigned long flags;
> > > +
> > > +     spin_lock_irqsave(&sdev->budget_lock, flags);
> > > +     atomic_dec(&sdev->device_busy);
> > > +     was_contention = sdev->budget_contention;
> > > +     sdev->budget_contention = false;
> > > +     spin_unlock_irqrestore(&sdev->budget_lock, flags);
> > > +
> > > +     if (was_contention)
> > > +             blk_mq_run_hw_queues(sdev->request_queue, true);
> > > +}
> > > +
> > >  void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> > >  {
> > >       struct Scsi_Host *shost = sdev->host;
> > > @@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> > >       if (starget->can_queue > 0)
> > >               atomic_dec(&starget->target_busy);
> > >
> > > -     atomic_dec(&sdev->device_busy);
> > > +     scsi_device_dec_busy(sdev);
> > >  }
> > >
> > >  static void scsi_kick_queue(struct request_queue *q)
> > > @@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
> > >       struct request_queue *q = hctx->queue;
> > >       struct scsi_device *sdev = q->queuedata;
> > >
> > > -     atomic_dec(&sdev->device_busy);
> > > +     scsi_device_dec_busy(sdev);
> > >  }
> > >
> > >  static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
> > >  {
> > >       struct request_queue *q = hctx->queue;
> > >       struct scsi_device *sdev = q->queuedata;
> > > +     unsigned long flags;
> > >
> > > -     if (scsi_dev_queue_ready(q, sdev))
> > > +     spin_lock_irqsave(&sdev->budget_lock, flags);
> > > +     if (scsi_dev_queue_ready(q, sdev)) {
> > > +             spin_unlock_irqrestore(&sdev->budget_lock, flags);
> > >               return true;
> > > +     }
> > > +     sdev->budget_contention = true;
> > > +     spin_unlock_irqrestore(&sdev->budget_lock, flags);
> >
> > No, it really hurts performance by adding one per-sdev spinlock in fast path,
> > and we actually tried to kill the atomic variable of 'sdev->device_busy'
> > for high performance HBA.
> 
> It might be slow, but correctness trumps speed, right?  I tried to do

Correctness doesn't have to cause performance regression, does it?

> this with a 2nd atomic and without the spinlock but I kept having a
> hole one way or the other.  I ended up just trying to keep the
> spinlock section as small as possible.
> 
> If you know of a way to get rid of the spinlock that still makes the
> code correct, I'd be super interested!  :-)  I certainly won't claim
> that it's impossible to do, only that I didn't manage to come up with
> a way.

As I mentioned, if BFQ doesn't dispatch request in this special way,
there isn't such race.

Thanks,
Ming



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31  2:58       ` Ming Lei
@ 2020-03-31  4:05         ` Doug Anderson
  0 siblings, 0 replies; 15+ messages in thread
From: Doug Anderson @ 2020-03-31  4:05 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, James E.J. Bottomley, Martin K. Petersen,
	linux-block, Guenter Roeck, Paolo Valente, linux-scsi,
	Salman Qazi, LKML

Hi,

On Mon, Mar 30, 2020 at 7:58 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Mon, Mar 30, 2020 at 07:15:54PM -0700, Doug Anderson wrote:
> > Hi,
> >
> > On Mon, Mar 30, 2020 at 6:41 PM Ming Lei <ming.lei@redhat.com> wrote:
> > >
> > > On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> > > > It is possible for two threads to be running
> > > > blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> > > > This is because there can be more than one caller to
> > > > __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> > > > prevent more than one thread from entering.
> > > >
> > > > If more than one thread is running blk_mq_do_dispatch_sched() at the
> > > > same time with the same "hctx", they may have contention acquiring
> > > > budget.  The blk_mq_get_dispatch_budget() can eventually translate
> > > > into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
> > > > uncommon) then only one of the two threads will be the one to
> > > > increment "device_busy" to 1 and get the budget.
> > > >
> > > > The losing thread will break out of blk_mq_do_dispatch_sched() and
> > > > will stop dispatching requests.  The assumption is that when more
> > > > budget is available later (when existing transactions finish) the
> > > > queue will be kicked again, perhaps in scsi_end_request().
> > > >
> > > > The winning thread now has budget and can go on to call
> > > > dispatch_request().  If dispatch_request() returns NULL here then we
> > > > have a potential problem.  Specifically we'll now call
> > >
> > > I guess this problem should be BFQ specific. Now there is definitely
> > > requests in BFQ queue wrt. this hctx. However, looks this request is
> > > only available from another loser thread, and it won't be retrieved in
> > > the winning thread via e->type->ops.dispatch_request().
> > >
> > > Just wondering why BFQ is implemented in this way?
> >
> > Paolo can maybe comment why.
> >
> > ...but even if BFQ wanted to try to change this, I think it's
> > impossible to fully close the race.  There is no locking between the
> > call to has_work() and dispatch_request() and there can be two (or
> > more) threads running the code at the same time.  Without some type of
> > locking I think it will always be possible for dispatch_request() to
> > return NULL.  Are we OK with code that works most of the time but
> > still has a race?  ...or did I misunderstand how this all works?
>
> Wrt. dispatching requests from hctx->dispatch, there is really one
> race given scsi's run queue from scsi_end_request() may not see
> that request. Looks that is what the patch 1 is addressing.

OK, at least I got something right.  ;-)


> However, for this issue, there isn't race, given when we get budget,
> the request isn't dequeued from BFQ yet. If budget is assigned
> successfully, either the request is dispatched to LLD successfully,
> or STS_RESOURCE is triggered, or running out of driver tag, run queue
> is guaranteed to be started for handling another dispatch path
> which running out of budget.
>
> That is why I raise the question why BFQ dispatches request in this way.

Ah, I _think_ I see what you mean.  So there should be no race because
the "has_work" is just a hint?  It's assumed that whichever task gets
the budget will be able to dispatch all the work that's there.  Is
that right?


> > > > blk_mq_put_dispatch_budget() which translates into
> > > > scsi_mq_put_budget().  That will mark the device as no longer busy but
> > > > doesn't do anything to kick the queue.  This violates the assumption
> > > > that the queue would be kicked when more budget was available.
> > > >
> > > > Pictorially:
> > > >
> > > > Thread A                          Thread B
> > > > ================================= ==================================
> > > > blk_mq_get_dispatch_budget() => 1
> > > > dispatch_request() => NULL
> > > >                                   blk_mq_get_dispatch_budget() => 0
> > > >                                   // because Thread A marked
> > > >                                   // "device_busy" in scsi_device
> > > > blk_mq_put_dispatch_budget()
> > > >
> > > > The above case was observed in reboot tests and caused a task to hang
> > > > forever waiting for IO to complete.  Traces showed that in fact two
> > > > tasks were running blk_mq_do_dispatch_sched() at the same time with
> > > > the same "hctx".  The task that got the budget did in fact see
> > > > dispatch_request() return NULL.  Both tasks returned and the system
> > > > went on for several minutes (until the hung task delay kicked in)
> > > > without the given "hctx" showing up again in traces.
> > > >
> > > > Let's attempt to fix this problem by detecting budget contention.  If
> > > > we're in the SCSI code's put_budget() function and we saw that someone
> > > > else might have wanted the budget we got then we'll kick the queue.
> > > >
> > > > The mechanism of kicking due to budget contention has the potential to
> > > > overcompensate and kick the queue more than strictly necessary, but it
> > > > shouldn't hurt.
> > > >
> > > > Signed-off-by: Douglas Anderson <dianders@chromium.org>
> > > > ---
> > > >
> > > >  drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
> > > >  drivers/scsi/scsi_scan.c   |  1 +
> > > >  include/scsi/scsi_device.h |  2 ++
> > > >  3 files changed, 27 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > > > index 610ee41fa54c..0530da909995 100644
> > > > --- a/drivers/scsi/scsi_lib.c
> > > > +++ b/drivers/scsi/scsi_lib.c
> > > > @@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
> > > >       rcu_read_unlock();
> > > >  }
> > > >
> > > > +static void scsi_device_dec_busy(struct scsi_device *sdev)
> > > > +{
> > > > +     bool was_contention;
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&sdev->budget_lock, flags);
> > > > +     atomic_dec(&sdev->device_busy);
> > > > +     was_contention = sdev->budget_contention;
> > > > +     sdev->budget_contention = false;
> > > > +     spin_unlock_irqrestore(&sdev->budget_lock, flags);
> > > > +
> > > > +     if (was_contention)
> > > > +             blk_mq_run_hw_queues(sdev->request_queue, true);
> > > > +}
> > > > +
> > > >  void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> > > >  {
> > > >       struct Scsi_Host *shost = sdev->host;
> > > > @@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> > > >       if (starget->can_queue > 0)
> > > >               atomic_dec(&starget->target_busy);
> > > >
> > > > -     atomic_dec(&sdev->device_busy);
> > > > +     scsi_device_dec_busy(sdev);
> > > >  }
> > > >
> > > >  static void scsi_kick_queue(struct request_queue *q)
> > > > @@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
> > > >       struct request_queue *q = hctx->queue;
> > > >       struct scsi_device *sdev = q->queuedata;
> > > >
> > > > -     atomic_dec(&sdev->device_busy);
> > > > +     scsi_device_dec_busy(sdev);
> > > >  }
> > > >
> > > >  static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
> > > >  {
> > > >       struct request_queue *q = hctx->queue;
> > > >       struct scsi_device *sdev = q->queuedata;
> > > > +     unsigned long flags;
> > > >
> > > > -     if (scsi_dev_queue_ready(q, sdev))
> > > > +     spin_lock_irqsave(&sdev->budget_lock, flags);
> > > > +     if (scsi_dev_queue_ready(q, sdev)) {
> > > > +             spin_unlock_irqrestore(&sdev->budget_lock, flags);
> > > >               return true;
> > > > +     }
> > > > +     sdev->budget_contention = true;
> > > > +     spin_unlock_irqrestore(&sdev->budget_lock, flags);
> > >
> > > No, it really hurts performance by adding one per-sdev spinlock in fast path,
> > > and we actually tried to kill the atomic variable of 'sdev->device_busy'
> > > for high performance HBA.
> >
> > It might be slow, but correctness trumps speed, right?  I tried to do
>
> Correctness doesn't have to cause performance regression, does it?

I guess what I'm saying is that if there is a choice between the two
we have to choose correctness.  If there is a bug and we don't know of
any way to fix it other than with a fix that regresses performance
then we have to regress performance.  I wasn't able to find a way to
fix the bug (as I understood it) without regressing performance, but
I'd be happy if someone else could come up with a way.


> > this with a 2nd atomic and without the spinlock but I kept having a
> > hole one way or the other.  I ended up just trying to keep the
> > spinlock section as small as possible.
> >
> > If you know of a way to get rid of the spinlock that still makes the
> > code correct, I'd be super interested!  :-)  I certainly won't claim
> > that it's impossible to do, only that I didn't manage to come up with
> > a way.
>
> As I mentioned, if BFQ doesn't dispatch request in this special way,
> there isn't such race.

OK, so I guess this puts it in Paolo's court then.  I'm about done for
the evening, but maybe he can comment on it or come up with a fix?

-Doug


* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31  1:41   ` Ming Lei
  2020-03-31  2:15     ` Doug Anderson
@ 2020-03-31 18:07     ` Paolo Valente
  2020-03-31 18:26       ` Jens Axboe
  1 sibling, 1 reply; 15+ messages in thread
From: Paolo Valente @ 2020-03-31 18:07 UTC (permalink / raw)
  To: Ming Lei
  Cc: Douglas Anderson, Jens Axboe, jejb, Martin K. Petersen,
	linux-block, Guenter Roeck, linux-scsi, sqazi, linux-kernel



> Il giorno 31 mar 2020, alle ore 03:41, Ming Lei <ming.lei@redhat.com> ha scritto:
> 
> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>> It is possible for two threads to be running
>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
>> This is because there can be more than one caller to
>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
>> prevent more than one thread from entering.
>> 
>> If more than one thread is running blk_mq_do_dispatch_sched() at the
>> same time with the same "hctx", they may have contention acquiring
>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
>> uncommon) then only one of the two threads will be the one to
>> increment "device_busy" to 1 and get the budget.
>> 
>> The losing thread will break out of blk_mq_do_dispatch_sched() and
>> will stop dispatching requests.  The assumption is that when more
>> budget is available later (when existing transactions finish) the
>> queue will be kicked again, perhaps in scsi_end_request().
>> 
>> The winning thread now has budget and can go on to call
>> dispatch_request().  If dispatch_request() returns NULL here then we
>> have a potential problem.  Specifically we'll now call
> 
> I guess this problem should be BFQ specific. Now there is definitely
> requests in BFQ queue wrt. this hctx. However, looks this request is
> only available from another loser thread, and it won't be retrieved in
> the winning thread via e->type->ops.dispatch_request().
> 
> Just wondering why BFQ is implemented in this way?
> 

BFQ inherited this powerful non-working scheme from CFQ, some age ago.

In more detail: if BFQ has at least one non-empty internal queue, then
it says of course that there is work to do.  But if the currently
in-service queue is empty, and is expected to receive new I/O, then
BFQ plugs I/O dispatch to enforce service guarantees for the
in-service queue, i.e., BFQ responds NULL to a dispatch request.

It would be very easy to change bfq_has_work so that it returns false
in case the in-service queue is empty, even if there is I/O
backlogged.  My only concern is: since everything has worked with the
current scheme for probably 15 years, are we sure that everything is
still ok after we change this scheme?
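
Just to make the idea concrete, I mean something along these lines
(completely untested, and the field/helper names below are from
memory, so take it only as a sketch of the idea):

static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
{
	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
	struct bfq_queue *bfqq = bfqd->in_service_queue;

	if (!list_empty_careful(&bfqd->dispatch))
		return true;

	/*
	 * If the in-service queue is empty, dispatch is plugged anyway
	 * to preserve its service guarantees, so report no work even
	 * if other queues have backlogged I/O.
	 */
	if (bfqq && RB_EMPTY_ROOT(&bfqq->sort_list))
		return false;

	return bfq_tot_busy_queues(bfqd) > 0;
}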

I'm confident it would be ok, because a timer fires if the in-service
queue does not receive any I/O for too long, and the handler of the
timer invokes blk_mq_run_hw_queues().

Looking forward to your feedback before proposing a change,
Paolo

>> blk_mq_put_dispatch_budget() which translates into
>> scsi_mq_put_budget().  That will mark the device as no longer busy but
>> doesn't do anything to kick the queue.  This violates the assumption
>> that the queue would be kicked when more budget was available.
>> 
>> Pictorially:
>> 
>> Thread A                          Thread B
>> ================================= ==================================
>> blk_mq_get_dispatch_budget() => 1
>> dispatch_request() => NULL
>>                                  blk_mq_get_dispatch_budget() => 0
>>                                  // because Thread A marked
>>                                  // "device_busy" in scsi_device
>> blk_mq_put_dispatch_budget()
>> 
>> The above case was observed in reboot tests and caused a task to hang
>> forever waiting for IO to complete.  Traces showed that in fact two
>> tasks were running blk_mq_do_dispatch_sched() at the same time with
>> the same "hctx".  The task that got the budget did in fact see
>> dispatch_request() return NULL.  Both tasks returned and the system
>> went on for several minutes (until the hung task delay kicked in)
>> without the given "hctx" showing up again in traces.
>> 
>> Let's attempt to fix this problem by detecting budget contention.  If
>> we're in the SCSI code's put_budget() function and we saw that someone
>> else might have wanted the budget we got then we'll kick the queue.
>> 
>> The mechanism of kicking due to budget contention has the potential to
>> overcompensate and kick the queue more than strictly necessary, but it
>> shouldn't hurt.
>> 
>> Signed-off-by: Douglas Anderson <dianders@chromium.org>
>> ---
>> 
>> drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
>> drivers/scsi/scsi_scan.c   |  1 +
>> include/scsi/scsi_device.h |  2 ++
>> 3 files changed, 27 insertions(+), 3 deletions(-)
>> 
>> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
>> index 610ee41fa54c..0530da909995 100644
>> --- a/drivers/scsi/scsi_lib.c
>> +++ b/drivers/scsi/scsi_lib.c
>> @@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
>> 	rcu_read_unlock();
>> }
>> 
>> +static void scsi_device_dec_busy(struct scsi_device *sdev)
>> +{
>> +	bool was_contention;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&sdev->budget_lock, flags);
>> +	atomic_dec(&sdev->device_busy);
>> +	was_contention = sdev->budget_contention;
>> +	sdev->budget_contention = false;
>> +	spin_unlock_irqrestore(&sdev->budget_lock, flags);
>> +
>> +	if (was_contention)
>> +		blk_mq_run_hw_queues(sdev->request_queue, true);
>> +}
>> +
>> void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
>> {
>> 	struct Scsi_Host *shost = sdev->host;
>> @@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
>> 	if (starget->can_queue > 0)
>> 		atomic_dec(&starget->target_busy);
>> 
>> -	atomic_dec(&sdev->device_busy);
>> +	scsi_device_dec_busy(sdev);
>> }
>> 
>> static void scsi_kick_queue(struct request_queue *q)
>> @@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
>> 	struct request_queue *q = hctx->queue;
>> 	struct scsi_device *sdev = q->queuedata;
>> 
>> -	atomic_dec(&sdev->device_busy);
>> +	scsi_device_dec_busy(sdev);
>> }
>> 
>> static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
>> {
>> 	struct request_queue *q = hctx->queue;
>> 	struct scsi_device *sdev = q->queuedata;
>> +	unsigned long flags;
>> 
>> -	if (scsi_dev_queue_ready(q, sdev))
>> +	spin_lock_irqsave(&sdev->budget_lock, flags);
>> +	if (scsi_dev_queue_ready(q, sdev)) {
>> +		spin_unlock_irqrestore(&sdev->budget_lock, flags);
>> 		return true;
>> +	}
>> +	sdev->budget_contention = true;
>> +	spin_unlock_irqrestore(&sdev->budget_lock, flags);
> 
> No, it really hurts performance by adding one per-sdev spinlock in fast path,
> and we actually tried to kill the atomic variable of 'sdev->device_busy'
> for high performance HBA.
> 
> Thanks,
> Ming



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31 18:07     ` Paolo Valente
@ 2020-03-31 18:26       ` Jens Axboe
  2020-03-31 23:51         ` Doug Anderson
  0 siblings, 1 reply; 15+ messages in thread
From: Jens Axboe @ 2020-03-31 18:26 UTC (permalink / raw)
  To: Paolo Valente, Ming Lei
  Cc: Douglas Anderson, jejb, Martin K. Petersen, linux-block,
	Guenter Roeck, linux-scsi, sqazi, linux-kernel

On 3/31/20 12:07 PM, Paolo Valente wrote:
>> Il giorno 31 mar 2020, alle ore 03:41, Ming Lei <ming.lei@redhat.com> ha scritto:
>>
>> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>>> It is possible for two threads to be running
>>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
>>> This is because there can be more than one caller to
>>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
>>> prevent more than one thread from entering.
>>>
>>> If more than one thread is running blk_mq_do_dispatch_sched() at the
>>> same time with the same "hctx", they may have contention acquiring
>>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
>>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
>>> uncommon) then only one of the two threads will be the one to
>>> increment "device_busy" to 1 and get the budget.
>>>
>>> The losing thread will break out of blk_mq_do_dispatch_sched() and
>>> will stop dispatching requests.  The assumption is that when more
>>> budget is available later (when existing transactions finish) the
>>> queue will be kicked again, perhaps in scsi_end_request().
>>>
>>> The winning thread now has budget and can go on to call
>>> dispatch_request().  If dispatch_request() returns NULL here then we
>>> have a potential problem.  Specifically we'll now call
>>
>> I guess this problem should be BFQ specific. Now there is definitely
>> requests in BFQ queue wrt. this hctx. However, looks this request is
>> only available from another loser thread, and it won't be retrieved in
>> the winning thread via e->type->ops.dispatch_request().
>>
>> Just wondering why BFQ is implemented in this way?
>>
> 
> BFQ inherited this powerful non-working scheme from CFQ, some age ago.
> 
> In more detail: if BFQ has at least one non-empty internal queue, then
> it says of course that there is work to do.  But if the currently
> in-service queue is empty, and is expected to receive new I/O, then
> BFQ plugs I/O dispatch to enforce service guarantees for the
> in-service queue, i.e., BFQ responds NULL to a dispatch request.

What BFQ is doing is fine, IFF it always ensures that the queue is run
at some later time, if it returns "yep I have work" yet returns NULL
when attempting to retrieve that work. Generally this should happen from
subsequent IO completion, or whatever else condition will resolve the
issue that is currently preventing dispatch of that request. Last resort
would be a timer, but that can happen if you're slicing your scheduling
somehow.

> It would be very easy to change bfq_has_work so that it returns false
> in case the in-service queue is empty, even if there is I/O
> backlogged.  My only concern is: since everything has worked with the
> current scheme for probably 15 years, are we sure that everything is
> still ok after we change this scheme?

You're comparing apples to oranges, CFQ never worked within the blk-mq
scheduling framework.

That said, I don't think such a change is needed. If we currently have a
hang due to this discrepancy between has_work and gets_work, then it
sounds like we're not always re-running the queue as we should. From the
original patch, the budget putting is not something the scheduler is
involved with. Do we just need to ensure that if we put budget without
having dispatched a request, we need to kick off dispatching again?


-- 
Jens Axboe



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31 18:26       ` Jens Axboe
@ 2020-03-31 23:51         ` Doug Anderson
  2020-04-01  1:21           ` Jens Axboe
  2020-04-01  2:04           ` Ming Lei
  0 siblings, 2 replies; 15+ messages in thread
From: Doug Anderson @ 2020-03-31 23:51 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Paolo Valente, Ming Lei, James E.J. Bottomley,
	Martin K. Petersen, linux-block, Guenter Roeck, linux-scsi,
	Salman Qazi, LKML

Hi,

On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 3/31/20 12:07 PM, Paolo Valente wrote:
> >> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
> >>
> >> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> >>> It is possible for two threads to be running
> >>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> >>> This is because there can be more than one caller to
> >>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> >>> prevent more than one thread from entering.
> >>>
> >>> If more than one thread is running blk_mq_do_dispatch_sched() at the
> >>> same time with the same "hctx", they may have contention acquiring
> >>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
> >>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
> >>> uncommon) then only one of the two threads will be the one to
> >>> increment "device_busy" to 1 and get the budget.
> >>>
> >>> The losing thread will break out of blk_mq_do_dispatch_sched() and
> >>> will stop dispatching requests.  The assumption is that when more
> >>> budget is available later (when existing transactions finish) the
> >>> queue will be kicked again, perhaps in scsi_end_request().
> >>>
> >>> The winning thread now has budget and can go on to call
> >>> dispatch_request().  If dispatch_request() returns NULL here then we
> >>> have a potential problem.  Specifically we'll now call
> >>
> >> I guess this problem should be BFQ-specific. Now there are definitely
> >> requests in the BFQ queue wrt. this hctx. However, it looks like this
> >> request is only available from the losing thread, and it won't be
> >> retrieved in the winning thread via e->type->ops.dispatch_request().
> >>
> >> Just wondering why BFQ is implemented in this way?
> >>
> >
> > BFQ inherited this powerful non-working scheme from CFQ, ages ago.
> >
> > In more detail: if BFQ has at least one non-empty internal queue, then
> > it says, of course, that there is work to do.  But if the currently
> > in-service queue is empty, and is expected to receive new I/O, then
> > BFQ plugs I/O dispatch to enforce service guarantees for the
> > in-service queue, i.e., BFQ responds NULL to a dispatch request.
>
> What BFQ is doing is fine, IFF it always ensures that the queue is run
> at some later time, if it returns "yep I have work" yet returns NULL
> when attempting to retrieve that work. Generally this should happen from
> subsequent IO completion, or whatever else condition will resolve the
> issue that is currently preventing dispatch of that request. Last resort
> would be a timer, but that can happen if you're slicing your scheduling
> somehow.

I've been poking more at this today trying to understand why the idle
timer that Paolo says is in BFQ isn't doing what it should be doing.
I've been continuing to put most of my stream-of-consciousness at
<https://crbug.com/1061950> to avoid so much spamming of this thread.
In the trace I looked at most recently it looks like BFQ does try to
ensure that the queue is run at a later time, but at least in this
trace the later time is not late enough.  Specifically the quick
summary of my recent trace:

28977309us - PID 2167 got the budget.
28977518us - BFQ told PID 2167 that there was nothing to dispatch.
28977702us - BFQ idle timer fires.
28977725us - We start to try to dispatch as a result of BFQ's idle timer.
28977732us - Dispatching that was a result of BFQ's idle timer can't get
             budget and thus does nothing.
28977780us - PID 2167 put the budget and exits since there was nothing
             to dispatch.

This is only one particular trace, but in this case BFQ did attempt to
rerun the queue after it returned NULL, but that ran almost
immediately after it returned NULL and thus ran into the race.  :(


> > It would be very easy to change bfq_has_work so that it returns false
> > in case the in-service queue is empty, even if there is I/O
> > backlogged.  My only concern is: since everything has worked with the
> > current scheme for probably 15 years, are we sure that everything is
> > still ok after we change this scheme?
>
> You're comparing apples to oranges, CFQ never worked within the blk-mq
> scheduling framework.
>
> That said, I don't think such a change is needed. If we currently have a
> hang due to this discrepancy between has_work and gets_work, then it
> sounds like we're not always re-running the queue as we should. From the
> original patch, the budget putting is not something the scheduler is
> involved with. Do we just need to ensure that if we put budget without
> having dispatched a request, we need to kick off dispatching again?

By this you mean a change like this in blk_mq_do_dispatch_sched()?

  if (!rq) {
    blk_mq_put_dispatch_budget(hctx);
+    ret = true;
    break;
  }

I'm pretty sure that would fix the problems and I'd be happy to test,
but it feels like a heavy hammer.  IIUC we're essentially going to go
into a polling loop and keep calling has_work() and dispatch_request()
over and over again until has_work() returns false or we manage to
dispatch something...
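
For reference, the loop in question looks roughly like this.  It's a
simplified sketch of blk_mq_do_dispatch_sched() (assuming Salman's
patch that makes it return bool), not an exact copy of either tree:

  /*
   * Simplified sketch; the bool return assumes Salman's patch.  A
   * return of true means "please run the hw queue again".
   */
  static bool blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
  {
          struct request_queue *q = hctx->queue;
          struct elevator_queue *e = q->elevator;
          LIST_HEAD(rq_list);
          bool ret = false;

          do {
                  struct request *rq;

                  if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
                          break;

                  if (!blk_mq_get_dispatch_budget(hctx))
                          break;

                  rq = e->type->ops.dispatch_request(hctx);
                  if (!rq) {
                          blk_mq_put_dispatch_budget(hctx);
                          /* proposed change: ret = true; goes here */
                          break;
                  }

                  /* This rq now owns the budget it was dispatched with. */
                  list_add(&rq->queuelist, &rq_list);
          } while (blk_mq_dispatch_rq_list(q, &rq_list, true));

          return ret;
  }

With "ret = true" in that branch the caller keeps re-running the queue
for as long as has_work() keeps returning true, which is the polling
behavior I'm worried about.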

-Doug


* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31 23:51         ` Doug Anderson
@ 2020-04-01  1:21           ` Jens Axboe
  2020-04-01  7:49             ` Paolo Valente
  2020-04-01  2:04           ` Ming Lei
  1 sibling, 1 reply; 15+ messages in thread
From: Jens Axboe @ 2020-04-01  1:21 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Paolo Valente, Ming Lei, James E.J. Bottomley,
	Martin K. Petersen, linux-block, Guenter Roeck, linux-scsi,
	Salman Qazi, LKML

On 3/31/20 5:51 PM, Doug Anderson wrote:
> Hi,
> 
> On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> On 3/31/20 12:07 PM, Paolo Valente wrote:
>>>> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
>>>>
>>>> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>>>>> It is possible for two threads to be running
>>>>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
>>>>> This is because there can be more than one caller to
>>>>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
>>>>> prevent more than one thread from entering.
>>>>>
>>>>> If more than one thread is running blk_mq_do_dispatch_sched() at the
>>>>> same time with the same "hctx", they may have contention acquiring
>>>>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
>>>>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
>>>>> uncommon) then only one of the two threads will be the one to
>>>>> increment "device_busy" to 1 and get the budget.
>>>>>
>>>>> The losing thread will break out of blk_mq_do_dispatch_sched() and
>>>>> will stop dispatching requests.  The assumption is that when more
>>>>> budget is available later (when existing transactions finish) the
>>>>> queue will be kicked again, perhaps in scsi_end_request().
>>>>>
>>>>> The winning thread now has budget and can go on to call
>>>>> dispatch_request().  If dispatch_request() returns NULL here then we
>>>>> have a potential problem.  Specifically we'll now call
>>>>
>>>> I guess this problem should be BFQ-specific. Now there are definitely
>>>> requests in the BFQ queue wrt. this hctx. However, it looks like this
>>>> request is only available from the losing thread, and it won't be
>>>> retrieved in the winning thread via e->type->ops.dispatch_request().
>>>>
>>>> Just wondering why BFQ is implemented in this way?
>>>>
>>>
>>> BFQ inherited this powerful non-working scheme from CFQ, ages ago.
>>>
>>> In more detail: if BFQ has at least one non-empty internal queue, then
>>> it says, of course, that there is work to do.  But if the currently
>>> in-service queue is empty, and is expected to receive new I/O, then
>>> BFQ plugs I/O dispatch to enforce service guarantees for the
>>> in-service queue, i.e., BFQ responds NULL to a dispatch request.
>>
>> What BFQ is doing is fine, IFF it always ensures that the queue is run
>> at some later time, if it returns "yep I have work" yet returns NULL
>> when attempting to retrieve that work. Generally this should happen from
>> subsequent IO completion, or whatever else condition will resolve the
>> issue that is currently preventing dispatch of that request. Last resort
>> would be a timer, but that can happen if you're slicing your scheduling
>> somehow.
> 
> I've been poking more at this today trying to understand why the idle
> timer that Paolo says is in BFQ isn't doing what it should be doing.
> I've been continuing to put most of my stream-of-consciousness at
> <https://crbug.com/1061950> to avoid so much spamming of this thread.
> In the trace I looked at most recently it looks like BFQ does try to
> ensure that the queue is run at a later time, but at least in this
> trace the later time is not late enough.  Specifically the quick
> summary of my recent trace:
> 
> 28977309us - PID 2167 got the budget.
> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> 28977702us - BFQ idle timer fires.
> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
>              budget and thus does nothing.
> 28977780us - PID 2167 put the budget and exits since there was nothing
>              to dispatch.
> 
> This is only one particular trace, but in this case BFQ did attempt to
> rerun the queue after it returned NULL, but that ran almost
> immediately after it returned NULL and thus ran into the race.  :(

OK, and then it doesn't trigger again? It's key that it keeps doing this
timeout and re-dispatch if it fails, not just once.

But BFQ really should be smarter here. It's the same caller etc that
asks whether it has work and whether it can dispatch, yet the answer is
different. That's just kind of silly, and it'd make more sense if BFQ
actually implemented the ->has_work() as a "would I actually dispatch
for this guy, now".

>>> It would be very easy to change bfq_has_work so that it returns false
>>> in case the in-service queue is empty, even if there is I/O
>>> backlogged.  My only concern is: since everything has worked with the
>>> current scheme for probably 15 years, are we sure that everything is
>>> still ok after we change this scheme?
>>
>> You're comparing apples to oranges, CFQ never worked within the blk-mq
>> scheduling framework.
>>
>> That said, I don't think such a change is needed. If we currently have a
>> hang due to this discrepancy between has_work and gets_work, then it
>> sounds like we're not always re-running the queue as we should. From the
>> original patch, the budget putting is not something the scheduler is
>> involved with. Do we just need to ensure that if we put budget without
>> having dispatched a request, we need to kick off dispatching again?
> 
> By this you mean a change like this in blk_mq_do_dispatch_sched()?
> 
>   if (!rq) {
>     blk_mq_put_dispatch_budget(hctx);
> +    ret = true;
>     break;
>   }
> 
> I'm pretty sure that would fix the problems and I'd be happy to test,
> but it feels like a heavy hammer.  IIUC we're essentially going to go
> into a polling loop and keep calling has_work() and dispatch_request()
> over and over again until has_work() returns false or we manage to
> dispatch something...

We obviously have to be careful not to introduce a busy-loop, where we
just keep scheduling dispatch, only to fail.

-- 
Jens Axboe



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-03-31 23:51         ` Doug Anderson
  2020-04-01  1:21           ` Jens Axboe
@ 2020-04-01  2:04           ` Ming Lei
  2020-04-01  2:32             ` Doug Anderson
  1 sibling, 1 reply; 15+ messages in thread
From: Ming Lei @ 2020-04-01  2:04 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Jens Axboe, Paolo Valente, James E.J. Bottomley,
	Martin K. Petersen, linux-block, Guenter Roeck, linux-scsi,
	Salman Qazi, LKML

On Tue, Mar 31, 2020 at 04:51:00PM -0700, Doug Anderson wrote:
> Hi,
> 
> On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:
> >
> > On 3/31/20 12:07 PM, Paolo Valente wrote:
> > >> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
> > >>
> > >> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> > >>> It is possible for two threads to be running
> > >>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> > >>> This is because there can be more than one caller to
> > >>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> > >>> prevent more than one thread from entering.
> > >>>
> > >>> If more than one thread is running blk_mq_do_dispatch_sched() at the
> > >>> same time with the same "hctx", they may have contention acquiring
> > >>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
> > >>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
> > >>> uncommon) then only one of the two threads will be the one to
> > >>> increment "device_busy" to 1 and get the budget.
> > >>>
> > >>> The losing thread will break out of blk_mq_do_dispatch_sched() and
> > >>> will stop dispatching requests.  The assumption is that when more
> > >>> budget is available later (when existing transactions finish) the
> > >>> queue will be kicked again, perhaps in scsi_end_request().
> > >>>
> > >>> The winning thread now has budget and can go on to call
> > >>> dispatch_request().  If dispatch_request() returns NULL here then we
> > >>> have a potential problem.  Specifically we'll now call
> > >>
> > >> I guess this problem should be BFQ-specific. Now there are definitely
> > >> requests in the BFQ queue wrt. this hctx. However, it looks like this
> > >> request is only available from the losing thread, and it won't be
> > >> retrieved in the winning thread via e->type->ops.dispatch_request().
> > >>
> > >> Just wondering why BFQ is implemented in this way?
> > >>
> > >
> > > BFQ inherited this powerful non-working scheme from CFQ, ages ago.
> > >
> > > In more detail: if BFQ has at least one non-empty internal queue, then
> > > it says, of course, that there is work to do.  But if the currently
> > > in-service queue is empty, and is expected to receive new I/O, then
> > > BFQ plugs I/O dispatch to enforce service guarantees for the
> > > in-service queue, i.e., BFQ responds NULL to a dispatch request.
> >
> > What BFQ is doing is fine, IFF it always ensures that the queue is run
> > at some later time, if it returns "yep I have work" yet returns NULL
> > when attempting to retrieve that work. Generally this should happen from
> > subsequent IO completion, or whatever else condition will resolve the
> > issue that is currently preventing dispatch of that request. Last resort
> > would be a timer, but that can happen if you're slicing your scheduling
> > somehow.
> 
> I've been poking more at this today trying to understand why the idle
> timer that Paolo says is in BFQ isn't doing what it should be doing.
> I've been continuing to put most of my stream-of-consciousness at
> <https://crbug.com/1061950> to avoid so much spamming of this thread.
> In the trace I looked at most recently it looks like BFQ does try to
> ensure that the queue is run at a later time, but at least in this
> trace the later time is not late enough.  Specifically the quick
> summary of my recent trace:
> 
> 28977309us - PID 2167 got the budget.
> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> 28977702us - BFQ idle timer fires.
> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
>              budget and thus does nothing.

Looks like the BFQ idle timer could be re-armed, given that BFQ knows there is work to do.

> 28977780us - PID 2167 put the budget and exits since there was nothing
>              to dispatch.
> 
> This is only one particular trace, but in this case BFQ did attempt to
> rerun the queue after it returned NULL, but that ran almost
> immediately after it returned NULL and thus ran into the race.  :(
> 
> 
> > > It would be very easy to change bfq_has_work so that it returns false
> > > in case the in-service queue is empty, even if there is I/O
> > > backlogged.  My only concern is: since everything has worked with the
> > > current scheme for probably 15 years, are we sure that everything is
> > > still ok after we change this scheme?
> >
> > You're comparing apples to oranges, CFQ never worked within the blk-mq
> > scheduling framework.
> >
> > That said, I don't think such a change is needed. If we currently have a
> > hang due to this discrepancy between has_work and gets_work, then it
> > sounds like we're not always re-running the queue as we should. From the
> > original patch, the budget putting is not something the scheduler is
> > involved with. Do we just need to ensure that if we put budget without
> > having dispatched a request, we need to kick off dispatching again?
> 
> By this you mean a change like this in blk_mq_do_dispatch_sched()?
> 
>   if (!rq) {
>     blk_mq_put_dispatch_budget(hctx);
> +    ret = true;
>     break;
>   }

In Jens's tree, blk_mq_do_dispatch_sched() returns nothing (void).

Which tree are you working against?


Thanks, 
Ming



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-04-01  2:04           ` Ming Lei
@ 2020-04-01  2:32             ` Doug Anderson
  0 siblings, 0 replies; 15+ messages in thread
From: Doug Anderson @ 2020-04-01  2:32 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Paolo Valente, James E.J. Bottomley,
	Martin K. Petersen, linux-block, Guenter Roeck, linux-scsi,
	Salman Qazi, LKML

Hi,

On Tue, Mar 31, 2020 at 7:04 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Tue, Mar 31, 2020 at 04:51:00PM -0700, Doug Anderson wrote:
> > Hi,
> >
> > On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:
> > >
> > > On 3/31/20 12:07 PM, Paolo Valente wrote:
> > > >> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
> > > >>
> > > >> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> > > >>> It is possible for two threads to be running
> > > >>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> > > >>> This is because there can be more than one caller to
> > > >>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> > > >>> prevent more than one thread from entering.
> > > >>>
> > > >>> If more than one thread is running blk_mq_do_dispatch_sched() at the
> > > >>> same time with the same "hctx", they may have contention acquiring
> > > >>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
> > > >>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
> > > >>> uncommon) then only one of the two threads will be the one to
> > > >>> increment "device_busy" to 1 and get the budget.
> > > >>>
> > > >>> The losing thread will break out of blk_mq_do_dispatch_sched() and
> > > >>> will stop dispatching requests.  The assumption is that when more
> > > >>> budget is available later (when existing transactions finish) the
> > > >>> queue will be kicked again, perhaps in scsi_end_request().
> > > >>>
> > > >>> The winning thread now has budget and can go on to call
> > > >>> dispatch_request().  If dispatch_request() returns NULL here then we
> > > >>> have a potential problem.  Specifically we'll now call
> > > >>
> > > >> I guess this problem should be BFQ-specific. Now there are definitely
> > > >> requests in the BFQ queue wrt. this hctx. However, it looks like this
> > > >> request is only available from the losing thread, and it won't be
> > > >> retrieved in the winning thread via e->type->ops.dispatch_request().
> > > >>
> > > >> Just wondering why BFQ is implemented in this way?
> > > >>
> > > >
> > > > BFQ inherited this powerful non-working scheme from CFQ, ages ago.
> > > >
> > > > In more detail: if BFQ has at least one non-empty internal queue, then
> > > > it says, of course, that there is work to do.  But if the currently
> > > > in-service queue is empty, and is expected to receive new I/O, then
> > > > BFQ plugs I/O dispatch to enforce service guarantees for the
> > > > in-service queue, i.e., BFQ responds NULL to a dispatch request.
> > >
> > > What BFQ is doing is fine, IFF it always ensures that the queue is run
> > > at some later time, if it returns "yep I have work" yet returns NULL
> > > when attempting to retrieve that work. Generally this should happen from
> > > subsequent IO completion, or whatever else condition will resolve the
> > > issue that is currently preventing dispatch of that request. Last resort
> > > would be a timer, but that can happen if you're slicing your scheduling
> > > somehow.
> >
> > I've been poking more at this today trying to understand why the idle
> > timer that Paolo says is in BFQ isn't doing what it should be doing.
> > I've been continuing to put most of my stream-of-consciousness at
> > <https://crbug.com/1061950> to avoid so much spamming of this thread.
> > In the trace I looked at most recently it looks like BFQ does try to
> > ensure that the queue is run at a later time, but at least in this
> > trace the later time is not late enough.  Specifically the quick
> > summary of my recent trace:
> >
> > 28977309us - PID 2167 got the budget.
> > 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> > 28977702us - BFQ idle timer fires.
> > 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> > 28977732us - Dispatching that was a result of BFQ's idle timer can't get
> >              budget and thus does nothing.
>
> Looks like the BFQ idle timer could be re-armed, given that BFQ knows there is work to do.

Yeah, it does seem like perhaps a BFQ fix like this would be ideal.


> > 28977780us - PID 2167 put the budget and exits since there was nothing
> >              to dispatch.
> >
> > This is only one particular trace, but in this case BFQ did attempt to
> > rerun the queue after it returned NULL, but that ran almost
> > immediately after it returned NULL and thus ran into the race.  :(
> >
> >
> > > > It would be very easy to change bfq_has_work so that it returns false
> > > > in case the in-service queue is empty, even if there is I/O
> > > > backlogged.  My only concern is: since everything has worked with the
> > > > current scheme for probably 15 years, are we sure that everything is
> > > > still ok after we change this scheme?
> > >
> > > You're comparing apples to oranges, CFQ never worked within the blk-mq
> > > scheduling framework.
> > >
> > > That said, I don't think such a change is needed. If we currently have a
> > > hang due to this discrepancy between has_work and gets_work, then it
> > > sounds like we're not always re-running the queue as we should. From the
> > > original patch, the budget putting is not something the scheduler is
> > > involved with. Do we just need to ensure that if we put budget without
> > > having dispatched a request, we need to kick off dispatching again?
> >
> > By this you mean a change like this in blk_mq_do_dispatch_sched()?
> >
> >   if (!rq) {
> >     blk_mq_put_dispatch_budget(hctx);
> > +    ret = true;
> >     break;
> >   }
>
> In Jens's tree, blk_mq_do_dispatch_sched() returns nothing (void).
>
> Which tree are you working against?

Ah, right.  Sorry.  As per the cover letter, I've tested against the
Chrome OS 5.4 tree and also against mainline Linux and both
experienced the same behavior.  It's slightly more convenient for me
to test against the Chrome OS 5.4 tree, so I've been focusing most of
my effort there.

As mentioned in the cover letter, in the Chrome OS 5.4 branch we have
an extra FROMLIST patch from Salman, specifically:

http://lore.kernel.org/r/20200207190416.99928-1-sqazi@google.com

...that's what makes the return value "bool" for me.


-Doug


* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-04-01  1:21           ` Jens Axboe
@ 2020-04-01  7:49             ` Paolo Valente
  2020-04-02 15:52               ` Doug Anderson
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Valente @ 2020-04-01  7:49 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Doug Anderson, Ming Lei, James E.J. Bottomley,
	Martin K. Petersen, linux-block, Guenter Roeck, linux-scsi,
	Salman Qazi, LKML



> On 1 Apr 2020, at 03:21, Jens Axboe <axboe@kernel.dk> wrote:
> 
> On 3/31/20 5:51 PM, Doug Anderson wrote:
>> Hi,
>> 
>> On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:
>>> 
>>> On 3/31/20 12:07 PM, Paolo Valente wrote:
>>>>> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
>>>>> 
>>>>> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>>>>>> It is possible for two threads to be running
>>>>>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
>>>>>> This is because there can be more than one caller to
>>>>>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
>>>>>> prevent more than one thread from entering.
>>>>>> 
>>>>>> If more than one thread is running blk_mq_do_dispatch_sched() at the
>>>>>> same time with the same "hctx", they may have contention acquiring
>>>>>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
>>>>>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
>>>>>> uncommon) then only one of the two threads will be the one to
>>>>>> increment "device_busy" to 1 and get the budget.
>>>>>> 
>>>>>> The losing thread will break out of blk_mq_do_dispatch_sched() and
>>>>>> will stop dispatching requests.  The assumption is that when more
>>>>>> budget is available later (when existing transactions finish) the
>>>>>> queue will be kicked again, perhaps in scsi_end_request().
>>>>>> 
>>>>>> The winning thread now has budget and can go on to call
>>>>>> dispatch_request().  If dispatch_request() returns NULL here then we
>>>>>> have a potential problem.  Specifically we'll now call
>>>>> 
>>>>> I guess this problem should be BFQ-specific. Now there are definitely
>>>>> requests in the BFQ queue wrt. this hctx. However, it looks like this
>>>>> request is only available from the losing thread, and it won't be
>>>>> retrieved in the winning thread via e->type->ops.dispatch_request().
>>>>> 
>>>>> Just wondering why BFQ is implemented in this way?
>>>>> 
>>>> 
>>>> BFQ inherited this powerful non-working scheme from CFQ, ages ago.
>>>>
>>>> In more detail: if BFQ has at least one non-empty internal queue, then
>>>> it says, of course, that there is work to do.  But if the currently
>>>> in-service queue is empty, and is expected to receive new I/O, then
>>>> BFQ plugs I/O dispatch to enforce service guarantees for the
>>>> in-service queue, i.e., BFQ responds NULL to a dispatch request.
>>> 
>>> What BFQ is doing is fine, IFF it always ensures that the queue is run
>>> at some later time, if it returns "yep I have work" yet returns NULL
>>> when attempting to retrieve that work. Generally this should happen from
>>> subsequent IO completion, or whatever else condition will resolve the
>>> issue that is currently preventing dispatch of that request. Last resort
>>> would be a timer, but that can happen if you're slicing your scheduling
>>> somehow.
>> 
>> I've been poking more at this today trying to understand why the idle
>> timer that Paolo says is in BFQ isn't doing what it should be doing.
>> I've been continuing to put most of my stream-of-consciousness at
>> <https://crbug.com/1061950> to avoid so much spamming of this thread.
>> In the trace I looked at most recently it looks like BFQ does try to
>> ensure that the queue is run at a later time, but at least in this
>> trace the later time is not late enough.  Specifically the quick
>> summary of my recent trace:
>> 
>> 28977309us - PID 2167 got the budget.
>> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
>> 28977702us - BFQ idle timer fires.
>> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
>> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
>>             budget and thus does nothing.
>> 28977780us - PID 2167 put the budget and exits since there was nothing
>>             to dispatch.
>> 
>> This is only one particular trace, but in this case BFQ did attempt to
>> rerun the queue after it returned NULL, but that ran almost
>> immediately after it returned NULL and thus ran into the race.  :(
> 
> OK, and then it doesn't trigger again? It's key that it keeps doing this
> timeout and re-dispatch if it fails, not just once.
> 

The goal of BFQ's timer is to make BFQ switch from non-work-conserving
to work-conserving mode, just because not doing so would cause a
stall.  In contrast, it sounds a little weird that an I/O scheduler
systematically kicks I/O periodically (how can BFQ know when no more
kicking is needed?).  IOW, it doesn't seem very robust that blk-mq may
need a series of periodic kicks to finally restart, like a flooded
engine.

Compared with this solution, I'd still prefer one where BFQ doesn't
trigger this blk-mq stall at all.

Paolo

> But BFQ really should be smarter here. It's the same caller etc that
> asks whether it has work and whether it can dispatch, yet the answer is
> different. That's just kind of silly, and it'd make more sense if BFQ
> actually implemented the ->has_work() as a "would I actually dispatch
> for this guy, now".
> 
>>>> It would be very easy to change bfq_has_work so that it returns false
>>>> in case the in-service queue is empty, even if there is I/O
>>>> backlogged.  My only concern is: since everything has worked with the
>>>> current scheme for probably 15 years, are we sure that everything is
>>>> still ok after we change this scheme?
>>> 
>>> You're comparing apples to oranges, CFQ never worked within the blk-mq
>>> scheduling framework.
>>> 
>>> That said, I don't think such a change is needed. If we currently have a
>>> hang due to this discrepancy between has_work and gets_work, then it
>>> sounds like we're not always re-running the queue as we should. From the
>>> original patch, the budget putting is not something the scheduler is
>>> involved with. Do we just need to ensure that if we put budget without
>>> having dispatched a request, we need to kick off dispatching again?
>> 
>> By this you mean a change like this in blk_mq_do_dispatch_sched()?
>> 
>>  if (!rq) {
>>    blk_mq_put_dispatch_budget(hctx);
>> +    ret = true;
>>    break;
>>  }
>> 
>> I'm pretty sure that would fix the problems and I'd be happy to test,
>> but it feels like a heavy hammer.  IIUC we're essentially going to go
>> into a polling loop and keep calling has_work() and dispatch_request()
>> over and over again until has_work() returns false or we manage to
>> dispatch something...
> 
> We obviously have to be careful not to introduce a busy-loop, where we
> just keep scheduling dispatch, only to fail.
> 
> -- 
> Jens Axboe



* Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time
  2020-04-01  7:49             ` Paolo Valente
@ 2020-04-02 15:52               ` Doug Anderson
  0 siblings, 0 replies; 15+ messages in thread
From: Doug Anderson @ 2020-04-02 15:52 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, Ming Lei, James E.J. Bottomley, Martin K. Petersen,
	linux-block, Guenter Roeck, linux-scsi, Salman Qazi, LKML

Hi,

On Wed, Apr 1, 2020 at 12:48 AM Paolo Valente <paolo.valente@linaro.org> wrote:
>
> >> 28977309us - PID 2167 got the budget.
> >> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> >> 28977702us - BFQ idle timer fires.
> >> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> >> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
> >>             budget and thus does nothing.
> >> 28977780us - PID 2167 put the budget and exits since there was nothing
> >>             to dispatch.
> >>
> >> This is only one particular trace, but in this case BFQ did attempt to
> >> rerun the queue after it returned NULL, but that ran almost
> >> immediately after it returned NULL and thus ran into the race.  :(
> >
> > OK, and then it doesn't trigger again? It's key that it keeps doing this
> > timeout and re-dispatch if it fails, not just once.
> >
>
> The goal of BFQ's timer is to make BFQ switch from non-work-conserving
> to work-conserving mode, just because not doing so would cause a
> stall.  In contrast, it sounds a little weird that an I/O scheduler
> systematically kicks I/O periodically (how can BFQ know when no more
> kicking is needed?).  IOW, it doesn't seem very robust that blk-mq may
> need a series of periodic kicks to finally restart, like a flooded
> engine.
>
> Compared with this solution, I'd still prefer one where BFQ doesn't
> trigger this blk-mq stall at all.

I spent more time thinking about this / testing things.  Probably the
best summary of my thoughts can be found at
<https://crbug.com/1061950#c79>.  The quick summary is that I believe
the problem is that BFQ has faith that calling
blk_mq_run_hw_queues() will eventually cause BFQ to be called
back at least once to dispatch.  That doesn't always happen due to the
race we're trying to solve here.  If we fix the race / make
blk_mq_run_hw_queues() reliable then I don't think there's a need for
BFQ to implement some type of timeout/retry mechanism.


> > But BFQ really should be smarter here. It's the same caller etc that
> > asks whether it has work and whether it can dispatch, yet the answer is
> > different. That's just kind of silly, and it'd make more sense if BFQ
> > actually implemented the ->has_work() as a "would I actually dispatch
> > for this guy, now".

I prototyped this and I think it would solve the problem (though I
haven't had time to do extensive testing yet).  It certainly makes
BFQ's has_work() more expensive in some cases, but it might be worth
it.  Someone set up to do benchmarking would need to say for sure.
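
To give an idea of the shape of that prototype (a hypothetical sketch
only, not the actual patch), the idea is to make ->has_work() answer
"would a dispatch return a request right now?" rather than "is anything
queued anywhere?":

  /* Hypothetical sketch only; names and checks are approximations. */
  static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
  {
          struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
          struct bfq_queue *bfqq = bfqd->in_service_queue;

          /* Requests already parked in the dispatch list always count. */
          if (!list_empty_careful(&bfqd->dispatch))
                  return true;

          /* No backlogged queues at all: clearly nothing to do. */
          if (bfq_tot_busy_queues(bfqd) == 0)
                  return false;

          /*
           * There is backlog, but if the in-service queue is empty and
           * BFQ is idling on it waiting for new I/O, a dispatch would
           * return NULL.  Report "no work" and rely on the next insert
           * (or the idle timer) to re-run the queue.  This peeks at
           * bfqd without the lock, but has_work() is only a hint.
           */
          if (bfqq && RB_EMPTY_ROOT(&bfqq->sort_list) &&
              bfq_bfqq_wait_request(bfqq))
                  return false;

          return true;
  }

The extra cost is mostly the check on the in-service queue; whether
that matters in practice is exactly the benchmarking question above.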

However, I think I've figured out an inexpensive / lightweight
solution that means we can let has_work() be inexact.  It's mostly the
same as this patch but implemented at the blk-mq layer (not the SCSI
layer) and doesn't add a spinlock.  I'll post a v2 and you can see if
you hate it or if it looks OK.  You can find it at:

https://lore.kernel.org/r/20200402085050.v2.2.I28278ef8ea27afc0ec7e597752a6d4e58c16176f@changeid
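
For anyone not following the link, the rough idea (sketched here as an
illustration with made-up names, not the actual contents of the v2
patch) is to remember budget contention in an atomic hctx flag and have
whoever returns the budget kick the queue if someone lost the race in
the meantime:

  /* Made-up state bit, for illustration only. */
  #define BLK_MQ_S_BUDGET_CONTENDED	20

  static bool get_budget_or_mark_contention(struct blk_mq_hw_ctx *hctx)
  {
          if (blk_mq_get_dispatch_budget(hctx))
                  return true;

          /* Someone else holds the budget: remember that we wanted it. */
          set_bit(BLK_MQ_S_BUDGET_CONTENDED, &hctx->state);

          /* Re-check in case the budget was freed before the bit was set. */
          if (blk_mq_get_dispatch_budget(hctx)) {
                  clear_bit(BLK_MQ_S_BUDGET_CONTENDED, &hctx->state);
                  return true;
          }
          return false;
  }

  static void put_budget_and_kick(struct blk_mq_hw_ctx *hctx)
  {
          blk_mq_put_dispatch_budget(hctx);

          /*
           * If another context failed to get budget while we held it,
           * nothing may be scheduled to dispatch its request, so kick
           * the queue ourselves.
           */
          if (test_and_clear_bit(BLK_MQ_S_BUDGET_CONTENDED, &hctx->state))
                  blk_mq_run_hw_queue(hctx, true);
  }

No spinlock is involved; the extra cost is roughly one atomic bit
operation on the failed-get and put paths.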

-Doug



Thread overview: 15+ messages
2020-03-30 14:49 [PATCH 0/2] blk-mq: Fix two causes of IO stalls found in reboot testing Douglas Anderson
2020-03-30 14:49 ` [PATCH 1/2] blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick Douglas Anderson
2020-03-30 14:49 ` [PATCH 2/2] scsi: core: Fix stall if two threads request budget at the same time Douglas Anderson
2020-03-31  1:41   ` Ming Lei
2020-03-31  2:15     ` Doug Anderson
2020-03-31  2:58       ` Ming Lei
2020-03-31  4:05         ` Doug Anderson
2020-03-31 18:07     ` Paolo Valente
2020-03-31 18:26       ` Jens Axboe
2020-03-31 23:51         ` Doug Anderson
2020-04-01  1:21           ` Jens Axboe
2020-04-01  7:49             ` Paolo Valente
2020-04-02 15:52               ` Doug Anderson
2020-04-01  2:04           ` Ming Lei
2020-04-01  2:32             ` Doug Anderson
