From: Ming Lei <ming.lei@redhat.com>
To: Douglas Anderson <dianders@chromium.org>
Cc: axboe@kernel.dk, jejb@linux.ibm.com, martin.petersen@oracle.com,
	paolo.valente@linaro.org, groeck@chromium.org,
	Gwendal Grignou <gwendal@chromium.org>,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
	sqazi@google.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 3/4] blk-mq: Rerun dispatching in the case of budget contention
Date: Thu, 9 Apr 2020 09:13:44 +0800
Message-ID: <20200409011344.GB369792@localhost.localdomain>
In-Reply-To: <20200408080255.v4.3.I28278ef8ea27afc0ec7e597752a6d4e58c16176f@changeid>

On Wed, Apr 08, 2020 at 08:04:01AM -0700, Douglas Anderson wrote:
> If a thread running blk-mq code tries to get budget and fails, it
> immediately stops doing work and assumes that whenever budget is
> freed up the queues will be kicked and whatever work it was trying
> to do will be retried.
> 
> One path where budget is freed and queues are kicked in the normal
> case can be seen in scsi_finish_command().  Specifically:
> - scsi_finish_command()
>   - scsi_device_unbusy()
>     - # Decrement "device_busy", AKA release budget
>   - scsi_io_completion()
>     - scsi_end_request()
>       - blk_mq_run_hw_queues()
> 
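
As a rough sketch of that path (illustrative only -- the function names
follow the call chain above, but the bodies are simplified and the
"sketch_" prefixes are made up, not the real SCSI code):

#include <linux/blk-mq.h>
#include <scsi/scsi_device.h>

/* Illustrative sketch only -- not the real scsi_device_unbusy(). */
static void sketch_scsi_device_unbusy(struct scsi_device *sdev)
{
	/* Dropping "device_busy" is what releases the budget. */
	atomic_dec(&sdev->device_busy);
}

/* Illustrative sketch only -- not the real scsi_end_request(). */
static void sketch_scsi_end_request(struct request *rq)
{
	/* ... complete the request (omitted) ... */

	/*
	 * Kick every hardware queue so anyone who previously failed to
	 * get budget gets another chance to dispatch.
	 */
	blk_mq_run_hw_queues(rq->q, true);
}
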
> The above is all well and good.  The problem comes up when a thread
> claims the budget but then releases it without actually dispatching
> any work.  Since we didn't schedule any work we'll never run the path
> of finishing work / kicking the queues.
> 
> This isn't often actually a problem, which is why this issue has
> existed for a while and nobody noticed.  Specifically, we only get into
> this situation when we unexpectedly find that we aren't going to do
> any work.  Code that later receives new work kicks the queues.  All
> good, right?
> 
> The problem shows up, however, if timing is just wrong and we hit a
> race.  To see this race let's think about the case where we only have
> a budget of 1 (only one thread can hold budget).  Now imagine that a
> thread got budget and then decided not to dispatch work.  It's about
> to call put_budget() but then the thread gets context switched out for
> a long, long time.  While in this state, any and all kicks of the
> queue (like the one when we received new work) will be no-ops because
> nobody can get budget.  Finally the thread holding budget gets to run
> again and returns.  All the normal kicks will have been no-ops and we
> have an I/O stall.
> 
> As you can see from the above, you need just the right timing to see
> the race.  To start with, it only happens if we thought we had
> work, actually managed to get the budget, but then actually didn't
> have work.  That's pretty rare on its own.  Even then, there's
> usually a very small amount of time between realizing that there's no
> work and putting the budget.  During this small amount of time new
> work has to come in and the queue kick has to make it all the way to
> trying to get the budget and fail.  It's pretty unlikely.
> 
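
For reference, this is roughly the shape of the loop where that window
lives (a simplified sketch of blk_mq_do_dispatch_sched(), not the exact
code; the actual dispatch and the "sketch_" name are placeholders):

/* Uses the budget helpers from block/blk-mq.h, as blk-mq-sched.c does. */
static void sketch_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;
	struct elevator_queue *e = q->elevator;
	struct request *rq;

	for (;;) {
		if (!e->type->ops.has_work(hctx))
			break;

		/* A concurrent queue run that loses the race dies here. */
		if (!blk_mq_get_dispatch_budget(hctx))
			break;

		rq = e->type->ops.dispatch_request(hctx);
		if (!rq) {
			blk_mq_put_dispatch_budget(hctx);
			/*
			 * The window: between get and put, every other
			 * run of the queue fails to get budget, and
			 * (before this patch) nothing reruns the queue
			 * afterwards.
			 */
			break;
		}

		/* ... queue rq up and dispatch it (omitted) ... */
	}
}
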
> One case where this could go wrong is illustrated by an example of
> threads running blk_mq_do_dispatch_sched():
> 
> * Threads A and B both run has_work() at the same time with the same
>   "hctx".  Imagine has_work() is exact.  There's no lock, so it's OK
>   if Thread A and B both get back true.
> * Thread B gets interrupted for a long time right after it decides
>   that there is work.  Maybe its CPU gets an interrupt and the
>   interrupt handler is slow.
> * Thread A runs, gets budget, dispatches work.
> * Thread A's work finishes and budget is released.
> * Thread B finally runs again and gets budget.
> * Since Thread A already took care of the work and no new work has
>   come in, Thread B will get NULL from dispatch_request().  I believe
>   this is specifically why dispatch_request() is allowed to return
>   NULL in the first place, even if has_work() is exact.
> * Thread B will now be holding the budget and is about to call
>   put_budget(), but hasn't called it yet.
> * Thread B gets interrupted for a long time (again).  Dang interrupts.
> * Now Thread C (maybe with a different "hctx" but the same queue)
>   comes along and runs blk_mq_do_dispatch_sched().
> * Thread C won't do anything because it can't get budget.
> * Finally Thread B will run again and put the budget without kicking
>   any queues.
> 
> Even though the example above is with blk_mq_do_dispatch_sched(), I
> believe the race is possible any time someone is holding budget but
> doesn't do work.
> 
> Unfortunately, the unlikely has become more likely if you happen to be
> using the BFQ I/O scheduler.  BFQ, by design, sometimes returns "true"
> for has_work() but then NULL for dispatch_request() and stays in this
> state for a while (currently up to 9 ms).  Suddenly you only need one
> race to hit, not two races in a row.  With my current setup this is
> easy to reproduce in reboot tests and traces have actually shown that
> we hit a race similar to the one described above.
> 
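
As a toy illustration (this is not BFQ's actual code; the structure and
names below are made up purely to show the pattern), a scheduler can
quite legitimately report work via has_work() while returning NULL from
dispatch_request() for a while, e.g. because it idles in the hope of a
better request arriving:

#include <linux/blkdev.h>
#include <linux/elevator.h>

struct toy_sched_data {
	struct list_head queued;	/* requests the scheduler holds */
	bool idling;			/* deliberately not dispatching yet */
};

static bool toy_has_work(struct blk_mq_hw_ctx *hctx)
{
	struct toy_sched_data *td = hctx->queue->elevator->elevator_data;

	return !list_empty(&td->queued);	/* "yes, there is work" ... */
}

static struct request *toy_dispatch_request(struct blk_mq_hw_ctx *hctx)
{
	struct toy_sched_data *td = hctx->queue->elevator->elevator_data;

	if (td->idling)		/* ... "but I'm not handing it out yet" */
		return NULL;

	/* (dequeueing and accounting omitted in this toy) */
	return list_first_entry_or_null(&td->queued, struct request,
					queuelist);
}
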
> Note that we only need to fix blk_mq_do_dispatch_sched() and
> blk_mq_do_dispatch_ctx() and not the other places that put budget.  In
> other cases we know that we have work to do on at least one "hctx" and
> code already exists to kick that "hctx"'s queue.  When that work
> finally finishes all the queues will be kicked using the normal flow.
> 
> One last note is that (at least in the SCSI case) budget is shared by
> all "hctx"s that have the same queue.  Thus we need to make sure to
> kick the whole queue, not just re-run dispatching on a single "hctx".
> 
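
Roughly, the whole-queue kick used below boils down to this (a sketch of
what the blk_mq_delay_run_hw_queues() helper added earlier in this series
does; the "sketch_" function name is made up):

/* Uses the iterators/helpers declared in <linux/blk-mq.h>. */
static void sketch_kick_whole_queue(struct request_queue *q,
				    unsigned long msecs)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	/*
	 * Budget is per-device, so the starved "hctx" may not be the
	 * one we are running on: kick them all after a small delay.
	 */
	queue_for_each_hw_ctx(q, hctx, i)
		blk_mq_delay_run_hw_queue(hctx, msecs);
}
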
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---
> 
> Changes in v4:
> - Only kick in blk_mq_do_dispatch_ctx() / blk_mq_do_dispatch_sched().
> 
> Changes in v3:
> - Always kick when putting the budget.
> - Delay blk_mq_do_dispatch_sched() kick by 3 ms for inexact has_work().
> - Totally rewrote commit message.
> 
> Changes in v2:
> - Replace ("scsi: core: Fix stall...") w/ ("blk-mq: Rerun dispatch...")
> 
>  block/blk-mq-sched.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 74cedea56034..eca81bd4010c 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -80,6 +80,8 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
>  	blk_mq_run_hw_queue(hctx, true);
>  }
>  
> +#define BLK_MQ_BUDGET_DELAY	3		/* ms units */
> +
>  /*
>   * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
>   * its queue by itself in its completion handler, so we don't need to
> @@ -103,6 +105,14 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
>  		rq = e->type->ops.dispatch_request(hctx);
>  		if (!rq) {
>  			blk_mq_put_dispatch_budget(hctx);
> +			/*
> +			 * We're releasing without dispatching. Holding the
> +			 * budget could have blocked any "hctx"s with the
> +			 * same queue and if we didn't dispatch then there's
> +			 * no guarantee anyone will kick the queue.  Kick it
> +			 * ourselves.
> +			 */
> +			blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
>  			break;
>  		}
>  
> @@ -149,6 +159,14 @@ static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
>  		rq = blk_mq_dequeue_from_ctx(hctx, ctx);
>  		if (!rq) {
>  			blk_mq_put_dispatch_budget(hctx);
> +			/*
> +			 * We're releasing without dispatching. Holding the
> +			 * budget could have blocked any "hctx"s with the
> +			 * same queue and if we didn't dispatch then there's
> +			 * no guarantee anyone will kick the queue.  Kick it
> +			 * ourselves.
> +			 */
> +			blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
>  			break;
>  		}
>  
> -- 
> 2.26.0.292.g33ef6b2f38-goog
> 

Reviewed-by: Ming Lei <ming.lei@redhat.com>

-- 
Ming

