* [PATCH V2] blk-mq: fix race between complete and BLK_EH_RESET_TIMER
@ 2018-04-11 23:38 Ming Lei
2018-04-12 2:38 ` jianchao.wang
0 siblings, 1 reply; 4+ messages in thread
From: Ming Lei @ 2018-04-11 23:38 UTC (permalink / raw)
To: Jens Axboe, linux-block
Cc: Ming Lei, Bart Van Assche, Tejun Heo, Christoph Hellwig,
Sagi Grimberg, Israel Rukshin, Max Gurtovoy, stable
The normal request completion can happen before or during the handling
of BLK_EH_RESET_TIMER, and this race may cause the request to never be
completed, since the driver's .timeout() may keep returning
BLK_EH_RESET_TIMER.

This issue can't be fixed completely by the driver, since the normal
completion can occur between .timeout() returning and the handling of
BLK_EH_RESET_TIMER.

This patch fixes the race by introducing the rq state
MQ_RQ_COMPLETE_IN_TIMEOUT, and by reading/writing the rq's state while
holding the queue lock. The lock could actually be per-request, but it
is not worth introducing one lock for such an unusual event.
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Israel Rukshin <israelr@mellanox.com>
Cc: Max Gurtovoy <maxg@mellanox.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
V2:
- rename the new flag as MQ_RQ_COMPLETE_IN_TIMEOUT
- fix lock uses in blk_mq_rq_timed_out
- document re-order between blk_add_timer() and
blk_mq_rq_update_aborted_gstate(req, 0)
block/blk-mq.c | 49 ++++++++++++++++++++++++++++++++++++++++++++-----
block/blk-mq.h | 1 +
2 files changed, 45 insertions(+), 5 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0dc9e341c2a7..db1c84b2bb5e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -630,10 +630,27 @@ void blk_mq_complete_request(struct request *rq)
* However, that would complicate paths which want to synchronize
* against us. Let stay in sync with the issue path so that
* hctx_lock() covers both issue and completion paths.
+ *
+ * Cover the complete vs BLK_EH_RESET_TIMER race in the slow path
+ * while holding the queue lock.
*/
hctx_lock(hctx, &srcu_idx);
if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
__blk_mq_complete_request(rq);
+ else {
+ unsigned long flags;
+ bool need_complete = false;
+
+ spin_lock_irqsave(q->queue_lock, flags);
+ if (!blk_mq_rq_aborted_gstate(rq))
+ need_complete = true;
+ else
+ blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE_IN_TIMEOUT);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+
+ if (need_complete)
+ __blk_mq_complete_request(rq);
+ }
hctx_unlock(hctx, srcu_idx);
}
EXPORT_SYMBOL(blk_mq_complete_request);
@@ -814,6 +831,7 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
{
const struct blk_mq_ops *ops = req->q->mq_ops;
enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER;
+ unsigned long flags;
req->rq_flags |= RQF_MQ_TIMEOUT_EXPIRED;
@@ -826,12 +844,33 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
break;
case BLK_EH_RESET_TIMER:
/*
- * As nothing prevents from completion happening while
- * ->aborted_gstate is set, this may lead to ignored
- * completions and further spurious timeouts.
+ * The normal completion may happen while the timeout is being
+ * handled, or even after .timeout() returns, so once the request
+ * has been completed we must not reset the timer: otherwise the
+ * next timeout handling may see BLK_EH_RESET_TIMER again and the
+ * request would never be completed.
+ *
+ * Hold the queue lock to cover reads/writes of the rq's
+ * aborted_gstate and normal state, so the race is avoided
+ * completely.
+ *
+ * blk_add_timer() may be re-ordered with resetting aborted_gstate,
+ * and the only side-effect is that if this request is recycled
+ * after aborted_gstate is cleared, it may time out a bit late.
+ * That is survivable, given that timeout events are so unusual.
*/
- blk_mq_rq_update_aborted_gstate(req, 0);
- blk_add_timer(req);
+ spin_lock_irqsave(req->q->queue_lock, flags);
+ if (blk_mq_rq_state(req) != MQ_RQ_COMPLETE_IN_TIMEOUT) {
+ blk_add_timer(req);
+ blk_mq_rq_update_aborted_gstate(req, 0);
+ spin_unlock_irqrestore(req->q->queue_lock, flags);
+ } else {
+ blk_mq_rq_update_state(req, MQ_RQ_IN_FLIGHT);
+ spin_unlock_irqrestore(req->q->queue_lock, flags);
+ __blk_mq_complete_request(req);
+ }
break;
case BLK_EH_NOT_HANDLED:
break;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 88c558f71819..0426d048743d 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -35,6 +35,7 @@ enum mq_rq_state {
MQ_RQ_IDLE = 0,
MQ_RQ_IN_FLIGHT = 1,
MQ_RQ_COMPLETE = 2,
+ MQ_RQ_COMPLETE_IN_TIMEOUT = 3,
MQ_RQ_STATE_BITS = 2,
MQ_RQ_STATE_MASK = (1 << MQ_RQ_STATE_BITS) - 1,
--
2.9.5
* Re: [PATCH V2] blk-mq: fix race between complete and BLK_EH_RESET_TIMER
From: jianchao.wang @ 2018-04-12 2:38 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, linux-block
Cc: Bart Van Assche, Tejun Heo, Christoph Hellwig, Sagi Grimberg,
Israel Rukshin, Max Gurtovoy, stable
Hi Ming
On 04/12/2018 07:38 AM, Ming Lei wrote:
> + *
> + * Cover the complete vs BLK_EH_RESET_TIMER race in the slow path
> + * while holding the queue lock.
> */
> hctx_lock(hctx, &srcu_idx);
> if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
> __blk_mq_complete_request(rq);
> + else {
> + unsigned long flags;
> + bool need_complete = false;
> +
> + spin_lock_irqsave(q->queue_lock, flags);
> + if (!blk_mq_rq_aborted_gstate(rq))
> + need_complete = true;
> + else
> + blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE_IN_TIMEOUT);
> + spin_unlock_irqrestore(q->queue_lock, flags);
What if .timeout() returns BLK_EH_HANDLED during this ?

    timeout context                 irq context
    ---------------                 -----------
    .timeout()
                                    blk_mq_complete_request()
                                      set state to MQ_RQ_COMPLETE_IN_TIMEOUT
    __blk_mq_complete_request()
      WARN_ON_ONCE(blk_mq_rq_state(rq)
                   != MQ_RQ_IN_FLIGHT);

If blk_mq_free_request() then runs, the freed request may end up in
MQ_RQ_COMPLETE_IN_TIMEOUT instead of MQ_RQ_IDLE.
Thanks
Jianchao
* Re: [PATCH V2] blk-mq: fix race between complete and BLK_EH_RESET_TIMER
From: Ming Lei @ 2018-04-12 7:03 UTC (permalink / raw)
To: jianchao.wang
Cc: Jens Axboe, linux-block, Bart Van Assche, Tejun Heo,
Christoph Hellwig, Sagi Grimberg, Israel Rukshin, Max Gurtovoy,
stable
Hi Jianchao,
On Thu, Apr 12, 2018 at 10:38:56AM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 04/12/2018 07:38 AM, Ming Lei wrote:
> > + *
> > + * Cover the complete vs BLK_EH_RESET_TIMER race in the slow path
> > + * while holding the queue lock.
> > */
> > hctx_lock(hctx, &srcu_idx);
> > if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
> > __blk_mq_complete_request(rq);
> > + else {
> > + unsigned long flags;
> > + bool need_complete = false;
> > +
> > + spin_lock_irqsave(q->queue_lock, flags);
> > + if (!blk_mq_rq_aborted_gstate(rq))
> > + need_complete = true;
> > + else
> > + blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE_IN_TIMEOUT);
> > + spin_unlock_irqrestore(q->queue_lock, flags);
>
> What if .timeout() returns BLK_EH_HANDLED during this ?
>
>     timeout context                 irq context
>     ---------------                 -----------
>     .timeout()
>                                     blk_mq_complete_request()
>                                       set state to MQ_RQ_COMPLETE_IN_TIMEOUT
>     __blk_mq_complete_request()
>       WARN_ON_ONCE(blk_mq_rq_state(rq)
>                    != MQ_RQ_IN_FLIGHT);
>
> If blk_mq_free_request() then runs, the freed request may end up in
> MQ_RQ_COMPLETE_IN_TIMEOUT instead of MQ_RQ_IDLE.
Good catch, so this patch has to deal with the race between complete
and BLK_EH_HANDLED too.

Given both of the above handlings are in the slow path, the queue lock
can be used to cover them both atomically, just as is done for
EH_RESET_TIMER.
Will fix it in V3.
Thanks,
Ming
* Re: [PATCH V2] blk-mq: fix race between complete and BLK_EH_RESET_TIMER
From: Ming Lei @ 2018-04-12 9:49 UTC (permalink / raw)
To: jianchao.wang
Cc: Jens Axboe, linux-block, Bart Van Assche, Tejun Heo,
Christoph Hellwig, Sagi Grimberg, Israel Rukshin, Max Gurtovoy,
stable
On Thu, Apr 12, 2018 at 10:38:56AM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 04/12/2018 07:38 AM, Ming Lei wrote:
> > + *
> > + * Cover the complete vs BLK_EH_RESET_TIMER race in the slow path
> > + * while holding the queue lock.
> > */
> > hctx_lock(hctx, &srcu_idx);
> > if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
> > __blk_mq_complete_request(rq);
> > + else {
> > + unsigned long flags;
> > + bool need_complete = false;
> > +
> > + spin_lock_irqsave(q->queue_lock, flags);
> > + if (!blk_mq_rq_aborted_gstate(rq))
> > + need_complete = true;
> > + else
> > + blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE_IN_TIMEOUT);
> > + spin_unlock_irqrestore(q->queue_lock, flags);
>
> What if .timeout() returns BLK_EH_HANDLED during this ?
>
>     timeout context                 irq context
>     ---------------                 -----------
>     .timeout()
>                                     blk_mq_complete_request()
>                                       set state to MQ_RQ_COMPLETE_IN_TIMEOUT
>     __blk_mq_complete_request()
>       WARN_ON_ONCE(blk_mq_rq_state(rq)
>                    != MQ_RQ_IN_FLIGHT);
Thinking about it further: if the normal completion from irq context
can happen after BLK_EH_HANDLED is returned from .timeout(), this
request may be recycled immediately after __blk_mq_complete_request(),
and blk_mq_complete_request() could then complete the new instance
early, so there can be another race between the old normal completion
and the new dispatch.

I guess .timeout() should make sure this can't happen.
--
Ming