From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
	Bart Van Assche <bvanassche@acm.org>,
	djeffery@redhat.com, stefanha@redhat.com
Cc: ming.lei@redhat.com, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Subject: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
Date: Thu, 20 Oct 2022 17:10:13 +0800
Message-ID: <Y1EQdafQlKNAsutk@T590>

Hi,

David Jeffery found a double ->queue_rq() issue; so far it can
be triggered in the following two cases:

1) scsi driver in guest kernel

- A long vmexit latency, or a long preemption latency of the vCPU
pthread, can cause the IO request to time out after blk_mq_start_request()
has been called during ->queue_rq() but before the request is queued to
the hardware. The timeout handler then handles it by requeueing, which
leads to a double ->queue_rq() and a kernel panic (see the sketch below).

2) a burst of kernel messages from an irq handler

For 1), I think it is a reasonable case, given that latency from the host
side can in theory happen at any time, because the vCPU is emulated by an
ordinary host pthread which can be preempted anywhere. For 2), I guess
kernel messages are supposed to be rate limited.
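
To make the window in 1) concrete, here is a minimal sketch of how such a
->queue_rq() looks from the driver side. demo_queue_rq() and demo_hw_submit()
are made-up names for illustration only, not taken from any real driver:

#include <linux/blk-mq.h>

static blk_status_t demo_queue_rq(struct blk_mq_hw_ctx *hctx,
				  const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;

	blk_mq_start_request(rq);	/* timeout deadline is armed from here on */

	/*
	 * If the vCPU pthread is preempted here (or a vmexit takes very
	 * long) for longer than the request timeout, the timeout handler
	 * can expire the request and requeue it before this code resumes,
	 * so the same request goes through ->queue_rq() a second time.
	 */

	demo_hw_submit(rq);		/* hypothetical hardware submission */
	return BLK_STS_OK;
}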

Firstly, is this kind of long (30 sec) random latency while running kernel
code something normal, or do we need to take care of it? IMO, it looks
plausible in the VM case, but our VM experts may have a better idea about
this situation. Also, the default 30 sec timeout can be reduced via sysfs
or by drivers.
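
For reference, the driver-side knob is blk_queue_rq_timeout(); a minimal
sketch (demo_set_io_timeout() is a made-up helper, and the 10-second value
is just an arbitrary example) could look like:

#include <linux/blkdev.h>

/* illustrative only: shrink the default 30 sec timeout at queue setup time */
static void demo_set_io_timeout(struct request_queue *q)
{
	blk_queue_rq_timeout(q, 10 * HZ);
}

The sysfs counterpart should be /sys/block/<dev>/queue/io_timeout.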

Supposing it is a reasonable report to fix, what is the preferred solution?

So far, it is the driver's responsibility to cover the race between timeout
and completion, so in theory it is supposed to be solved in the driver,
given that the driver has enough knowledge.
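
As a rough illustration only, a per-driver defence could be a ->timeout
callback along these lines; demo_cmd_reached_hw() is a made-up helper, and
the exact ->timeout prototype depends on the kernel version:

static enum blk_eh_timer_return demo_timeout(struct request *rq)
{
	/*
	 * If ->queue_rq() has started the request but the command has not
	 * reached the hardware yet, don't requeue or complete it from the
	 * timeout path; just give it more time.
	 */
	if (!demo_cmd_reached_hw(rq))
		return BLK_EH_RESET_TIMER;

	/* otherwise the driver owns recovery of the genuinely stuck command */
	return BLK_EH_DONE;
}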

But it is really a common problem: lots of drivers could have a similar
issue, and it could be hard to fix all affected drivers. So David suggests
addressing it with the following patch, which drains in-progress
->queue_rq() calls. The patch looks reasonable to me too.

Any comments on this issue and the proposed solution?


diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8070b6c10e8d..ca57c060bb65 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1523,7 +1523,12 @@ static void blk_mq_rq_timed_out(struct request *req)
 	blk_add_timer(req);
 }
 
-static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
+struct blk_expired_data {
+	unsigned long next;
+	unsigned long now;
+};
+
+static bool blk_mq_req_expired(struct request *rq, struct blk_expired_data *expired)
 {
 	unsigned long deadline;
 
@@ -1533,13 +1538,13 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
 		return false;
 
 	deadline = READ_ONCE(rq->deadline);
-	if (time_after_eq(jiffies, deadline))
+	if (time_after_eq(expired->now, deadline))
 		return true;
 
-	if (*next == 0)
-		*next = deadline;
-	else if (time_after(*next, deadline))
-		*next = deadline;
+	if (expired->next == 0)
+		expired->next = deadline;
+	else if (time_after(expired->next, deadline))
+		expired->next = deadline;
 	return false;
 }
 
@@ -1555,7 +1560,7 @@ void blk_mq_put_rq_ref(struct request *rq)
 
 static bool blk_mq_check_expired(struct request *rq, void *priv)
 {
-	unsigned long *next = priv;
+	struct blk_expired_data *expired = priv;
 
 	/*
 	 * blk_mq_queue_tag_busy_iter() has locked the request, so it cannot
@@ -1564,7 +1569,7 @@ static bool blk_mq_check_expired(struct request *rq, void *priv)
 	 * it was completed and reallocated as a new request after returning
 	 * from blk_mq_check_expired().
 	 */
-	if (blk_mq_req_expired(rq, next))
+	if (blk_mq_req_expired(rq, expired))
 		blk_mq_rq_timed_out(rq);
 	return true;
 }
@@ -1573,7 +1578,7 @@ static void blk_mq_timeout_work(struct work_struct *work)
 {
 	struct request_queue *q =
 		container_of(work, struct request_queue, timeout_work);
-	unsigned long next = 0;
+	struct blk_expired_data expired = {.next = 0, .now = jiffies};
 	struct blk_mq_hw_ctx *hctx;
 	unsigned long i;
 
@@ -1593,10 +1598,17 @@ static void blk_mq_timeout_work(struct work_struct *work)
 	if (!percpu_ref_tryget(&q->q_usage_counter))
 		return;
 
-	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
+	/* Before walking tags, we must ensure any submit started before the
+	 * current time has finished. Since the submit uses srcu or rcu, wait
+	 * for a synchronization point to ensure all running submits have
+	 * finished
+	 */
+	blk_mq_wait_quiesce_done(q);
+
+	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
 
-	if (next != 0) {
-		mod_timer(&q->timeout, next);
+	if (expired.next != 0) {
+		mod_timer(&q->timeout, expired.next);
 	} else {
 		/*
 		 * Request timeouts are handled as a forward rolling timer. If



Thanks, 
Ming


Thread overview: 21+ messages
2022-10-20  9:10 Ming Lei [this message]
2022-10-20  9:10 ` [Bug] double ->queue_rq() because of timeout in ->queue_rq() Ming Lei
2022-10-20 20:01 ` Stefan Hajnoczi
2022-10-20 20:01   ` Stefan Hajnoczi
2022-10-21  2:23   ` Ming Lei
2022-10-21  2:23     ` Ming Lei
2022-10-24 15:30     ` Stefan Hajnoczi
2022-10-24 15:30       ` Stefan Hajnoczi
2022-10-24 15:41       ` Ming Lei
2022-10-24 15:41         ` Ming Lei
2022-10-20 20:26 ` Bart Van Assche
2022-10-20 20:26   ` Bart Van Assche
2022-10-21  0:57   ` Ming Lei
2022-10-21  0:57     ` Ming Lei
2022-10-21 14:32 ` Keith Busch
2022-10-21 15:22   ` Ming Lei
2022-10-21 15:22     ` Ming Lei
2022-10-21 18:33     ` David Jeffery
2022-10-22  4:27       ` Ming Lei
2022-10-22  4:27         ` Ming Lei
2022-10-21 18:21   ` David Jeffery
