* [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Ming Lei @ 2022-10-20  9:10 UTC
  To: Jens Axboe, Christoph Hellwig, Bart Van Assche, djeffery, stefanha
  Cc: ming.lei, linux-block, linux-scsi, virtualization

Hi,

David Jeffery found a double ->queue_rq() issue. So far it can
be triggered in the following two cases:

1) scsi driver in guest kernel

- the cause can be long vmexit latency or long preempt latency of the
vCPU pthread: the IO request times out after blk_mq_start_request() has
been called in ->queue_rq() but before the request is queued to the
hardware, the timeout handler then handles it by requeuing, a second
->queue_rq() runs for the same request, and the kernel panics (see the
sketch below)

2) a burst of kernel messages from an irq handler

For 1), I think it is a reasonable case, given that latency from the host
side can in theory show up at any time, because a vCPU is emulated by an
ordinary host pthread which can be preempted anywhere. For 2), I guess
kernel messages are supposed to be rate limited.
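
To make case 1) concrete, here is a minimal sketch of where the window
opens inside a driver's ->queue_rq() (the demo_* names are made up for
illustration; only the blk_mq_start_request() call is real):

static blk_status_t demo_queue_rq(struct blk_mq_hw_ctx *hctx,
				  const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;

	blk_mq_start_request(rq);	/* deadline armed, rq marked in-flight */

	/*
	 * <-- vCPU stalled here for longer than the request timeout
	 * (long vmexit, or preemption of the vCPU pthread). Meanwhile
	 * blk_mq_timeout_work() fires, the timeout handling requeues the
	 * request, and a second ->queue_rq() can run for the same request
	 * while this one is still in progress.
	 */

	return demo_submit_to_hw(rq);	/* hypothetical hardware submit */
}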

Firstly, is this kind of long (30 sec) random latency while running kernel
code something normal? Or do we need to take care of it? IMO, it looks
plausible in the VM case, but our VM experts may have a better idea about
this situation. Also, the default 30 sec timeout can be reduced via sysfs
or by drivers.

Supposing it is a reasonable report to fix, what is the preferred solution?

So far, it is the driver's responsibility to cover the race between timeout
and completion, so in theory it should be solved in the driver, given that
the driver has enough knowledge.

But it is really a common problem: lots of drivers could have a similar
issue, and it could be hard to fix all affected drivers, so David suggests
the following patch, which drains in-progress ->queue_rq() calls before
timed-out requests are handled. The patch looks reasonable to me too.

Any comments on this issue and the proposed solution?


diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8070b6c10e8d..ca57c060bb65 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1523,7 +1523,12 @@ static void blk_mq_rq_timed_out(struct request *req)
 	blk_add_timer(req);
 }
 
-static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
+struct blk_expired_data {
+	unsigned long next;
+	unsigned long now;
+};
+
+static bool blk_mq_req_expired(struct request *rq, struct blk_expired_data *expired)
 {
 	unsigned long deadline;
 
@@ -1533,13 +1538,13 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
 		return false;
 
 	deadline = READ_ONCE(rq->deadline);
-	if (time_after_eq(jiffies, deadline))
+	if (time_after_eq(expired->now, deadline))
 		return true;
 
-	if (*next == 0)
-		*next = deadline;
-	else if (time_after(*next, deadline))
-		*next = deadline;
+	if (expired->next == 0)
+		expired->next = deadline;
+	else if (time_after(expired->next, deadline))
+		expired->next = deadline;
 	return false;
 }
 
@@ -1555,7 +1560,7 @@ void blk_mq_put_rq_ref(struct request *rq)
 
 static bool blk_mq_check_expired(struct request *rq, void *priv)
 {
-	unsigned long *next = priv;
+	struct blk_expired_data *expired = priv;
 
 	/*
 	 * blk_mq_queue_tag_busy_iter() has locked the request, so it cannot
@@ -1564,7 +1569,7 @@ static bool blk_mq_check_expired(struct request *rq, void *priv)
 	 * it was completed and reallocated as a new request after returning
 	 * from blk_mq_check_expired().
 	 */
-	if (blk_mq_req_expired(rq, next))
+	if (blk_mq_req_expired(rq, expired))
 		blk_mq_rq_timed_out(rq);
 	return true;
 }
@@ -1573,7 +1578,7 @@ static void blk_mq_timeout_work(struct work_struct *work)
 {
 	struct request_queue *q =
 		container_of(work, struct request_queue, timeout_work);
-	unsigned long next = 0;
+	struct blk_expired_data expired = {.next = 0, .now = jiffies};
 	struct blk_mq_hw_ctx *hctx;
 	unsigned long i;
 
@@ -1593,10 +1598,17 @@ static void blk_mq_timeout_work(struct work_struct *work)
 	if (!percpu_ref_tryget(&q->q_usage_counter))
 		return;
 
-	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
+	/* Before walking tags, we must ensure any submit started before the
+	 * current time has finished. Since the submit uses srcu or rcu, wait
+	 * for a synchronization point to ensure all running submits have
+	 * finished
+	 */
+	blk_mq_wait_quiesce_done(q);
+
+	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
 
-	if (next != 0) {
-		mod_timer(&q->timeout, next);
+	if (expired.next != 0) {
+		mod_timer(&q->timeout, expired.next);
 	} else {
 		/*
 		 * Request timeouts are handled as a forward rolling timer. If



Thanks, 
Ming



* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Stefan Hajnoczi @ 2022-10-20 20:01 UTC
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Bart Van Assche, djeffery,
	linux-block, linux-scsi, virtualization


On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> Hi,
> 
> David Jeffery found one double ->queue_rq() issue, so far it can
> be triggered in the following two cases:
> 
> 1) scsi driver in guest kernel
> 
> - the story could be long vmexit latency or long preempt latency of
> vCPU pthread, then IO req is timed out before queuing the request
> to hardware but after calling blk_mq_start_request() during ->queue_rq(),
> then timeout handler handles it by requeue, then double ->queue_rq() is
> caused, and kernel panic
> 
> 2) burst of kernel messages from irq handler 
> 
> For 1), I think it is one reasonable case, given latency from host side
> can come anytime in theory because vCPU is emulated by one normal host
> pthread which can be preempted anywhere. For 2), I guess kernel message is
> supposed to be rate limited.
> 
> Firstly, is this kind of so long(30sec) random latency when running kernel
> code something normal? Or do we need to take care of it? IMO, it looks
> reasonable in case of VM, but our VM experts may have better idea about this
> situation. Also the default 30sec timeout could be reduced via sysfs or
> drivers.

30 seconds is a long latency that does not occur during normal
operation, but unfortunately does happen on occasion.

I think there's an interest in understanding the root cause and solving
long latencies (if possible) in the QEMU/KVM communities. We can
investigate specific cases on kvm@vger.kernel.org and/or
qemu-devel@nongnu.org.

The kernel should be robust in the face of long latencies even if they
are due to issues with hardware or the hypervisor. I'm not familiar
enough with the Linux block layer to say whether the patch below is
correct, but having a solution in place would be good.

> 
> Suppose it is one reasonable report to fix, what is the preferred solution?
> 
> So far, it is driver's responsibility to cover the race between timeout
> and completion, so it is supposed to be solved in driver in theory, given
> driver has enough knowledge.
> 
> But it is really one common problem, lots of driver could have similar
> issue, and could be hard to fix all affected drivers, so David suggests
> the following patch by draining in-progress ->queue_rq() for this issue.
> And the patch looks reasonable too.
> 
> Any comments for this issue and the solution?
> 
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 8070b6c10e8d..ca57c060bb65 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1523,7 +1523,12 @@ static void blk_mq_rq_timed_out(struct request *req)
>  	blk_add_timer(req);
>  }
>  
> -static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
> +struct blk_expired_data {
> +	unsigned long next;
> +	unsigned long now;
> +};
> +
> +static bool blk_mq_req_expired(struct request *rq, struct blk_expired_data *expired)
>  {
>  	unsigned long deadline;
>  
> @@ -1533,13 +1538,13 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
>  		return false;
>  
>  	deadline = READ_ONCE(rq->deadline);
> -	if (time_after_eq(jiffies, deadline))
> +	if (time_after_eq(expired->now, deadline))
>  		return true;
>  
> -	if (*next == 0)
> -		*next = deadline;
> -	else if (time_after(*next, deadline))
> -		*next = deadline;
> +	if (expired->next == 0)
> +		expired->next = deadline;
> +	else if (time_after(expired->next, deadline))
> +		expired->next = deadline;
>  	return false;
>  }
>  
> @@ -1555,7 +1560,7 @@ void blk_mq_put_rq_ref(struct request *rq)
>  
>  static bool blk_mq_check_expired(struct request *rq, void *priv)
>  {
> -	unsigned long *next = priv;
> +	struct blk_expired_data *expired = priv;
>  
>  	/*
>  	 * blk_mq_queue_tag_busy_iter() has locked the request, so it cannot
> @@ -1564,7 +1569,7 @@ static bool blk_mq_check_expired(struct request *rq, void *priv)
>  	 * it was completed and reallocated as a new request after returning
>  	 * from blk_mq_check_expired().
>  	 */
> -	if (blk_mq_req_expired(rq, next))
> +	if (blk_mq_req_expired(rq, expired))
>  		blk_mq_rq_timed_out(rq);
>  	return true;
>  }
> @@ -1573,7 +1578,7 @@ static void blk_mq_timeout_work(struct work_struct *work)
>  {
>  	struct request_queue *q =
>  		container_of(work, struct request_queue, timeout_work);
> -	unsigned long next = 0;
> +	struct blk_expired_data expired = {.next = 0, .now = jiffies};
>  	struct blk_mq_hw_ctx *hctx;
>  	unsigned long i;
>  
> @@ -1593,10 +1598,17 @@ static void blk_mq_timeout_work(struct work_struct *work)
>  	if (!percpu_ref_tryget(&q->q_usage_counter))
>  		return;
>  
> -	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
> +	/* Before walking tags, we must ensure any submit started before the
> +	 * current time has finished. Since the submit uses srcu or rcu, wait
> +	 * for a synchronization point to ensure all running submits have
> +	 * finished
> +	 */
> +	blk_mq_wait_quiesce_done(q);
> +
> +	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
>  
> -	if (next != 0) {
> -		mod_timer(&q->timeout, next);
> +	if (expired.next != 0) {
> +		mod_timer(&q->timeout, expired.next);
>  	} else {
>  		/*
>  		 * Request timeouts are handled as a forward rolling timer. If
> 
> 
> 
> Thanks, 
> Ming
> 


* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Bart Van Assche @ 2022-10-20 20:26 UTC
  To: Ming Lei, Jens Axboe, Christoph Hellwig, djeffery, stefanha
  Cc: linux-block, linux-scsi, virtualization

On 10/20/22 02:10, Ming Lei wrote:
 > [ ... ]

Hi Ming,

Fixing this in the block layer seems fine to me. A few comments:

> +	/* Before walking tags, we must ensure any submit started before the
> +	 * current time has finished. Since the submit uses srcu or rcu, wait
> +	 * for a synchronization point to ensure all running submits have
> +	 * finished
> +	 */

Should the above comment follow the style of other comments in the block 
layer?

> +	blk_mq_wait_quiesce_done(q);
> +
> +	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);

The above doesn't look sufficient to me since .queue_rq() may be called 
while blk_mq_queue_tag_busy_iter() is in progress. How about moving the 
blk_mq_wait_quiesce_done() call into blk_mq_check_expired() and 
preventing new .queue_rq() calls before the timeout handler is called?
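
For the second half, one form this could take in blk_mq_timeout_work()
(only a sketch of the idea, not a tested change):

	/* Block new ->queue_rq() calls and wait for in-flight ones, walk
	 * the tags, then let dispatch resume.
	 */
	blk_mq_quiesce_queue(q);
	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
	blk_mq_unquiesce_queue(q);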

Thanks,

Bart.


* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Ming Lei @ 2022-10-21  0:57 UTC
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, djeffery, stefanha, linux-block,
	linux-scsi, virtualization

On Thu, Oct 20, 2022 at 01:26:48PM -0700, Bart Van Assche wrote:
> On 10/20/22 02:10, Ming Lei wrote:
> > [ ... ]
> 
> Hi Ming,
> 
> Fixing this in the block layer seems fine to me. A few comments:
> 
> > +	/* Before walking tags, we must ensure any submit started before the
> > +	 * current time has finished. Since the submit uses srcu or rcu, wait
> > +	 * for a synchronization point to ensure all running submits have
> > +	 * finished
> > +	 */
> 
> Should the above comment follow the style of other comments in the block
> layer?

OK.

> 
> > +	blk_mq_wait_quiesce_done(q);
> > +
> > +	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
> 
> The above doesn't look sufficient to me since .queue_rq() may be called
> while blk_mq_queue_tag_busy_iter() is in progress. How about moving the
> blk_mq_wait_quiesce_done() call into blk_mq_check_expired() and preventing
> new .queue_rq() calls before the timeout handler is called?

blk_mq_timeout_work() records the time before calling
blk_mq_wait_quiesce_done(), and only handles requests which timed out
before the recorded jiffies, so newly queued requests won't be covered
in this pass.
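
In other words, the ordering the patch relies on is (a sketch, using the
names from the patch above):

	struct blk_expired_data expired = {.next = 0, .now = jiffies};

	/* 1) snapshot the clock first */

	/* 2) then wait for every ->queue_rq() that started before the
	 * snapshot to finish
	 */
	blk_mq_wait_quiesce_done(q);

	/* 3) only requests whose deadline is before or at expired.now are
	 * treated as expired
	 */
	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);

A blk_mq_start_request() that runs after the snapshot arms a deadline in
the future of expired.now, so time_after_eq(expired->now, deadline) is
false and that request is left for the next timeout pass.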

Thanks,
Ming



* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Ming Lei @ 2022-10-21  2:23 UTC
  To: Stefan Hajnoczi
  Cc: Jens Axboe, djeffery, Bart Van Assche, linux-scsi,
	virtualization, linux-block, Christoph Hellwig

On Thu, Oct 20, 2022 at 04:01:11PM -0400, Stefan Hajnoczi wrote:
> On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> > Hi,
> > 
> > David Jeffery found one double ->queue_rq() issue, so far it can
> > be triggered in the following two cases:
> > 
> > 1) scsi driver in guest kernel
> > 
> > - the story could be long vmexit latency or long preempt latency of
> > vCPU pthread, then IO req is timed out before queuing the request
> > to hardware but after calling blk_mq_start_request() during ->queue_rq(),
> > then timeout handler handles it by requeue, then double ->queue_rq() is
> > caused, and kernel panic
> > 
> > 2) burst of kernel messages from irq handler 
> > 
> > For 1), I think it is one reasonable case, given latency from host side
> > can come anytime in theory because vCPU is emulated by one normal host
> > pthread which can be preempted anywhere. For 2), I guess kernel message is
> > supposed to be rate limited.
> > 
> > Firstly, is this kind of so long(30sec) random latency when running kernel
> > code something normal? Or do we need to take care of it? IMO, it looks
> > reasonable in case of VM, but our VM experts may have better idea about this
> > situation. Also the default 30sec timeout could be reduced via sysfs or
> > drivers.
> 
> 30 seconds is a long latency that does not occur during normal
> operation, but unfortunately does happen on occasion.

Thanks for the confirmation!

> 
> I think there's an interest in understanding the root cause and solving
> long latencies (if possible) in the QEMU/KVM communities. We can
> investigate specific cases on kvm@vger.kernel.org and/or
> qemu-devel@nongnu.org.

The issue was originally reported on a VMware VM, but maybe David can
figure out how to trigger it on QEMU/KVM.



Thanks, 
Ming


* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Keith Busch @ 2022-10-21 14:32 UTC
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Bart Van Assche, djeffery,
	stefanha, linux-block, linux-scsi, virtualization

On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> @@ -1593,10 +1598,17 @@ static void blk_mq_timeout_work(struct work_struct *work)
>  	if (!percpu_ref_tryget(&q->q_usage_counter))
>  		return;
>  
> -	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
> +	/* Before walking tags, we must ensure any submit started before the
> +	 * current time has finished. Since the submit uses srcu or rcu, wait
> +	 * for a synchronization point to ensure all running submits have
> +	 * finished
> +	 */
> +	blk_mq_wait_quiesce_done(q);
> +
> +	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);

The blk_mq_wait_quiesce_done() will only wait for tasks that entered
just before calling that function. It will not wait for tasks that
entered immediately after.

If I correctly understand the problem you're describing, the hypervisor
may prevent any guest process from running. If so, the timeout work may
be stalled after the quiesce, and if a queue_rq() process also stalled
after starting quiesce_done(), then we're in the same situation you're
trying to prevent, right?

I agree with your idea that this is a lower level driver responsibility:
it should reclaim all started requests before allowing new queuing.
Perhaps the block layer should also raise a clear warning if it's
queueing a request that's already started.


* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Ming Lei @ 2022-10-21 15:22 UTC
  To: Keith Busch
  Cc: Jens Axboe, Christoph Hellwig, Bart Van Assche, djeffery,
	stefanha, linux-block, linux-scsi, virtualization

On Fri, Oct 21, 2022 at 08:32:31AM -0600, Keith Busch wrote:
> On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> > @@ -1593,10 +1598,17 @@ static void blk_mq_timeout_work(struct work_struct *work)
> >  	if (!percpu_ref_tryget(&q->q_usage_counter))
> >  		return;
> >  
> > -	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
> > +	/* Before walking tags, we must ensure any submit started before the
> > +	 * current time has finished. Since the submit uses srcu or rcu, wait
> > +	 * for a synchronization point to ensure all running submits have
> > +	 * finished
> > +	 */
> > +	blk_mq_wait_quiesce_done(q);
> > +
> > +	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
> 
> The blk_mq_wait_quiesce_done() will only wait for tasks that entered
> just before calling that function. It will not wait for tasks that
> entered immediately after.

Yeah, but the patch records jiffies before calling
blk_mq_wait_quiesce_done(), and only times out requests whose deadline
expired before the recorded time, so it is fine to use
blk_mq_wait_quiesce_done() in this way.

> 
> If I correctly understand the problem you're describing, the hypervisor
> may prevent any guest process from running. If so, the timeout work may
> be stalled after the quiesce, and if a queue_rq() process also stalled
> after starting quiesce_done(), then we're in the same situation you're
> trying to prevent, right?

No, the stall only happens on one vCPU, while the other vCPUs may run
smoothly.

1) a vmexit, which stalls only one vCPU; some vmexits, such as external
interrupts, can come at any time

2) a vCPU is usually emulated by a pthread, which is just an ordinary host
userspace thread; it can be preempted at any time, and the preemption
latency can be long when the host load is heavy

So it is as if a random stall can be injected while running any instruction
of the VM's kernel code.

> 
> I agree with your idea that this is a lower level driver responsibility:
> it should reclaim all started requests before allowing new queuing.
> Perhaps the block layer should also raise a clear warning if it's
> queueing a request that's already started.

The thing is that this is a generic issue: lots of VM-related drivers could
be affected, and it may not be easy for each driver to handle the race
either.



Thanks,
Ming



* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: David Jeffery @ 2022-10-21 18:21 UTC
  To: Keith Busch
  Cc: Ming Lei, Jens Axboe, Christoph Hellwig, Bart Van Assche,
	stefanha, linux-block, linux-scsi, virtualization

On Fri, Oct 21, 2022 at 10:40 AM Keith Busch <kbusch@kernel.org> wrote:
>
> The blk_mq_wait_quiesce_done() will only wait for tasks that entered
> just before calling that function. It will not wait for tasks that
> entered immediately after.

Right. The design depends on saving a jiffies value before the
blk_mq_wait_quiesce_done() call, together with the deadline behavior, to
avoid having to care about queue_rq calls that start after
blk_mq_wait_quiesce_done().

> If I correctly understand the problem you're describing, the hypervisor
> may prevent any guest process from running. If so, the timeout work may
> be stalled after the quiesce, and if a queue_rq() process also stalled
> after starting quiesce_done(), then we're in the same situation you're
> trying to prevent, right?

For handling this condition and avoiding the issue, the code depends on
the deadline value and the rq->state value. blk_mq_start_request() sets
the deadline and then sets MQ_RQ_IN_FLIGHT, while blk_mq_req_expired()
checks for MQ_RQ_IN_FLIGHT and then checks the deadline. A new queue_rq
call would then not count as expired by these checks, either because its
deadline has been updated to lie in the future or because MQ_RQ_IN_FLIGHT
is not yet set.

However, there are no memory barriers ensuring this ordering. So it looks
like it should prevent an issue with a new queue_rq on x86 CPUs, but it
could race on CPUs with weaker memory ordering.
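
One way the ordering could be enforced is a release/acquire pairing on
rq->state (just a sketch of the idea, not a proposed patch):

	/* queueing side, in blk_mq_start_request() */
	rq->deadline = jiffies + rq->timeout;
	smp_store_release(&rq->state, MQ_RQ_IN_FLIGHT);

	/* timeout side, in blk_mq_req_expired() */
	if (smp_load_acquire(&rq->state) != MQ_RQ_IN_FLIGHT)
		return false;
	deadline = READ_ONCE(rq->deadline);

The release store makes the deadline write visible before the state
change, and the acquire load means the timeout side cannot observe
MQ_RQ_IN_FLIGHT together with a stale deadline.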


> I agree with your idea that this is a lower level driver responsibility:
> it should reclaim all started requests before allowing new queuing.
> Perhaps the block layer should also raise a clear warning if it's
> queueing a request that's already started.
>

If I understand what you are saying, that is roughly how the old scsi
timeout handling used to work before the async-abort changes. It
prevented new requests and drained everything (though HBA-wide not
just request_queue-wide) before doing anything with timed out
commands. The current implementation makes avoiding race conditions
more difficult but reduces the performance impact of a lone
misbehaving request.

So a driver under the block layer could potentially avoid the current
races by properly quiescing the request_queue before handling any
timed out requests. But I was trying to come up with a solution which
avoided that requirement.

David Jeffery



* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: David Jeffery @ 2022-10-21 18:33 UTC
  To: Ming Lei
  Cc: Keith Busch, Jens Axboe, Christoph Hellwig, Bart Van Assche,
	stefanha, linux-block, linux-scsi, virtualization

On Fri, Oct 21, 2022 at 11:22 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Fri, Oct 21, 2022 at 08:32:31AM -0600, Keith Busch wrote:
> >
> > I agree with your idea that this is a lower level driver responsibility:
> > it should reclaim all started requests before allowing new queuing.
> > Perhaps the block layer should also raise a clear warning if it's
> > queueing a request that's already started.
>
> The thing is that it is one generic issue, lots of VM drivers could be
> affected, and it may not be easy for drivers to handle the race too.
>

While virtual systems are a common source of the problem, fully
preempt kernels (with or without real-time patches) can also trigger
this condition rather simply with a poorly behaved real-time task. The
involuntary preemption means the queue_rq call can be stopped to let
another task run. Poorly behaving tasks claiming the CPU for longer
than the request timeout when preempting a task in a queue_rq function
could cause the condition on real or virtual hardware. So it's not
just VM-related drivers that are affected by the race.

David Jeffery



* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Ming Lei @ 2022-10-22  4:27 UTC
  To: David Jeffery
  Cc: Keith Busch, Jens Axboe, Christoph Hellwig, Bart Van Assche,
	stefanha, linux-block, linux-scsi, virtualization

On Fri, Oct 21, 2022 at 02:33:21PM -0400, David Jeffery wrote:
> On Fri, Oct 21, 2022 at 11:22 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Fri, Oct 21, 2022 at 08:32:31AM -0600, Keith Busch wrote:
> > >
> > > I agree with your idea that this is a lower level driver responsibility:
> > > it should reclaim all started requests before allowing new queuing.
> > > Perhaps the block layer should also raise a clear warning if it's
> > > queueing a request that's already started.
> >
> > The thing is that it is one generic issue, lots of VM drivers could be
> > affected, and it may not be easy for drivers to handle the race too.
> >
> 
> While virtual systems are a common source of the problem, fully
> preempt kernels (with or without real-time patches) can also trigger
> this condition rather simply with a poorly behaved real-time task. The
> involuntary preemption means the queue_rq call can be stopped to let
> another task run. Poorly behaving tasks claiming the CPU for longer
> than the request timeout when preempting a task in a queue_rq function
> could cause the condition on real or virtual hardware. So it's not
> just VM related drivers that are affected by the race.

In theory, yes. But ->queue_rq() runs inside an RCU read-side critical
section, and CONFIG_RCU_BOOST is usually enabled to cover this kind of
problem, since otherwise an OOM could be triggered easily too.

I guess it is hard to trigger on real hardware with a preempt kernel.


Thanks,
Ming



* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Stefan Hajnoczi @ 2022-10-24 15:30 UTC
  To: Ming Lei
  Cc: Jens Axboe, djeffery, Bart Van Assche, linux-scsi,
	virtualization, linux-block, Christoph Hellwig



On Fri, Oct 21, 2022 at 10:23:57AM +0800, Ming Lei wrote:
> On Thu, Oct 20, 2022 at 04:01:11PM -0400, Stefan Hajnoczi wrote:
> > On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> > > Hi,
> > > 
> > > David Jeffery found one double ->queue_rq() issue, so far it can
> > > be triggered in the following two cases:
> > > 
> > > 1) scsi driver in guest kernel
> > > 
> > > - the story could be long vmexit latency or long preempt latency of
> > > vCPU pthread, then IO req is timed out before queuing the request
> > > to hardware but after calling blk_mq_start_request() during ->queue_rq(),
> > > then timeout handler handles it by requeue, then double ->queue_rq() is
> > > caused, and kernel panic
> > > 
> > > 2) burst of kernel messages from irq handler 
> > > 
> > > For 1), I think it is one reasonable case, given latency from host side
> > > can come anytime in theory because vCPU is emulated by one normal host
> > > pthread which can be preempted anywhere. For 2), I guess kernel message is
> > > supposed to be rate limited.
> > > 
> > > Firstly, is this kind of so long(30sec) random latency when running kernel
> > > code something normal? Or do we need to take care of it? IMO, it looks
> > > reasonable in case of VM, but our VM experts may have better idea about this
> > > situation. Also the default 30sec timeout could be reduced via sysfs or
> > > drivers.
> > 
> > 30 seconds is a long latency that does not occur during normal
> > operation, but unfortunately does happen on occasion.
> 
> Thanks for the confirmation!
> 
> > 
> > I think there's an interest in understanding the root cause and solving
> > long latencies (if possible) in the QEMU/KVM communities. We can
> > investigate specific cases on kvm@vger.kernel.org and/or
> > qemu-devel@nongnu.org.
> 
> The issue was original reported on VMware VM, but maybe David can figure
> out how to trigger it on QEMU/KVM.

A very basic question:

The virtio_blk driver has no q->mq_ops->timeout() callback. Why does the
block layer still enable the timeout mechanism when the driver doesn't
implement ->timeout()?

I saw there was some "idle" hctx logic and I guess the requests are
resubmitted (although it wasn't obvious to me how that happens in the
code)? Maybe that's why the timer is still used if the driver doesn't
care about timeouts...

Stefan


* Re: [Bug] double ->queue_rq() because of timeout in ->queue_rq()
From: Ming Lei @ 2022-10-24 15:41 UTC
  To: Stefan Hajnoczi
  Cc: Jens Axboe, djeffery, Bart Van Assche, linux-scsi,
	virtualization, linux-block, Christoph Hellwig

On Mon, Oct 24, 2022 at 11:30:39AM -0400, Stefan Hajnoczi wrote:
> On Fri, Oct 21, 2022 at 10:23:57AM +0800, Ming Lei wrote:
> > On Thu, Oct 20, 2022 at 04:01:11PM -0400, Stefan Hajnoczi wrote:
> > > On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> > > > Hi,
> > > > 
> > > > David Jeffery found one double ->queue_rq() issue, so far it can
> > > > be triggered in the following two cases:
> > > > 
> > > > 1) scsi driver in guest kernel
> > > > 
> > > > - the story could be long vmexit latency or long preempt latency of
> > > > vCPU pthread, then IO req is timed out before queuing the request
> > > > to hardware but after calling blk_mq_start_request() during ->queue_rq(),
> > > > then timeout handler handles it by requeue, then double ->queue_rq() is
> > > > caused, and kernel panic
> > > > 
> > > > 2) burst of kernel messages from irq handler 
> > > > 
> > > > For 1), I think it is one reasonable case, given latency from host side
> > > > can come anytime in theory because vCPU is emulated by one normal host
> > > > pthread which can be preempted anywhere. For 2), I guess kernel message is
> > > > supposed to be rate limited.
> > > > 
> > > > Firstly, is this kind of so long(30sec) random latency when running kernel
> > > > code something normal? Or do we need to take care of it? IMO, it looks
> > > > reasonable in case of VM, but our VM experts may have better idea about this
> > > > situation. Also the default 30sec timeout could be reduced via sysfs or
> > > > drivers.
> > > 
> > > 30 seconds is a long latency that does not occur during normal
> > > operation, but unfortunately does happen on occasion.
> > 
> > Thanks for the confirmation!
> > 
> > > 
> > > I think there's an interest in understanding the root cause and solving
> > > long latencies (if possible) in the QEMU/KVM communities. We can
> > > investigate specific cases on kvm@vger.kernel.org and/or
> > > qemu-devel@nongnu.org.
> > 
> > The issue was original reported on VMware VM, but maybe David can figure
> > out how to trigger it on QEMU/KVM.
> 
> A very basic question:
> 
> The virtio_blk driver has no q->mq_ops->timeout() callback. Why does the
> block layer still enable the timeout mechanism when the driver doesn't
> implement ->timeout()?

Whether or not ->timeout() is implemented, a request may still time out,
and it is better for the block layer to detect that and simply reset the
timer when there is no ->timeout().

> 
> I saw there was some "idle" hctx logic and I guess the requests are

The timeout timer is reused for idle hctx detection.

> resubmitted (although it wasn't obvious to me how that happens in the
> code)? Maybe that's why the timer is still used if the driver doesn't
> care about timeouts...

Timeout handling is entirely decided by the driver's ->timeout() callback.
If the driver doesn't implement ->timeout(), the request's timer is simply
reset.
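
For reference, the expiry path in blk-mq.c is roughly the following
(paraphrased, so treat it as a sketch rather than the exact source):

	static void blk_mq_rq_timed_out(struct request *req)
	{
		req->rq_flags |= RQF_TIMED_OUT;
		if (req->q->mq_ops->timeout) {
			enum blk_eh_timer_return ret;

			ret = req->q->mq_ops->timeout(req);
			if (ret == BLK_EH_DONE)
				return;	/* driver took over the request */
			WARN_ON_ONCE(ret != BLK_EH_RESET_TIMER);
		}

		blk_add_timer(req);	/* no ->timeout(): just re-arm */
	}

So for virtio_blk the request is neither completed nor requeued here; the
timer is simply re-armed and the request keeps waiting for the host.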



Thanks
Ming
