linux-block.vger.kernel.org archive mirror
* [PATCH V2 0/2] block: fix race between adding wbt and normal IO
@ 2021-06-08  7:19 Ming Lei
  2021-06-08  7:19 ` [PATCH V2 1/2] block: fix race between adding/removing rq qos " Ming Lei
  2021-06-08  7:19 ` [PATCH V2 2/2] block: mark queue init done at the end of blk_register_queue Ming Lei
  0 siblings, 2 replies; 5+ messages in thread
From: Ming Lei @ 2021-06-08  7:19 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: linux-block, Ming Lei, Yi Zhang, Bart Van Assche

Hello,

Yi reported several kernel panics on:

[16687.001777] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008
...
[16687.163549] pc : __rq_qos_track+0x38/0x60

or

[  997.690455] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000020
...
[  997.850347] pc : __rq_qos_done+0x2c/0x50

It turns out they are caused by a race between adding wbt and normal IO.

Fix the issue by freezing the request queue when adding/deleting rq qos.

V2:
	- switch to the approach of freezing queue, which is more generic
	  than V1.


Ming Lei (2):
  block: fix race between adding/removing rq qos and normal IO
  block: mark queue init done at the end of blk_register_queue

 block/blk-rq-qos.h | 13 +++++++++++++
 block/blk-sysfs.c  | 29 +++++++++++++++--------------
 2 files changed, 28 insertions(+), 14 deletions(-)

Cc: Yi Zhang <yi.zhang@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>
-- 
2.31.1


^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH V2 1/2] block: fix race between adding/removing rq qos and normal IO
  2021-06-08  7:19 [PATCH V2 0/2] block: fix race between adding wbt and normal IO Ming Lei
@ 2021-06-08  7:19 ` Ming Lei
  2021-06-08 15:04   ` Bart Van Assche
  2021-06-08  7:19 ` [PATCH V2 2/2] block: mark queue init done at the end of blk_register_queue Ming Lei
  1 sibling, 1 reply; 5+ messages in thread
From: Ming Lei @ 2021-06-08  7:19 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: linux-block, Ming Lei, Yi Zhang, Bart Van Assche

Yi reported several kernel panics on:

[16687.001777] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008
...
[16687.163549] pc : __rq_qos_track+0x38/0x60

or

[  997.690455] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000020
...
[  997.850347] pc : __rq_qos_done+0x2c/0x50

It turns out the panics are caused by a race between adding rq qos (wbt)
and normal IO, because rq_qos_add() can run while IO is being submitted.
Fix this issue by freezing the queue before adding/deleting rq qos.

rq_qos_exit() needn't freeze the queue because it is called after the
queue has already been frozen.

iolatency calls rq_qos_add() while allocating the queue, so freezing
won't add delay there because the queue usage refcount still works in
atomic mode at that time.

iocost calls rq_qos_add() when a cgroup attribute file is written; it is
fine to freeze the queue at that point since we usually freeze the queue
when storing to a queue sysfs attribute, and iocost only exists on the
root cgroup.

wbt_init() calls it from blk_register_queue() and from the queue sysfs
attribute store path (queue_wb_lat_store(), on the first write in the
!BLK_WBT_MQ case); the following patch will speed up the queue freezing
in wbt_init().

Reported-by: Yi Zhang <yi.zhang@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-rq-qos.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 2bc43e94f4c4..c9dccb344312 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -7,6 +7,7 @@
 #include <linux/blk_types.h>
 #include <linux/atomic.h>
 #include <linux/wait.h>
+#include <linux/blk-mq.h>
 
 #include "blk-mq-debugfs.h"
 
@@ -99,8 +100,14 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
 
 static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
 {
+	/*
+	 * No IO can be in-flight when adding rqos, so freeze queue, which
+	 * is fine since we only support rq_qos for blk-mq queue
+	 */
+	blk_mq_freeze_queue(q);
 	rqos->next = q->rq_qos;
 	q->rq_qos = rqos;
+	blk_mq_unfreeze_queue(q);
 
 	if (rqos->ops->debugfs_attrs)
 		blk_mq_debugfs_register_rqos(rqos);
@@ -110,12 +117,18 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
 {
 	struct rq_qos **cur;
 
+	/*
+	 * No IO can be in-flight when removing rqos, so freeze queue,
+	 * which is fine since we only support rq_qos for blk-mq queue
+	 */
+	blk_mq_freeze_queue(q);
 	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
 		if (*cur == rqos) {
 			*cur = rqos->next;
 			break;
 		}
 	}
+	blk_mq_unfreeze_queue(q);
 
 	blk_mq_debugfs_unregister_rqos(rqos);
 }
-- 
2.31.1


^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH V2 2/2] block: mark queue init done at the end of blk_register_queue
  2021-06-08  7:19 [PATCH V2 0/2] block: fix race between adding wbt and normal IO Ming Lei
  2021-06-08  7:19 ` [PATCH V2 1/2] block: fix race between adding/removing rq qos " Ming Lei
@ 2021-06-08  7:19 ` Ming Lei
  1 sibling, 0 replies; 5+ messages in thread
From: Ming Lei @ 2021-06-08  7:19 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: linux-block, Ming Lei, Yi Zhang, Bart Van Assche

Mark queue init done only when everything has completed successfully in
blk_register_queue(), so that wbt_enable_default() can run quickly
without any RCU grace period involved, since adding rq qos requires
freezing the queue.

There is also no side effect from delaying the point at which queue init
is marked done.

Reported-by: Yi Zhang <yi.zhang@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-sysfs.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f89e2fc3963b..370d83c18057 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -866,20 +866,6 @@ int blk_register_queue(struct gendisk *disk)
 		  "%s is registering an already registered queue\n",
 		  kobject_name(&dev->kobj));
 
-	/*
-	 * SCSI probing may synchronously create and destroy a lot of
-	 * request_queues for non-existent devices.  Shutting down a fully
-	 * functional queue takes measureable wallclock time as RCU grace
-	 * periods are involved.  To avoid excessive latency in these
-	 * cases, a request_queue starts out in a degraded mode which is
-	 * faster to shut down and is made fully functional here as
-	 * request_queues for non-existent devices never get registered.
-	 */
-	if (!blk_queue_init_done(q)) {
-		blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
-		percpu_ref_switch_to_percpu(&q->q_usage_counter);
-	}
-
 	blk_queue_update_readahead(q);
 
 	ret = blk_trace_init_sysfs(dev);
@@ -938,6 +924,21 @@ int blk_register_queue(struct gendisk *disk)
 	ret = 0;
 unlock:
 	mutex_unlock(&q->sysfs_dir_lock);
+
+	/*
+	 * SCSI probing may synchronously create and destroy a lot of
+	 * request_queues for non-existent devices.  Shutting down a fully
+	 * functional queue takes measureable wallclock time as RCU grace
+	 * periods are involved.  To avoid excessive latency in these
+	 * cases, a request_queue starts out in a degraded mode which is
+	 * faster to shut down and is made fully functional here as
+	 * request_queues for non-existent devices never get registered.
+	 */
+	if (!blk_queue_init_done(q)) {
+		blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
+		percpu_ref_switch_to_percpu(&q->q_usage_counter);
+	}
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(blk_register_queue);
-- 
2.31.1



* Re: [PATCH V2 1/2] block: fix race between adding/removing rq qos and normal IO
  2021-06-08  7:19 ` [PATCH V2 1/2] block: fix race between adding/removing rq qos " Ming Lei
@ 2021-06-08 15:04   ` Bart Van Assche
  2021-06-09  0:55     ` Ming Lei
  0 siblings, 1 reply; 5+ messages in thread
From: Bart Van Assche @ 2021-06-08 15:04 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, Christoph Hellwig; +Cc: linux-block, Yi Zhang

On 6/8/21 12:19 AM, Ming Lei wrote:
>  static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
>  {
> +	/*
> +	 * No IO can be in-flight when adding rqos, so freeze queue, which
> +	 * is fine since we only support rq_qos for blk-mq queue
> +	 */
> +	blk_mq_freeze_queue(q);
>  	rqos->next = q->rq_qos;
>  	q->rq_qos = rqos;
> +	blk_mq_unfreeze_queue(q);
>  
>  	if (rqos->ops->debugfs_attrs)
>  		blk_mq_debugfs_register_rqos(rqos);
> @@ -110,12 +117,18 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
>  {
>  	struct rq_qos **cur;
>  
> +	/*
> +	 * No IO can be in-flight when removing rqos, so freeze queue,
> +	 * which is fine since we only support rq_qos for blk-mq queue
> +	 */
> +	blk_mq_freeze_queue(q);
>  	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
>  		if (*cur == rqos) {
>  			*cur = rqos->next;
>  			break;
>  		}
>  	}
> +	blk_mq_unfreeze_queue(q);
>  
>  	blk_mq_debugfs_unregister_rqos(rqos);
>  }

Although this patch looks like an improvement to me, I think we also
need protection against concurrent rq_qos_add() and rq_qos_del() calls,
e.g. via a mutex.

Thanks,

Bart.




* Re: [PATCH V2 1/2] block: fix race between adding/removing rq qos and normal IO
  2021-06-08 15:04   ` Bart Van Assche
@ 2021-06-09  0:55     ` Ming Lei
  0 siblings, 0 replies; 5+ messages in thread
From: Ming Lei @ 2021-06-09  0:55 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, Christoph Hellwig, linux-block, Yi Zhang

On Tue, Jun 08, 2021 at 08:04:00AM -0700, Bart Van Assche wrote:
> On 6/8/21 12:19 AM, Ming Lei wrote:
> >  static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
> >  {
> > +	/*
> > +	 * No IO can be in-flight when adding rqos, so freeze queue, which
> > +	 * is fine since we only support rq_qos for blk-mq queue
> > +	 */
> > +	blk_mq_freeze_queue(q);
> >  	rqos->next = q->rq_qos;
> >  	q->rq_qos = rqos;
> > +	blk_mq_unfreeze_queue(q);
> >  
> >  	if (rqos->ops->debugfs_attrs)
> >  		blk_mq_debugfs_register_rqos(rqos);
> > @@ -110,12 +117,18 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
> >  {
> >  	struct rq_qos **cur;
> >  
> > +	/*
> > +	 * No IO can be in-flight when removing rqos, so freeze queue,
> > +	 * which is fine since we only support rq_qos for blk-mq queue
> > +	 */
> > +	blk_mq_freeze_queue(q);
> >  	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
> >  		if (*cur == rqos) {
> >  			*cur = rqos->next;
> >  			break;
> >  		}
> >  	}
> > +	blk_mq_unfreeze_queue(q);
> >  
> >  	blk_mq_debugfs_unregister_rqos(rqos);
> >  }
> 
> Although this patch looks like an improvement to me, I think we also
> need protection against concurrent rq_qos_add() and rq_qos_del() calls,
> e.g. via a mutex.

Fine, one spinlock should be enough, will do it in V3.


Thanks,
Ming
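For readers following the exchange above, here is a minimal userspace sketch of the serialization that Bart suggests and that Ming plans for V3: the rqos list add/del protected by one lock. The struct and function names merely mirror the kernel code and are not the real kernel API; a pthread mutex stands in for the proposed spinlock, and queue freezing is omitted since it has no userspace analogue here.

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel structures; not the real API. */
struct rq_qos {
	struct rq_qos *next;
	int id;
};

struct request_queue {
	struct rq_qos *rq_qos;		/* head of the rqos chain */
	pthread_mutex_t qos_lock;	/* stand-in for the V3 lock */
};

static void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
{
	/* The lock serializes concurrent add/del on the chain. */
	pthread_mutex_lock(&q->qos_lock);
	rqos->next = q->rq_qos;		/* push onto the head */
	q->rq_qos = rqos;
	pthread_mutex_unlock(&q->qos_lock);
}

static void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
{
	struct rq_qos **cur;

	pthread_mutex_lock(&q->qos_lock);
	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
		if (*cur == rqos) {
			*cur = rqos->next;	/* unlink the entry */
			break;
		}
	}
	pthread_mutex_unlock(&q->qos_lock);
}
```

The list walk and unlink match the patch hunk quoted above; the only addition is the lock, which closes the add-vs-del race that queue freezing alone does not cover.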


