linux-kernel.vger.kernel.org archive mirror
* [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular
@ 2022-02-15 12:36 Wang Jianchao (Kuaishou)
  2022-02-15 12:37 ` [RFC V2 1/6] blk: make blk-rq-qos support pluggable and modular policy Wang Jianchao (Kuaishou)
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Wang Jianchao (Kuaishou) @ 2022-02-15 12:36 UTC (permalink / raw)
  To: Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block, linux-kernel

Hi Jens

blk-rq-qos is a standalone framework, independent of the I/O schedulers,
that can be used to control or observe I/O progress in the block layer
through hooks. It is a great design, but right now it is completely fixed
and built-in, which shuts out people who want to use it from an external
module.

This patchset attempts to make the blk-rq-qos framework pluggable and
modular. With it, a blk-rq-qos policy module can be updated without
stopping the IO workload, a new policy can be introduced on old machines
without upgrading the kernel, and all blk-rq-qos policies can be disabled
when none of them is needed; when the request_queue.rqos list is empty,
no CPU cycles are wasted on the hooks.
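
To give a feel for the new interface, a minimal policy built on top of it
could look roughly like the sketch below. The "sample" policy, its
per-queue structure and its empty throttle hook are made up for
illustration; only the rq_qos_ops fields and the rq_qos_register(),
rq_qos_activate(), rq_qos_deactivate() and rq_qos_unregister() calls come
from this series, and the sketch assumes it is built somewhere
block/blk-rq-qos.h is reachable.

  #include <linux/module.h>
  #include <linux/slab.h>
  #include <linux/blkdev.h>
  #include "blk-rq-qos.h"

  struct sample_qos {
  	struct rq_qos rqos;	/* embedded so container_of() works */
  };

  static void sample_throttle(struct rq_qos *rqos, struct bio *bio)
  {
  	/* account or throttle the bio here */
  }

  static void sample_exit(struct rq_qos *rqos)
  {
  	struct sample_qos *sq = container_of(rqos, struct sample_qos, rqos);

  	/* unhook from q->rq_qos, then free the per-queue data */
  	rq_qos_deactivate(rqos);
  	kfree(sq);
  }

  static int sample_init(struct request_queue *q);

  static struct rq_qos_ops sample_ops = {
  	.owner		= THIS_MODULE,
  	.name		= "sample",
  	.throttle	= sample_throttle,
  	.exit		= sample_exit,
  	.init		= sample_init,
  };

  /* called when "+sample" is written to /sys/block/<dev>/queue/qos */
  static int sample_init(struct request_queue *q)
  {
  	struct sample_qos *sq = kzalloc(sizeof(*sq), GFP_KERNEL);

  	if (!sq)
  		return -ENOMEM;

  	rq_qos_activate(q, &sq->rqos, &sample_ops);
  	return 0;
  }

  static int __init sample_mod_init(void)
  {
  	return rq_qos_register(&sample_ops);
  }

  static void __exit sample_mod_exit(void)
  {
  	rq_qos_unregister(&sample_ops);
  }

  module_init(sample_mod_init);
  module_exit(sample_mod_exit);
  MODULE_LICENSE("GPL");

Once such a module is loaded, writing "+sample" to
/sys/block/<dev>/queue/qos attaches the policy to that queue through its
.init callback and "-sample" detaches it again through .exit; the
currently enabled policies show up in the same sysfs file.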

Changes since v1:
 - Only make iocost and iolatency pluggable, so that those interfaces
   need not be exported
 - Remove the iostat rqos policy
 - Rename the blk-ioprio module to io-prio to avoid renaming the
   blk-ioprio.c file

Wang Jianchao (Kuaishou) (6):
  blk: make blk-rq-qos support pluggable and modular policy
  blk-wbt: make wbt pluggable
  blk-iolatency: make iolatency pluggable
  blk-iocost: make iocost pluggable
  blk-ioprio: make ioprio pluggable and modular
  blk: remove unused interfaces of blk-rq-qos

 block/Kconfig          |   2 +-
 block/Makefile         |   3 +-
 block/blk-cgroup.c     |  11 ---
 block/blk-core.c       |   2 +
 block/blk-iocost.c     |  59 ++++++++------
 block/blk-iolatency.c  |  33 ++++++--
 block/blk-ioprio.c     |  50 +++++++-----
 block/blk-ioprio.h     |  19 -----
 block/blk-mq-debugfs.c |  18 +----
 block/blk-rq-qos.c     | 312 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 block/blk-rq-qos.h     | 104 +++++++++++--------------
 block/blk-sysfs.c      |   9 +--
 block/blk-wbt.c        |  36 +++++++--
 block/blk-wbt.h        |   8 +-
 block/blk.h            |   6 --
 block/elevator.c       |   3 +
 block/genhd.c          |   2 -
 include/linux/blkdev.h |   4 +
 18 files changed, 501 insertions(+), 180 deletions(-)



* [RFC V2 1/6] blk: make blk-rq-qos support pluggable and modular policy
  2022-02-15 12:36 [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Wang Jianchao (Kuaishou)
@ 2022-02-15 12:37 ` Wang Jianchao (Kuaishou)
  2022-02-15 21:23   ` Bart Van Assche
  2022-02-15 12:37 ` [RFC V2 2/6] blk-wbt: make wbt pluggable Wang Jianchao (Kuaishou)
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Wang Jianchao (Kuaishou) @ 2022-02-15 12:37 UTC (permalink / raw)
  To: Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block, linux-kernel

blk-rq-qos is a standalone framework, independent of the I/O
schedulers, that can be used to control or observe I/O progress in
the block layer through hooks. It is a great design, but right now
it is completely fixed and built-in, which shuts out people who
want to use it from an external module.

This patch makes the blk-rq-qos policies pluggable and modular:
(1) Add code to maintain the rq_qos_ops. An rq-qos module needs to
    register itself with rq_qos_register(). The original enum
    rq_qos_id will be removed in the following patches; policies
    will use a dynamic id maintained by rq_qos_ida instead.
(2) Add an .init callback to rq_qos_ops. It is used to initialize
    the resources of the policy.
(3) Add /sys/block/<dev>/queue/qos. Writing '+name' or '-name' to it
    enables or disables the corresponding blk-rq-qos policy.

Because the rq-qos list can be modified at any time, rq_qos_id(),
which has been renamed to rq_qos_by_id(), has to iterate the list
under sysfs_lock or queue_lock; this patch adapts the callers
accordingly. For more details, refer to the comment above
rq_qos_get(). In addition, rq_qos_exit() is moved to
blk_cleanup_queue(). Apart from these modifications, there is no
other functional change here. The following patches adapt the code
of wbt, iolatency, iocost and ioprio one by one to make them
pluggable and modular.
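
As an illustration of the new reference pair (case (4) in the comment
added above rq_qos_get()), a cgroup-side handler is expected to pin
the rqos while it touches the policy data, roughly as ioc_qos_write()
does after this patch. The sketch below is only an illustration:
example_update_params() and example_id are hypothetical, only
rq_qos_get()/rq_qos_put() and the rq_qos fields come from this patch.

  #include <linux/blkdev.h>
  #include "blk-rq-qos.h"

  /* Pin a policy instance while reading its state. */
  static int example_update_params(struct request_queue *q, int example_id)
  {
  	struct rq_qos *rqos;

  	/* Returns NULL if the policy is absent or being torn down. */
  	rqos = rq_qos_get(q, example_id);
  	if (!rqos)
  		return -EOPNOTSUPP;

  	/*
  	 * rq_qos_deactivate() waits until the refcount drops back to 1,
  	 * so the policy data cannot go away under us here.
  	 */
  	pr_info("qos policy %s is active on this queue\n", rqos->ops->name);

  	/* Drop the reference; this may wake a pending deactivate. */
  	rq_qos_put(rqos);
  	return 0;
  }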

Signed-off-by: Wang Jianchao (Kuaishou) <jianchao.wan9@gmail.com>
---
 block/blk-core.c       |   2 +
 block/blk-iocost.c     |  22 ++-
 block/blk-mq-debugfs.c |   4 +-
 block/blk-rq-qos.c     | 312 ++++++++++++++++++++++++++++++++++++++++-
 block/blk-rq-qos.h     |  55 +++++++-
 block/blk-sysfs.c      |   2 +
 block/blk-wbt.c        |   6 +-
 block/elevator.c       |   3 +
 block/genhd.c          |   2 -
 include/linux/blkdev.h |   4 +
 10 files changed, 395 insertions(+), 17 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d93e3bb9a769..00352bcba32d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -50,6 +50,7 @@
 #include "blk-mq-sched.h"
 #include "blk-pm.h"
 #include "blk-throttle.h"
+#include "blk-rq-qos.h"
 
 struct dentry *blk_debugfs_root;
 
@@ -337,6 +338,7 @@ void blk_cleanup_queue(struct request_queue *q)
 	 * it is safe to free requests now.
 	 */
 	mutex_lock(&q->sysfs_lock);
+	rq_qos_exit(q);
 	if (q->elevator)
 		blk_mq_sched_free_rqs(q);
 	mutex_unlock(&q->sysfs_lock);
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 769b64394298..54d6c93090ba 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -662,7 +662,7 @@ static struct ioc *rqos_to_ioc(struct rq_qos *rqos)
 
 static struct ioc *q_to_ioc(struct request_queue *q)
 {
-	return rqos_to_ioc(rq_qos_id(q, RQ_QOS_COST));
+	return rqos_to_ioc(rq_qos_by_id(q, RQ_QOS_COST));
 }
 
 static const char *q_name(struct request_queue *q)
@@ -3162,6 +3162,7 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 			     size_t nbytes, loff_t off)
 {
 	struct block_device *bdev;
+	struct rq_qos *rqos;
 	struct ioc *ioc;
 	u32 qos[NR_QOS_PARAMS];
 	bool enable, user;
@@ -3172,14 +3173,15 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 	if (IS_ERR(bdev))
 		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(bdev_get_queue(bdev));
-	if (!ioc) {
+	rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
+	if (!rqos) {
 		ret = blk_iocost_init(bdev_get_queue(bdev));
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(bdev_get_queue(bdev));
+		rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
 	}
 
+	ioc = rqos_to_ioc(rqos);
 	spin_lock_irq(&ioc->lock);
 	memcpy(qos, ioc->params.qos, sizeof(qos));
 	enable = ioc->enabled;
@@ -3272,10 +3274,12 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
+	rq_qos_put(rqos);
 	blkdev_put_no_open(bdev);
 	return nbytes;
 einval:
 	ret = -EINVAL;
+	rq_qos_put(rqos);
 err:
 	blkdev_put_no_open(bdev);
 	return ret;
@@ -3329,6 +3333,7 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 				    size_t nbytes, loff_t off)
 {
 	struct block_device *bdev;
+	struct rq_qos *rqos;
 	struct ioc *ioc;
 	u64 u[NR_I_LCOEFS];
 	bool user;
@@ -3339,14 +3344,15 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 	if (IS_ERR(bdev))
 		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(bdev_get_queue(bdev));
-	if (!ioc) {
+	rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
+	if (!rqos) {
 		ret = blk_iocost_init(bdev_get_queue(bdev));
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(bdev_get_queue(bdev));
+		rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
 	}
 
+	ioc = rqos_to_ioc(rqos);
 	spin_lock_irq(&ioc->lock);
 	memcpy(u, ioc->params.i_lcoefs, sizeof(u));
 	user = ioc->user_cost_model;
@@ -3397,11 +3403,13 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
+	rq_qos_put(rqos);
 	blkdev_put_no_open(bdev);
 	return nbytes;
 
 einval:
 	ret = -EINVAL;
+	rq_qos_put(rqos);
 err:
 	blkdev_put_no_open(bdev);
 	return ret;
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 3a790eb4995c..30f064872581 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -844,7 +844,9 @@ void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 {
 	struct request_queue *q = rqos->q;
-	const char *dir_name = rq_qos_id_to_name(rqos->id);
+	const char *dir_name;
+
+	dir_name = rqos->ops->name ? rqos->ops->name : rq_qos_id_to_name(rqos->id);
 
 	if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs)
 		return;
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index e83af7bc7591..a94ff872722b 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -2,6 +2,11 @@
 
 #include "blk-rq-qos.h"
 
+static DEFINE_IDA(rq_qos_ida);
+static int nr_rqos_blkcg_pols;
+static DEFINE_MUTEX(rq_qos_mutex);
+static LIST_HEAD(rq_qos_list);
+
 /*
  * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
  * false if 'v' + 1 would be bigger than 'below'.
@@ -294,11 +299,316 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
 
 void rq_qos_exit(struct request_queue *q)
 {
-	blk_mq_debugfs_unregister_queue_rqos(q);
+	WARN_ON(!mutex_is_locked(&q->sysfs_lock));
 
 	while (q->rq_qos) {
 		struct rq_qos *rqos = q->rq_qos;
 		q->rq_qos = rqos->next;
+		if (rqos->ops->owner)
+			module_put(rqos->ops->owner);
 		rqos->ops->exit(rqos);
 	}
+	blk_mq_debugfs_unregister_queue_rqos(q);
+}
+
+/*
+ * After the pluggable blk-qos, rqos's life cycle become complicated,
+ * qos switching path can add/delete rqos to/from request_queue
+ * under sysfs_lock and queue_lock. There are following places
+ * may access rqos through rq_qos_by_id() concurrently:
+ * (1) normal IO path, under q_usage_counter,
+ * (2) queue sysfs interfaces, under sysfs_lock,
+ * (3) blkg_create, the .pd_init_fn() may access rqos, under queue_lock,
+ * (4) cgroup file, such as ioc_cost_model_write,
+ *
+ * (1)(2)(3) are definitely safe. case (4) is tricky. rq_qos_get() is
+ * for the case.
+ */
+struct rq_qos *rq_qos_get(struct request_queue *q, int id)
+{
+	struct rq_qos *rqos;
+
+	spin_lock_irq(&q->queue_lock);
+	rqos = rq_qos_by_id(q, id);
+	if (rqos && rqos->dying)
+		rqos = NULL;
+	if (rqos)
+		refcount_inc(&rqos->ref);
+	spin_unlock_irq(&q->queue_lock);
+	return rqos;
+}
+EXPORT_SYMBOL_GPL(rq_qos_get);
+
+void rq_qos_put(struct rq_qos *rqos)
+{
+	struct request_queue *q = rqos->q;
+
+	spin_lock_irq(&q->queue_lock);
+	refcount_dec(&rqos->ref);
+	if (rqos->dying)
+		wake_up(&rqos->waitq);
+	spin_unlock_irq(&q->queue_lock);
+}
+EXPORT_SYMBOL_GPL(rq_qos_put);
+
+void rq_qos_activate(struct request_queue *q,
+		struct rq_qos *rqos, const struct rq_qos_ops *ops)
+{
+	struct rq_qos *pos;
+	bool rq_alloc_time = false;
+
+	WARN_ON(!mutex_is_locked(&q->sysfs_lock));
+
+	rqos->dying = false;
+	refcount_set(&rqos->ref, 1);
+	init_waitqueue_head(&rqos->waitq);
+	rqos->id = ops->id;
+	rqos->ops = ops;
+	rqos->q = q;
+	rqos->next = NULL;
+
+	spin_lock_irq(&q->queue_lock);
+	pos = q->rq_qos;
+	if (pos) {
+		while (pos->next) {
+			if (pos->ops->flags & RQOS_FLAG_RQ_ALLOC_TIME)
+				rq_alloc_time = true;
+			pos = pos->next;
+		}
+		pos->next = rqos;
+	} else {
+		q->rq_qos = rqos;
+	}
+	if (ops->flags & RQOS_FLAG_RQ_ALLOC_TIME &&
+	    !rq_alloc_time)
+		blk_queue_flag_set(QUEUE_FLAG_RQ_ALLOC_TIME, q);
+
+	spin_unlock_irq(&q->queue_lock);
+
+	if (rqos->ops->debugfs_attrs)
+		blk_mq_debugfs_register_rqos(rqos);
+}
+EXPORT_SYMBOL_GPL(rq_qos_activate);
+
+void rq_qos_deactivate(struct rq_qos *rqos)
+{
+	struct request_queue *q = rqos->q;
+	struct rq_qos **cur, *pos;
+	bool rq_alloc_time = false;
+
+	WARN_ON(!mutex_is_locked(&q->sysfs_lock));
+
+	spin_lock_irq(&q->queue_lock);
+	rqos->dying = true;
+	/*
+	 * Drain all of the usage of get/put_rqos()
+	 */
+	wait_event_lock_irq(rqos->waitq,
+		refcount_read(&rqos->ref) == 1, q->queue_lock);
+	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
+		if (*cur == rqos) {
+			*cur = rqos->next;
+			break;
+		}
+	}
+
+	pos = q->rq_qos;
+	while (pos && pos->next) {
+		if (pos->ops->flags & RQOS_FLAG_RQ_ALLOC_TIME)
+			rq_alloc_time = true;
+		pos = pos->next;
+	}
+
+	if (rqos->ops->flags & RQOS_FLAG_RQ_ALLOC_TIME &&
+	    !rq_alloc_time)
+		blk_queue_flag_clear(QUEUE_FLAG_RQ_ALLOC_TIME, q);
+
+	spin_unlock_irq(&q->queue_lock);
+	blk_mq_debugfs_unregister_rqos(rqos);
+}
+EXPORT_SYMBOL_GPL(rq_qos_deactivate);
+
+static struct rq_qos_ops *rq_qos_find_by_name(const char *name)
+{
+	struct rq_qos_ops *pos;
+
+	list_for_each_entry(pos, &rq_qos_list, node) {
+		if (!strncmp(pos->name, name, strlen(pos->name)))
+			return pos;
+	}
+
+	return NULL;
+}
+
+int rq_qos_register(struct rq_qos_ops *ops)
+{
+	int ret, start;
+
+	mutex_lock(&rq_qos_mutex);
+
+	if (rq_qos_find_by_name(ops->name)) {
+		ret = -EEXIST;
+		goto out;
+	}
+
+	if (ops->flags & RQOS_FLAG_CGRP_POL &&
+	    nr_rqos_blkcg_pols >= (BLKCG_MAX_POLS - BLKCG_NON_RQOS_POLS)) {
+		ret = -ENOSPC;
+		goto out;
+	}
+
+	start = RQ_QOS_IOPRIO + 1;
+	ret = ida_simple_get(&rq_qos_ida, start, INT_MAX, GFP_KERNEL);
+	if (ret < 0)
+		goto out;
+
+	if (ops->flags & RQOS_FLAG_CGRP_POL)
+		nr_rqos_blkcg_pols++;
+
+	ops->id = ret;
+	ret = 0;
+	INIT_LIST_HEAD(&ops->node);
+	list_add_tail(&ops->node, &rq_qos_list);
+out:
+	mutex_unlock(&rq_qos_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(rq_qos_register);
+
+void rq_qos_unregister(struct rq_qos_ops *ops)
+{
+	mutex_lock(&rq_qos_mutex);
+
+	if (ops->flags & RQOS_FLAG_CGRP_POL)
+		nr_rqos_blkcg_pols--;
+	list_del_init(&ops->node);
+	ida_simple_remove(&rq_qos_ida, ops->id);
+	mutex_unlock(&rq_qos_mutex);
+}
+EXPORT_SYMBOL_GPL(rq_qos_unregister);
+
+ssize_t queue_qos_show(struct request_queue *q, char *buf)
+{
+	struct rq_qos_ops *ops;
+	struct rq_qos *rqos;
+	int ret = 0;
+
+	mutex_lock(&rq_qos_mutex);
+	/*
+	 * Show the policies in the order of being invoked
+	 */
+	for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
+		if (!rqos->ops->name)
+			continue;
+		ret += sprintf(buf + ret, "[%s] ", rqos->ops->name);
+	}
+	list_for_each_entry(ops, &rq_qos_list, node) {
+		if (!rq_qos_by_name(q, ops->name))
+			ret += sprintf(buf + ret, "%s ", ops->name);
+	}
+
+	ret--; /* overwrite the last space */
+	ret += sprintf(buf + ret, "\n");
+	mutex_unlock(&rq_qos_mutex);
+
+	return ret;
+}
+
+int rq_qos_switch(struct request_queue *q,
+		const struct rq_qos_ops *ops,
+		struct rq_qos *rqos)
+{
+	int ret;
+
+	WARN_ON(!mutex_is_locked(&q->sysfs_lock));
+
+	blk_mq_freeze_queue(q);
+	if (!rqos) {
+		ret = ops->init(q);
+	} else {
+		ops->exit(rqos);
+		ret = 0;
+	}
+	blk_mq_unfreeze_queue(q);
+
+	return ret;
+}
+
+ssize_t queue_qos_store(struct request_queue *q, const char *page,
+			  size_t count)
+{
+	const struct rq_qos_ops *ops;
+	struct rq_qos *rqos;
+	const char *qosname;
+	char *buf;
+	bool add;
+	int ret;
+
+	buf = kstrdup(page, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	buf = strim(buf);
+	if (buf[0] != '+' && buf[0] != '-') {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	add = buf[0] == '+';
+	qosname = buf + 1;
+
+	rqos = rq_qos_by_name(q, qosname);
+	if ((buf[0] == '+' && rqos)) {
+		ret = -EEXIST;
+		goto out;
+	}
+
+	if ((buf[0] == '-' && !rqos)) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	mutex_lock(&rq_qos_mutex);
+	if (add) {
+		ops = rq_qos_find_by_name(qosname);
+		if (!ops) {
+			/*
+			 * module_init callback may request this mutex
+			 */
+			mutex_unlock(&rq_qos_mutex);
+			request_module("%s", qosname);
+			mutex_lock(&rq_qos_mutex);
+			ops = rq_qos_find_by_name(qosname);
+		}
+	} else {
+		ops = rqos->ops;
+	}
+
+	if (!ops) {
+		ret = -EINVAL;
+	} else if (ops->owner && !try_module_get(ops->owner)) {
+		ops = NULL;
+		ret = -EAGAIN;
+	}
+	mutex_unlock(&rq_qos_mutex);
+
+	if (!ops)
+		goto out;
+
+	if (add) {
+		ret = rq_qos_switch(q, ops, NULL);
+		if (!ret && ops->owner)
+			__module_get(ops->owner);
+	} else {
+		rq_qos_switch(q, ops, rqos);
+		ret = 0;
+		if (ops->owner)
+			module_put(ops->owner);
+	}
+
+	if (ops->owner)
+		module_put(ops->owner);
+out:
+	kfree(buf);
+	return ret ? ret : count;
 }
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 3cfbc8668cba..c2b9b41f8fd4 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -26,7 +26,10 @@ struct rq_wait {
 };
 
 struct rq_qos {
-	struct rq_qos_ops *ops;
+	refcount_t ref;
+	wait_queue_head_t waitq;
+	bool dying;
+	const struct rq_qos_ops *ops;
 	struct request_queue *q;
 	enum rq_qos_id id;
 	struct rq_qos *next;
@@ -35,7 +38,17 @@ struct rq_qos {
 #endif
 };
 
+enum {
+	RQOS_FLAG_CGRP_POL = 1 << 0,
+	RQOS_FLAG_RQ_ALLOC_TIME = 1 << 1
+};
+
 struct rq_qos_ops {
+	struct list_head node;
+	struct module *owner;
+	const char *name;
+	int flags;
+	int id;
 	void (*throttle)(struct rq_qos *, struct bio *);
 	void (*track)(struct rq_qos *, struct request *, struct bio *);
 	void (*merge)(struct rq_qos *, struct request *, struct bio *);
@@ -46,6 +59,7 @@ struct rq_qos_ops {
 	void (*cleanup)(struct rq_qos *, struct bio *);
 	void (*queue_depth_changed)(struct rq_qos *);
 	void (*exit)(struct rq_qos *);
+	int (*init)(struct request_queue *);
 	const struct blk_mq_debugfs_attr *debugfs_attrs;
 };
 
@@ -59,10 +73,12 @@ struct rq_depth {
 	unsigned int default_depth;
 };
 
-static inline struct rq_qos *rq_qos_id(struct request_queue *q,
-				       enum rq_qos_id id)
+static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
 {
 	struct rq_qos *rqos;
+
+	WARN_ON(!mutex_is_locked(&q->sysfs_lock) && !spin_is_locked(&q->queue_lock));
+
 	for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
 		if (rqos->id == id)
 			break;
@@ -72,12 +88,12 @@ static inline struct rq_qos *rq_qos_id(struct request_queue *q,
 
 static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)
 {
-	return rq_qos_id(q, RQ_QOS_WBT);
+	return rq_qos_by_id(q, RQ_QOS_WBT);
 }
 
 static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
 {
-	return rq_qos_id(q, RQ_QOS_LATENCY);
+	return rq_qos_by_id(q, RQ_QOS_LATENCY);
 }
 
 static inline void rq_wait_init(struct rq_wait *rq_wait)
@@ -132,6 +148,35 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
 	blk_mq_debugfs_unregister_rqos(rqos);
 }
 
+int rq_qos_register(struct rq_qos_ops *ops);
+void rq_qos_unregister(struct rq_qos_ops *ops);
+void rq_qos_activate(struct request_queue *q,
+		struct rq_qos *rqos, const struct rq_qos_ops *ops);
+void rq_qos_deactivate(struct rq_qos *rqos);
+ssize_t queue_qos_show(struct request_queue *q, char *buf);
+ssize_t queue_qos_store(struct request_queue *q, const char *page,
+			  size_t count);
+struct rq_qos *rq_qos_get(struct request_queue *q, int id);
+void rq_qos_put(struct rq_qos *rqos);
+
+static inline struct rq_qos *rq_qos_by_name(struct request_queue *q,
+		const char *name)
+{
+	struct rq_qos *rqos;
+
+	WARN_ON(!mutex_is_locked(&q->sysfs_lock));
+
+	for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
+		if (!rqos->ops->name)
+			continue;
+
+		if (!strncmp(rqos->ops->name, name,
+					strlen(rqos->ops->name)))
+			return rqos;
+	}
+	return NULL;
+}
+
 typedef bool (acquire_inflight_cb_t)(struct rq_wait *rqw, void *private_data);
 typedef void (cleanup_cb_t)(struct rq_wait *rqw, void *private_data);
 
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 9f32882ceb2f..c02747db4e3b 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -574,6 +574,7 @@ QUEUE_RO_ENTRY(queue_max_segments, "max_segments");
 QUEUE_RO_ENTRY(queue_max_integrity_segments, "max_integrity_segments");
 QUEUE_RO_ENTRY(queue_max_segment_size, "max_segment_size");
 QUEUE_RW_ENTRY(elv_iosched, "scheduler");
+QUEUE_RW_ENTRY(queue_qos, "qos");
 
 QUEUE_RO_ENTRY(queue_logical_block_size, "logical_block_size");
 QUEUE_RO_ENTRY(queue_physical_block_size, "physical_block_size");
@@ -633,6 +634,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_max_integrity_segments_entry.attr,
 	&queue_max_segment_size_entry.attr,
 	&elv_iosched_entry.attr,
+	&queue_qos_entry.attr,
 	&queue_hw_sector_size_entry.attr,
 	&queue_logical_block_size_entry.attr,
 	&queue_physical_block_size_entry.attr,
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 0c119be0e813..88265ae4fa41 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -628,9 +628,13 @@ static void wbt_requeue(struct rq_qos *rqos, struct request *rq)
 
 void wbt_set_write_cache(struct request_queue *q, bool write_cache_on)
 {
-	struct rq_qos *rqos = wbt_rq_qos(q);
+	struct rq_qos *rqos;
+
+	spin_lock_irq(&q->queue_lock);
+	rqos = wbt_rq_qos(q);
 	if (rqos)
 		RQWB(rqos)->wc = write_cache_on;
+	spin_unlock_irq(&q->queue_lock);
 }
 
 /*
diff --git a/block/elevator.c b/block/elevator.c
index ec98aed39c4f..dd8b3fbc34fe 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -705,12 +705,15 @@ void elevator_init_mq(struct request_queue *q)
 	 * requests, then no need to quiesce queue which may add long boot
 	 * latency, especially when lots of disks are involved.
 	 */
+
+	mutex_lock(&q->sysfs_lock);
 	blk_mq_freeze_queue(q);
 	blk_mq_cancel_work_sync(q);
 
 	err = blk_mq_init_sched(q, e);
 
 	blk_mq_unfreeze_queue(q);
+	mutex_unlock(&q->sysfs_lock);
 
 	if (err) {
 		pr_warn("\"%s\" elevator initialization failed, "
diff --git a/block/genhd.c b/block/genhd.c
index 626c8406f21a..fe7d4b169a1d 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -621,8 +621,6 @@ void del_gendisk(struct gendisk *disk)
 	device_del(disk_to_dev(disk));
 
 	blk_mq_freeze_queue_wait(q);
-
-	rq_qos_exit(q);
 	blk_sync_queue(q);
 	blk_flush_integrity();
 	/*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f35aea98bc35..d5698a7cda67 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -44,6 +44,10 @@ struct blk_crypto_profile;
  * Defined here to simplify include dependency.
  */
 #define BLKCG_MAX_POLS		6
+/*
+ * Non blk-rq-qos blkcg policies include blk-throttle and bfq
+ */
+#define BLKCG_NON_RQOS_POLS		2
 
 static inline int blk_validate_block_size(unsigned long bsize)
 {
-- 
2.17.1



* [RFC V2 2/6] blk-wbt: make wbt pluggable
  2022-02-15 12:36 [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Wang Jianchao (Kuaishou)
  2022-02-15 12:37 ` [RFC V2 1/6] blk: make blk-rq-qos support pluggable and modular policy Wang Jianchao (Kuaishou)
@ 2022-02-15 12:37 ` Wang Jianchao (Kuaishou)
  2022-02-15 12:37 ` [RFC V2 3/6] blk-iolatency: make iolatency pluggable Wang Jianchao (Kuaishou)
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Wang Jianchao (Kuaishou) @ 2022-02-15 12:37 UTC (permalink / raw)
  To: Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block, linux-kernel

This patch makes wbt pluggable through /sys/block/xxx/queue/qos. As a
consequence, writing queue/wb_lat_usec no longer sets up wbt on demand
and returns -EOPNOTSUPP when wbt is not enabled, and
wbt_enable_default() enables wbt through rq_qos_switch() instead of
calling wbt_init() directly.

Signed-off-by: Wang Jianchao (Kuaishou) <jianchao.wan9@gmail.com>
---
 block/blk-mq-debugfs.c |  2 --
 block/blk-rq-qos.h     |  8 ++------
 block/blk-sysfs.c      |  7 ++-----
 block/blk-wbt.c        | 30 +++++++++++++++++++++++++-----
 block/blk-wbt.h        |  8 ++++----
 5 files changed, 33 insertions(+), 22 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 30f064872581..c7b17576a65f 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -823,8 +823,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
 	switch (id) {
-	case RQ_QOS_WBT:
-		return "wbt";
 	case RQ_QOS_LATENCY:
 		return "latency";
 	case RQ_QOS_COST:
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index c2b9b41f8fd4..de82eb951bdd 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,6 @@
 struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_WBT,
 	RQ_QOS_LATENCY,
 	RQ_QOS_COST,
 	RQ_QOS_IOPRIO,
@@ -86,11 +85,6 @@ static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
 	return rqos;
 }
 
-static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)
-{
-	return rq_qos_by_id(q, RQ_QOS_WBT);
-}
-
 static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
 {
 	return rq_qos_by_id(q, RQ_QOS_LATENCY);
@@ -158,6 +152,8 @@ ssize_t queue_qos_store(struct request_queue *q, const char *page,
 			  size_t count);
 struct rq_qos *rq_qos_get(struct request_queue *q, int id);
 void rq_qos_put(struct rq_qos *rqos);
+int rq_qos_switch(struct request_queue *q, const struct rq_qos_ops *ops,
+		struct rq_qos *rqos);
 
 static inline struct rq_qos *rq_qos_by_name(struct request_queue *q,
 		const char *name)
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index c02747db4e3b..f9021cf10d06 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -483,11 +483,8 @@ static ssize_t queue_wb_lat_store(struct request_queue *q, const char *page,
 		return -EINVAL;
 
 	rqos = wbt_rq_qos(q);
-	if (!rqos) {
-		ret = wbt_init(q);
-		if (ret)
-			return ret;
-	}
+	if (!rqos)
+		return -EOPNOTSUPP;
 
 	if (val == -1)
 		val = wbt_default_latency_nsec(q);
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 88265ae4fa41..ce4b41e50564 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -31,6 +31,13 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/wbt.h>
 
+static struct rq_qos_ops wbt_rqos_ops;
+
+struct rq_qos *wbt_rq_qos(struct request_queue *q)
+{
+	return rq_qos_by_id(q, wbt_rqos_ops.id);
+}
+
 static inline void wbt_clear_state(struct request *rq)
 {
 	rq->wbt_flags = 0;
@@ -656,7 +663,7 @@ void wbt_enable_default(struct request_queue *q)
 		return;
 
 	if (queue_is_mq(q) && IS_ENABLED(CONFIG_BLK_WBT_MQ))
-		wbt_init(q);
+		rq_qos_switch(q, &wbt_rqos_ops, NULL);
 }
 EXPORT_SYMBOL_GPL(wbt_enable_default);
 
@@ -696,6 +703,7 @@ static void wbt_exit(struct rq_qos *rqos)
 	struct rq_wb *rwb = RQWB(rqos);
 	struct request_queue *q = rqos->q;
 
+	rq_qos_deactivate(rqos);
 	blk_stat_remove_callback(q, rwb->cb);
 	blk_stat_free_callback(rwb->cb);
 	kfree(rwb);
@@ -806,7 +814,9 @@ static const struct blk_mq_debugfs_attr wbt_debugfs_attrs[] = {
 };
 #endif
 
+int wbt_init(struct request_queue *q);
 static struct rq_qos_ops wbt_rqos_ops = {
+	.name = "wbt",
 	.throttle = wbt_wait,
 	.issue = wbt_issue,
 	.track = wbt_track,
@@ -815,6 +825,7 @@ static struct rq_qos_ops wbt_rqos_ops = {
 	.cleanup = wbt_cleanup,
 	.queue_depth_changed = wbt_queue_depth_changed,
 	.exit = wbt_exit,
+	.init = wbt_init,
 #ifdef CONFIG_BLK_DEBUG_FS
 	.debugfs_attrs = wbt_debugfs_attrs,
 #endif
@@ -838,9 +849,6 @@ int wbt_init(struct request_queue *q)
 	for (i = 0; i < WBT_NUM_RWQ; i++)
 		rq_wait_init(&rwb->rq_wait[i]);
 
-	rwb->rqos.id = RQ_QOS_WBT;
-	rwb->rqos.ops = &wbt_rqos_ops;
-	rwb->rqos.q = q;
 	rwb->last_comp = rwb->last_issue = jiffies;
 	rwb->win_nsec = RWB_WINDOW_NSEC;
 	rwb->enable_state = WBT_STATE_ON_DEFAULT;
@@ -850,7 +858,7 @@ int wbt_init(struct request_queue *q)
 	/*
 	 * Assign rwb and add the stats callback.
 	 */
-	rq_qos_add(q, &rwb->rqos);
+	rq_qos_activate(q, &rwb->rqos, &wbt_rqos_ops);
 	blk_stat_add_callback(q, rwb->cb);
 
 	rwb->min_lat_nsec = wbt_default_latency_nsec(q);
@@ -860,3 +868,15 @@ int wbt_init(struct request_queue *q)
 
 	return 0;
 }
+
+static __init int wbt_mod_init(void)
+{
+	return rq_qos_register(&wbt_rqos_ops);
+}
+
+static __exit void wbt_mod_exit(void)
+{
+	return rq_qos_unregister(&wbt_rqos_ops);
+}
+module_init(wbt_mod_init);
+module_exit(wbt_mod_exit);
diff --git a/block/blk-wbt.h b/block/blk-wbt.h
index 2eb01becde8c..72e9602df330 100644
--- a/block/blk-wbt.h
+++ b/block/blk-wbt.h
@@ -88,7 +88,7 @@ static inline unsigned int wbt_inflight(struct rq_wb *rwb)
 
 #ifdef CONFIG_BLK_WBT
 
-int wbt_init(struct request_queue *);
+struct rq_qos *wbt_rq_qos(struct request_queue *q);
 void wbt_disable_default(struct request_queue *);
 void wbt_enable_default(struct request_queue *);
 
@@ -101,12 +101,12 @@ u64 wbt_default_latency_nsec(struct request_queue *);
 
 #else
 
-static inline void wbt_track(struct request *rq, enum wbt_flags flags)
+static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)
 {
+	return NULL;
 }
-static inline int wbt_init(struct request_queue *q)
+static inline void wbt_track(struct request *rq, enum wbt_flags flags)
 {
-	return -EINVAL;
 }
 static inline void wbt_disable_default(struct request_queue *q)
 {
-- 
2.17.1



* [RFC V2 3/6] blk-iolatency: make iolatency pluggable
  2022-02-15 12:36 [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Wang Jianchao (Kuaishou)
  2022-02-15 12:37 ` [RFC V2 1/6] blk: make blk-rq-qos support pluggable and modular policy Wang Jianchao (Kuaishou)
  2022-02-15 12:37 ` [RFC V2 2/6] blk-wbt: make wbt pluggable Wang Jianchao (Kuaishou)
@ 2022-02-15 12:37 ` Wang Jianchao (Kuaishou)
  2022-02-15 12:37 ` [RFC V2 4/6] blk-iocost: make iocost pluggable Wang Jianchao (Kuaishou)
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Wang Jianchao (Kuaishou) @ 2022-02-15 12:37 UTC (permalink / raw)
  To: Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block, linux-kernel

Make blk-iolatency pluggable. Then we can enable or disable it through
/sys/block/xxx/queue/qos instead of setting it up unconditionally from
blkcg_init_queue().

Signed-off-by: Wang Jianchao (Kuaishou) <jianchao.wan9@gmail.com>
---
 block/blk-cgroup.c     |  6 ------
 block/blk-iolatency.c  | 33 +++++++++++++++++++++++++--------
 block/blk-mq-debugfs.c |  2 --
 block/blk-rq-qos.h     |  6 ------
 block/blk.h            |  6 ------
 5 files changed, 25 insertions(+), 28 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 650f7e27989f..3ae2aa557aef 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1203,12 +1203,6 @@ int blkcg_init_queue(struct request_queue *q)
 	if (ret)
 		goto err_destroy_all;
 
-	ret = blk_iolatency_init(q);
-	if (ret) {
-		blk_throtl_exit(q);
-		goto err_destroy_all;
-	}
-
 	return 0;
 
 err_destroy_all:
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 6593c7123b97..b0596a7a35f0 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -90,6 +90,12 @@ struct blk_iolatency {
 	atomic_t enabled;
 };
 
+static struct rq_qos_ops blkcg_iolatency_ops;
+static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
+{
+	return rq_qos_by_id(q, blkcg_iolatency_ops.id);
+}
+
 static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
 {
 	return container_of(rqos, struct blk_iolatency, rqos);
@@ -646,13 +652,18 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos)
 
 	del_timer_sync(&blkiolat->timer);
 	blkcg_deactivate_policy(rqos->q, &blkcg_policy_iolatency);
+	rq_qos_deactivate(rqos);
 	kfree(blkiolat);
 }
 
+static int blk_iolatency_init(struct request_queue *q);
 static struct rq_qos_ops blkcg_iolatency_ops = {
+	.name = "iolat",
+	.flags = RQOS_FLAG_CGRP_POL,
 	.throttle = blkcg_iolatency_throttle,
 	.done_bio = blkcg_iolatency_done_bio,
 	.exit = blkcg_iolatency_exit,
+	.init = blk_iolatency_init,
 };
 
 static void blkiolatency_timer_fn(struct timer_list *t)
@@ -727,15 +738,10 @@ int blk_iolatency_init(struct request_queue *q)
 		return -ENOMEM;
 
 	rqos = &blkiolat->rqos;
-	rqos->id = RQ_QOS_LATENCY;
-	rqos->ops = &blkcg_iolatency_ops;
-	rqos->q = q;
-
-	rq_qos_add(q, rqos);
-
+	rq_qos_activate(q, rqos, &blkcg_iolatency_ops);
 	ret = blkcg_activate_policy(q, &blkcg_policy_iolatency);
 	if (ret) {
-		rq_qos_del(q, rqos);
+		rq_qos_deactivate(rqos);
 		kfree(blkiolat);
 		return ret;
 	}
@@ -1046,12 +1052,23 @@ static struct blkcg_policy blkcg_policy_iolatency = {
 
 static int __init iolatency_init(void)
 {
-	return blkcg_policy_register(&blkcg_policy_iolatency);
+	int ret;
+
+	ret = rq_qos_register(&blkcg_iolatency_ops);
+	if (ret)
+		return ret;
+
+	ret = blkcg_policy_register(&blkcg_policy_iolatency);
+	if (ret)
+		rq_qos_unregister(&blkcg_iolatency_ops);
+
+	return ret;
 }
 
 static void __exit iolatency_exit(void)
 {
 	blkcg_policy_unregister(&blkcg_policy_iolatency);
+	rq_qos_unregister(&blkcg_iolatency_ops);
 }
 
 module_init(iolatency_init);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index c7b17576a65f..8faa5c5e25be 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -823,8 +823,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
 	switch (id) {
-	case RQ_QOS_LATENCY:
-		return "latency";
 	case RQ_QOS_COST:
 		return "cost";
 	case RQ_QOS_IOPRIO:
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index de82eb951bdd..6ca46c69e325 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,6 @@
 struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_LATENCY,
 	RQ_QOS_COST,
 	RQ_QOS_IOPRIO,
 };
@@ -85,11 +84,6 @@ static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
 	return rqos;
 }
 
-static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
-{
-	return rq_qos_by_id(q, RQ_QOS_LATENCY);
-}
-
 static inline void rq_wait_init(struct rq_wait *rq_wait)
 {
 	atomic_set(&rq_wait->inflight, 0);
diff --git a/block/blk.h b/block/blk.h
index 8bd43b3ad33d..1a314257b6a3 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -400,12 +400,6 @@ static inline void blk_queue_bounce(struct request_queue *q, struct bio **bio)
 		__blk_queue_bounce(q, bio);	
 }
 
-#ifdef CONFIG_BLK_CGROUP_IOLATENCY
-extern int blk_iolatency_init(struct request_queue *q);
-#else
-static inline int blk_iolatency_init(struct request_queue *q) { return 0; }
-#endif
-
 struct bio *blk_next_bio(struct bio *bio, unsigned int nr_pages, gfp_t gfp);
 
 #ifdef CONFIG_BLK_DEV_ZONED
-- 
2.17.1



* [RFC V2 4/6] blk-iocost: make iocost pluggable
  2022-02-15 12:36 [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Wang Jianchao (Kuaishou)
                   ` (2 preceding siblings ...)
  2022-02-15 12:37 ` [RFC V2 3/6] blk-iolatency: make iolatency pluggable Wang Jianchao (Kuaishou)
@ 2022-02-15 12:37 ` Wang Jianchao (Kuaishou)
  2022-02-15 12:37 ` [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular Wang Jianchao (Kuaishou)
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Wang Jianchao (Kuaishou) @ 2022-02-15 12:37 UTC (permalink / raw)
  To: Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block, linux-kernel

Make blk-iocost pluggable. Then we can enable or disable it through
/sys/block/xxx/queue/qos. With this change, writing io.cost.qos or
io.cost.model no longer initializes iocost on demand and returns
-EOPNOTSUPP when iocost is not enabled on the queue; the
QUEUE_FLAG_RQ_ALLOC_TIME flag is now managed by the rq-qos core via
RQOS_FLAG_RQ_ALLOC_TIME.

Signed-off-by: Wang Jianchao (Kuaishou) <jianchao.wan9@gmail.com>
---
 block/blk-iocost.c     | 47 ++++++++++++++++++++++++------------------
 block/blk-mq-debugfs.c |  2 --
 block/blk-rq-qos.h     |  1 -
 3 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 54d6c93090ba..9d82f1002de9 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -660,9 +660,10 @@ static struct ioc *rqos_to_ioc(struct rq_qos *rqos)
 	return container_of(rqos, struct ioc, rqos);
 }
 
+static struct rq_qos_ops ioc_rqos_ops;
 static struct ioc *q_to_ioc(struct request_queue *q)
 {
-	return rqos_to_ioc(rq_qos_by_id(q, RQ_QOS_COST));
+	return rqos_to_ioc(rq_qos_by_id(q, ioc_rqos_ops.id));
 }
 
 static const char *q_name(struct request_queue *q)
@@ -2810,6 +2811,7 @@ static void ioc_rqos_exit(struct rq_qos *rqos)
 	struct ioc *ioc = rqos_to_ioc(rqos);
 
 	blkcg_deactivate_policy(rqos->q, &blkcg_policy_iocost);
+	rq_qos_deactivate(rqos);
 
 	spin_lock_irq(&ioc->lock);
 	ioc->running = IOC_STOP;
@@ -2820,13 +2822,17 @@ static void ioc_rqos_exit(struct rq_qos *rqos)
 	kfree(ioc);
 }
 
+static int blk_iocost_init(struct request_queue *q);
 static struct rq_qos_ops ioc_rqos_ops = {
+	.name = "iocost",
+	.flags = RQOS_FLAG_CGRP_POL | RQOS_FLAG_RQ_ALLOC_TIME,
 	.throttle = ioc_rqos_throttle,
 	.merge = ioc_rqos_merge,
 	.done_bio = ioc_rqos_done_bio,
 	.done = ioc_rqos_done,
 	.queue_depth_changed = ioc_rqos_queue_depth_changed,
 	.exit = ioc_rqos_exit,
+	.init = blk_iocost_init,
 };
 
 static int blk_iocost_init(struct request_queue *q)
@@ -2856,10 +2862,7 @@ static int blk_iocost_init(struct request_queue *q)
 	}
 
 	rqos = &ioc->rqos;
-	rqos->id = RQ_QOS_COST;
-	rqos->ops = &ioc_rqos_ops;
-	rqos->q = q;
-
+	rq_qos_activate(q, rqos, &ioc_rqos_ops);
 	spin_lock_init(&ioc->lock);
 	timer_setup(&ioc->timer, ioc_timer_fn, 0);
 	INIT_LIST_HEAD(&ioc->active_iocgs);
@@ -2883,10 +2886,9 @@ static int blk_iocost_init(struct request_queue *q)
 	 * called before policy activation completion, can't assume that the
 	 * target bio has an iocg associated and need to test for NULL iocg.
 	 */
-	rq_qos_add(q, rqos);
 	ret = blkcg_activate_policy(q, &blkcg_policy_iocost);
 	if (ret) {
-		rq_qos_del(q, rqos);
+		rq_qos_deactivate(rqos);
 		free_percpu(ioc->pcpu_stat);
 		kfree(ioc);
 		return ret;
@@ -3173,12 +3175,10 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 	if (IS_ERR(bdev))
 		return PTR_ERR(bdev);
 
-	rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
+	rqos = rq_qos_get(bdev_get_queue(bdev), ioc_rqos_ops.id);
 	if (!rqos) {
-		ret = blk_iocost_init(bdev_get_queue(bdev));
-		if (ret)
-			goto err;
-		rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
+		ret = -EOPNOTSUPP;
+		goto err;
 	}
 
 	ioc = rqos_to_ioc(rqos);
@@ -3257,10 +3257,8 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 
 	if (enable) {
 		blk_stat_enable_accounting(ioc->rqos.q);
-		blk_queue_flag_set(QUEUE_FLAG_RQ_ALLOC_TIME, ioc->rqos.q);
 		ioc->enabled = true;
 	} else {
-		blk_queue_flag_clear(QUEUE_FLAG_RQ_ALLOC_TIME, ioc->rqos.q);
 		ioc->enabled = false;
 	}
 
@@ -3344,12 +3342,10 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 	if (IS_ERR(bdev))
 		return PTR_ERR(bdev);
 
-	rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
+	rqos = rq_qos_get(bdev_get_queue(bdev), ioc_rqos_ops.id);
 	if (!rqos) {
-		ret = blk_iocost_init(bdev_get_queue(bdev));
-		if (ret)
-			goto err;
-		rqos = rq_qos_get(bdev_get_queue(bdev), RQ_QOS_COST);
+		ret = -EOPNOTSUPP;
+		goto err;
 	}
 
 	ioc = rqos_to_ioc(rqos);
@@ -3449,12 +3445,23 @@ static struct blkcg_policy blkcg_policy_iocost = {
 
 static int __init ioc_init(void)
 {
-	return blkcg_policy_register(&blkcg_policy_iocost);
+	int ret;
+
+	ret = rq_qos_register(&ioc_rqos_ops);
+	if (ret)
+		return ret;
+
+	ret = blkcg_policy_register(&blkcg_policy_iocost);
+	if (ret)
+		rq_qos_unregister(&ioc_rqos_ops);
+
+	return ret;
 }
 
 static void __exit ioc_exit(void)
 {
 	blkcg_policy_unregister(&blkcg_policy_iocost);
+	rq_qos_unregister(&ioc_rqos_ops);
 }
 
 module_init(ioc_init);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 8faa5c5e25be..9bb1dabc223b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -823,8 +823,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
 	switch (id) {
-	case RQ_QOS_COST:
-		return "cost";
 	case RQ_QOS_IOPRIO:
 		return "ioprio";
 	}
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 6ca46c69e325..4eef53f2c290 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,6 @@
 struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_COST,
 	RQ_QOS_IOPRIO,
 };
 
-- 
2.17.1



* [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular
  2022-02-15 12:36 [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Wang Jianchao (Kuaishou)
                   ` (3 preceding siblings ...)
  2022-02-15 12:37 ` [RFC V2 4/6] blk-iocost: make iocost pluggable Wang Jianchao (Kuaishou)
@ 2022-02-15 12:37 ` Wang Jianchao (Kuaishou)
  2022-02-15 21:26   ` Bart Van Assche
  2022-02-15 12:37 ` [RFC V2 6/6] blk: remove unused interfaces of blk-rq-qos Wang Jianchao (Kuaishou)
  2022-02-15 13:01 ` [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Chaitanya Kulkarni
  6 siblings, 1 reply; 15+ messages in thread
From: Wang Jianchao (Kuaishou) @ 2022-02-15 12:37 UTC (permalink / raw)
  To: Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block, linux-kernel

Make blk-ioprio pluggable and modular. Then we can enable or disable
it through /sys/block/xxx/queue/qos, and rmmod the module when it is
not needed, which releases one blkcg policy slot.

Signed-off-by: Wang Jianchao (Kuaishou) <jianchao.wan9@gmail.com>
---
 block/Kconfig          |  2 +-
 block/Makefile         |  3 ++-
 block/blk-cgroup.c     |  5 -----
 block/blk-ioprio.c     | 50 ++++++++++++++++++++++++++++--------------
 block/blk-ioprio.h     | 19 ----------------
 block/blk-mq-debugfs.c |  4 ----
 block/blk-rq-qos.c     |  2 +-
 block/blk-rq-qos.h     |  2 +-
 8 files changed, 38 insertions(+), 49 deletions(-)
 delete mode 100644 block/blk-ioprio.h

diff --git a/block/Kconfig b/block/Kconfig
index d5d4197b7ed2..9cc8e4688953 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -145,7 +145,7 @@ config BLK_CGROUP_IOCOST
 	their share of the overall weight distribution.
 
 config BLK_CGROUP_IOPRIO
-	bool "Cgroup I/O controller for assigning an I/O priority class"
+	tristate "Cgroup I/O controller for assigning an I/O priority class"
 	depends on BLK_CGROUP
 	help
 	Enable the .prio interface for assigning an I/O priority class to
diff --git a/block/Makefile b/block/Makefile
index f38eaa612929..f6a3995af285 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -17,7 +17,8 @@ obj-$(CONFIG_BLK_DEV_BSGLIB)	+= bsg-lib.o
 obj-$(CONFIG_BLK_CGROUP)	+= blk-cgroup.o
 obj-$(CONFIG_BLK_CGROUP_RWSTAT)	+= blk-cgroup-rwstat.o
 obj-$(CONFIG_BLK_DEV_THROTTLING)	+= blk-throttle.o
-obj-$(CONFIG_BLK_CGROUP_IOPRIO)	+= blk-ioprio.o
+io-prio-y 			:= blk-ioprio.o
+obj-$(CONFIG_BLK_CGROUP_IOPRIO)	+= io-prio.o
 obj-$(CONFIG_BLK_CGROUP_IOLATENCY)	+= blk-iolatency.o
 obj-$(CONFIG_BLK_CGROUP_IOCOST)	+= blk-iocost.o
 obj-$(CONFIG_MQ_IOSCHED_DEADLINE)	+= mq-deadline.o
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 3ae2aa557aef..f617f7ba311d 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -32,7 +32,6 @@
 #include <linux/psi.h>
 #include <linux/part_stat.h>
 #include "blk.h"
-#include "blk-ioprio.h"
 #include "blk-throttle.h"
 
 /*
@@ -1195,10 +1194,6 @@ int blkcg_init_queue(struct request_queue *q)
 	if (preloaded)
 		radix_tree_preload_end();
 
-	ret = blk_ioprio_init(q);
-	if (ret)
-		goto err_destroy_all;
-
 	ret = blk_throtl_init(q);
 	if (ret)
 		goto err_destroy_all;
diff --git a/block/blk-ioprio.c b/block/blk-ioprio.c
index 2e7f10e1c03f..074cc0978d0b 100644
--- a/block/blk-ioprio.c
+++ b/block/blk-ioprio.c
@@ -17,7 +17,6 @@
 #include <linux/blk_types.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include "blk-ioprio.h"
 #include "blk-rq-qos.h"
 
 /**
@@ -216,15 +215,23 @@ static void blkcg_ioprio_exit(struct rq_qos *rqos)
 		container_of(rqos, typeof(*blkioprio_blkg), rqos);
 
 	blkcg_deactivate_policy(rqos->q, &ioprio_policy);
+	rq_qos_deactivate(rqos);
 	kfree(blkioprio_blkg);
 }
 
+static int blk_ioprio_init(struct request_queue *q);
 static struct rq_qos_ops blkcg_ioprio_ops = {
+#if IS_MODULE(CONFIG_BLK_CGROUP_IOPRIO)
+	.owner	= THIS_MODULE,
+#endif
+	.flags	= RQOS_FLAG_CGRP_POL,
+	.name	= "io-prio",
 	.track	= blkcg_ioprio_track,
 	.exit	= blkcg_ioprio_exit,
+	.init	= blk_ioprio_init,
 };
 
-int blk_ioprio_init(struct request_queue *q)
+static int blk_ioprio_init(struct request_queue *q)
 {
 	struct blk_ioprio *blkioprio_blkg;
 	struct rq_qos *rqos;
@@ -234,36 +241,45 @@ int blk_ioprio_init(struct request_queue *q)
 	if (!blkioprio_blkg)
 		return -ENOMEM;
 
+	/*
+	 * No need to worry ioprio_blkcg_from_css return NULL as
+	 * the queue is frozen right now.
+	 */
+	rqos = &blkioprio_blkg->rqos;
+	rq_qos_activate(q, rqos, &blkcg_ioprio_ops);
+
 	ret = blkcg_activate_policy(q, &ioprio_policy);
 	if (ret) {
+		rq_qos_deactivate(rqos);
 		kfree(blkioprio_blkg);
-		return ret;
 	}
 
-	rqos = &blkioprio_blkg->rqos;
-	rqos->id = RQ_QOS_IOPRIO;
-	rqos->ops = &blkcg_ioprio_ops;
-	rqos->q = q;
-
-	/*
-	 * Registering the rq-qos policy after activating the blk-cgroup
-	 * policy guarantees that ioprio_blkcg_from_bio(bio) != NULL in the
-	 * rq-qos callbacks.
-	 */
-	rq_qos_add(q, rqos);
-
-	return 0;
+	return ret;
 }
 
 static int __init ioprio_init(void)
 {
-	return blkcg_policy_register(&ioprio_policy);
+	int ret;
+
+	ret = rq_qos_register(&blkcg_ioprio_ops);
+	if (ret)
+		return ret;
+
+	ret = blkcg_policy_register(&ioprio_policy);
+	if (ret)
+		rq_qos_unregister(&blkcg_ioprio_ops);
+
+	return ret;
 }
 
 static void __exit ioprio_exit(void)
 {
 	blkcg_policy_unregister(&ioprio_policy);
+	rq_qos_unregister(&blkcg_ioprio_ops);
 }
 
 module_init(ioprio_init);
 module_exit(ioprio_exit);
+MODULE_AUTHOR("Bart Van Assche");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Cgroup I/O controller for assigning an I/O priority class");
diff --git a/block/blk-ioprio.h b/block/blk-ioprio.h
deleted file mode 100644
index a7785c2f1aea..000000000000
--- a/block/blk-ioprio.h
+++ /dev/null
@@ -1,19 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-
-#ifndef _BLK_IOPRIO_H_
-#define _BLK_IOPRIO_H_
-
-#include <linux/kconfig.h>
-
-struct request_queue;
-
-#ifdef CONFIG_BLK_CGROUP_IOPRIO
-int blk_ioprio_init(struct request_queue *q);
-#else
-static inline int blk_ioprio_init(struct request_queue *q)
-{
-	return 0;
-}
-#endif
-
-#endif /* _BLK_IOPRIO_H_ */
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 9bb1dabc223b..70a3e4599d99 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -822,10 +822,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
-	switch (id) {
-	case RQ_QOS_IOPRIO:
-		return "ioprio";
-	}
 	return "unknown";
 }
 
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index a94ff872722b..629d521e20a7 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -457,7 +457,7 @@ int rq_qos_register(struct rq_qos_ops *ops)
 		goto out;
 	}
 
-	start = RQ_QOS_IOPRIO + 1;
+	start = 1;
 	ret = ida_simple_get(&rq_qos_ida, start, INT_MAX, GFP_KERNEL);
 	if (ret < 0)
 		goto out;
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 4eef53f2c290..ee396367a5b2 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,7 @@
 struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_IOPRIO,
+	RQ_QOS_UNUSED,
 };
 
 struct rq_wait {
-- 
2.17.1



* [RFC V2 6/6] blk: remove unused interfaces of blk-rq-qos
  2022-02-15 12:36 [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Wang Jianchao (Kuaishou)
                   ` (4 preceding siblings ...)
  2022-02-15 12:37 ` [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular Wang Jianchao (Kuaishou)
@ 2022-02-15 12:37 ` Wang Jianchao (Kuaishou)
  2022-02-15 13:01 ` [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Chaitanya Kulkarni
  6 siblings, 0 replies; 15+ messages in thread
From: Wang Jianchao (Kuaishou) @ 2022-02-15 12:37 UTC (permalink / raw)
  To: Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block, linux-kernel

No functional changes here

Signed-off-by: Wang Jianchao (Kuaishou) <jianchao.wan9@gmail.com>
---
 block/blk-mq-debugfs.c | 10 +-------
 block/blk-rq-qos.h     | 52 +-----------------------------------------
 2 files changed, 2 insertions(+), 60 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 70a3e4599d99..23336c879e0b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -820,11 +820,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 	q->sched_debugfs_dir = NULL;
 }
 
-static const char *rq_qos_id_to_name(enum rq_qos_id id)
-{
-	return "unknown";
-}
-
 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 {
 	debugfs_remove_recursive(rqos->debugfs_dir);
@@ -834,9 +829,6 @@ void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 {
 	struct request_queue *q = rqos->q;
-	const char *dir_name;
-
-	dir_name = rqos->ops->name ? rqos->ops->name : rq_qos_id_to_name(rqos->id);
 
 	if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs)
 		return;
@@ -845,7 +837,7 @@ void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 		q->rqos_debugfs_dir = debugfs_create_dir("rqos",
 							 q->debugfs_dir);
 
-	rqos->debugfs_dir = debugfs_create_dir(dir_name,
+	rqos->debugfs_dir = debugfs_create_dir(rqos->ops->name,
 					       rqos->q->rqos_debugfs_dir);
 
 	debugfs_create_files(rqos->debugfs_dir, rqos, rqos->ops->debugfs_attrs);
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index ee396367a5b2..123b6b100355 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -13,10 +13,6 @@
 
 struct blk_mq_debugfs_attr;
 
-enum rq_qos_id {
-	RQ_QOS_UNUSED,
-};
-
 struct rq_wait {
 	wait_queue_head_t wait;
 	atomic_t inflight;
@@ -28,7 +24,7 @@ struct rq_qos {
 	bool dying;
 	const struct rq_qos_ops *ops;
 	struct request_queue *q;
-	enum rq_qos_id id;
+	int id;
 	struct rq_qos *next;
 #ifdef CONFIG_BLK_DEBUG_FS
 	struct dentry *debugfs_dir;
@@ -89,52 +85,6 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
 	init_waitqueue_head(&rq_wait->wait);
 }
 
-static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
-{
-	/*
-	 * No IO can be in-flight when adding rqos, so freeze queue, which
-	 * is fine since we only support rq_qos for blk-mq queue.
-	 *
-	 * Reuse ->queue_lock for protecting against other concurrent
-	 * rq_qos adding/deleting
-	 */
-	blk_mq_freeze_queue(q);
-
-	spin_lock_irq(&q->queue_lock);
-	rqos->next = q->rq_qos;
-	q->rq_qos = rqos;
-	spin_unlock_irq(&q->queue_lock);
-
-	blk_mq_unfreeze_queue(q);
-
-	if (rqos->ops->debugfs_attrs)
-		blk_mq_debugfs_register_rqos(rqos);
-}
-
-static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
-{
-	struct rq_qos **cur;
-
-	/*
-	 * See comment in rq_qos_add() about freezing queue & using
-	 * ->queue_lock.
-	 */
-	blk_mq_freeze_queue(q);
-
-	spin_lock_irq(&q->queue_lock);
-	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
-		if (*cur == rqos) {
-			*cur = rqos->next;
-			break;
-		}
-	}
-	spin_unlock_irq(&q->queue_lock);
-
-	blk_mq_unfreeze_queue(q);
-
-	blk_mq_debugfs_unregister_rqos(rqos);
-}
-
 int rq_qos_register(struct rq_qos_ops *ops);
 void rq_qos_unregister(struct rq_qos_ops *ops);
 void rq_qos_activate(struct request_queue *q,
-- 
2.17.1



* Re: [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular
  2022-02-15 12:36 [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Wang Jianchao (Kuaishou)
                   ` (5 preceding siblings ...)
  2022-02-15 12:37 ` [RFC V2 6/6] blk: remove unused interfaces of blk-rq-qos Wang Jianchao (Kuaishou)
@ 2022-02-15 13:01 ` Chaitanya Kulkarni
  2022-02-16  1:43   ` Wang Jianchao
  6 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2022-02-15 13:01 UTC (permalink / raw)
  To: Wang Jianchao (Kuaishou)
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block,
	linux-kernel, Jens Axboe

Wang Jianchao (Kuaishou),

On 2/15/22 04:36, Wang Jianchao (Kuaishou) wrote:
> Hi Jens
> 
> blk-rq-qos is a standalone framework out of io-sched and can be used to
> control or observe the IO progress in block-layer with hooks. blk-rq-qos
> is a great design but right now, it is totally fixed and built-in and shut
> out peoples who want to use it with external module.
> 
> This patchset attempts to make blk-rq-qos framework pluggable and modular.
> Then we can update the blk-rq-qos policy module w/o stopping the IO workload.
> And it is more convenient to introduce new policy on old machines w/o udgrade
> kernel. And we can close all of the blk-rq-qos policy if we needn't any of
> them. At the moment, the request_queue.rqos list is empty, we needn't to
> waste cpu cyles on them.
> 
>

Please write tests in blktests [1] to cover the code that we are
changing in this commit.

-ck

[1] https://github.com/osandov/blktests




* Re: [RFC V2 1/6] blk: make blk-rq-qos support pluggable and modular policy
  2022-02-15 12:37 ` [RFC V2 1/6] blk: make blk-rq-qos support pluggable and modular policy Wang Jianchao (Kuaishou)
@ 2022-02-15 21:23   ` Bart Van Assche
  2022-02-16  2:11     ` Wang Jianchao
  0 siblings, 1 reply; 15+ messages in thread
From: Bart Van Assche @ 2022-02-15 21:23 UTC (permalink / raw)
  To: Wang Jianchao (Kuaishou), Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, linux-block, linux-kernel

On 2/15/22 04:37, Wang Jianchao (Kuaishou) wrote:
> @@ -337,6 +338,7 @@ void blk_cleanup_queue(struct request_queue *q)
>   	 * it is safe to free requests now.
>   	 */
>   	mutex_lock(&q->sysfs_lock);
> +	rq_qos_exit(q);
>   	if (q->elevator)
>   		blk_mq_sched_free_rqs(q);
>   	mutex_unlock(&q->sysfs_lock);

I think this change should be a separate patch with tag "Fixes: 
8e141f9eb803 ("block: drain file system I/O on del_gendisk")". See also 
https://lore.kernel.org/linux-block/b64942a1-0f7e-9e9c-0fd4-c35647035eaf@acm.org/

Thanks,

Bart.


* Re: [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular
  2022-02-15 12:37 ` [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular Wang Jianchao (Kuaishou)
@ 2022-02-15 21:26   ` Bart Van Assche
  2022-02-16  2:09     ` Wang Jianchao
  0 siblings, 1 reply; 15+ messages in thread
From: Bart Van Assche @ 2022-02-15 21:26 UTC (permalink / raw)
  To: Wang Jianchao (Kuaishou), Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, linux-block, linux-kernel

On 2/15/22 04:37, Wang Jianchao (Kuaishou) wrote:
> diff --git a/block/Makefile b/block/Makefile
> index f38eaa612929..f6a3995af285 100644
> --- a/block/Makefile
> +++ b/block/Makefile
> @@ -17,7 +17,8 @@ obj-$(CONFIG_BLK_DEV_BSGLIB)	+= bsg-lib.o
>   obj-$(CONFIG_BLK_CGROUP)	+= blk-cgroup.o
>   obj-$(CONFIG_BLK_CGROUP_RWSTAT)	+= blk-cgroup-rwstat.o
>   obj-$(CONFIG_BLK_DEV_THROTTLING)	+= blk-throttle.o
> -obj-$(CONFIG_BLK_CGROUP_IOPRIO)	+= blk-ioprio.o
> +io-prio-y 			:= blk-ioprio.o
> +obj-$(CONFIG_BLK_CGROUP_IOPRIO)	+= io-prio.o
>   obj-$(CONFIG_BLK_CGROUP_IOLATENCY)	+= blk-iolatency.o
>   obj-$(CONFIG_BLK_CGROUP_IOCOST)	+= blk-iocost.o
>   obj-$(CONFIG_MQ_IOSCHED_DEADLINE)	+= mq-deadline.o

Is the above change really necessary?

> +static int blk_ioprio_init(struct request_queue *q);
>   static struct rq_qos_ops blkcg_ioprio_ops = {

Please insert a blank line between a function declaration and a 
structure definition.

Thanks,

Bart.


* Re: [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular
  2022-02-15 13:01 ` [RFC V2 0/6] blk: make blk-rq-qos policies pluggable and modular Chaitanya Kulkarni
@ 2022-02-16  1:43   ` Wang Jianchao
  0 siblings, 0 replies; 15+ messages in thread
From: Wang Jianchao @ 2022-02-16  1:43 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: hch, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block,
	linux-kernel, Jens Axboe



On 2022/2/15 9:01 PM, Chaitanya Kulkarni wrote:
> Wang Jianchao (Kuaishou),
> 
> On 2/15/22 04:36, Wang Jianchao (Kuaishou) wrote:
>> Hi Jens
>>
>> blk-rq-qos is a standalone framework out of io-sched and can be used to
>> control or observe the IO progress in block-layer with hooks. blk-rq-qos
>> is a great design but right now, it is totally fixed and built-in and shut
>> out peoples who want to use it with external module.
>>
>> This patchset attempts to make blk-rq-qos framework pluggable and modular.
>> Then we can update the blk-rq-qos policy module w/o stopping the IO workload.
>> And it is more convenient to introduce new policy on old machines w/o udgrade
>> kernel. And we can close all of the blk-rq-qos policy if we needn't any of
>> them. At the moment, the request_queue.rqos list is empty, we needn't to
>> waste cpu cyles on them.
>>
>>
> 
> Please write tests in blktests [1] to cover the code that we are
> changing in this commit.
> 
> -ck
> 
> [1] https://github.com/osandov/blktests
> 

Yes, I will send them out with the next version.

Thanks
Jianchao


* Re: [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular
  2022-02-15 21:26   ` Bart Van Assche
@ 2022-02-16  2:09     ` Wang Jianchao
  2022-02-16 18:01       ` Bart Van Assche
  0 siblings, 1 reply; 15+ messages in thread
From: Wang Jianchao @ 2022-02-16  2:09 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, linux-block, linux-kernel



On 2022/2/16 5:26 AM, Bart Van Assche wrote:
> On 2/15/22 04:37, Wang Jianchao (Kuaishou) wrote:
>> diff --git a/block/Makefile b/block/Makefile
>> index f38eaa612929..f6a3995af285 100644
>> --- a/block/Makefile
>> +++ b/block/Makefile
>> @@ -17,7 +17,8 @@ obj-$(CONFIG_BLK_DEV_BSGLIB)    += bsg-lib.o
>>   obj-$(CONFIG_BLK_CGROUP)    += blk-cgroup.o
>>   obj-$(CONFIG_BLK_CGROUP_RWSTAT)    += blk-cgroup-rwstat.o
>>   obj-$(CONFIG_BLK_DEV_THROTTLING)    += blk-throttle.o
>> -obj-$(CONFIG_BLK_CGROUP_IOPRIO)    += blk-ioprio.o
>> +io-prio-y             := blk-ioprio.o
>> +obj-$(CONFIG_BLK_CGROUP_IOPRIO)    += io-prio.o
>>   obj-$(CONFIG_BLK_CGROUP_IOLATENCY)    += blk-iolatency.o
>>   obj-$(CONFIG_BLK_CGROUP_IOCOST)    += blk-iocost.o
>>   obj-$(CONFIG_MQ_IOSCHED_DEADLINE)    += mq-deadline.o
> 
> Is the above change really necessary?

Besides making maintenance easier on a running system, removing an
rq-qos policy module with cgroup support also releases a blk-cgroup
policy slot. The maximum number of slots, BLKCG_MAX_POLS, is fixed
at the moment.
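
To make the slot point concrete, a rough sketch of the module
enter/exit path is below; the ioprio_policy structure and the
function names are illustrative, while blkcg_policy_register(),
blkcg_policy_unregister() and BLKCG_MAX_POLS are the existing
blk-cgroup interfaces that bound the number of registered policies:

/*
 * Illustrative module init/exit for a cgroup-aware rq-qos policy.
 * blkcg_policy_register() claims one of the BLKCG_MAX_POLS fixed
 * slots; unregistering at module unload gives the slot back.
 */
static struct blkcg_policy ioprio_policy = {
	/* cgroup file and per-queue callbacks omitted in this sketch */
};

static int __init ioprio_mod_init(void)
{
	return blkcg_policy_register(&ioprio_policy);
}

static void __exit ioprio_mod_exit(void)
{
	blkcg_policy_unregister(&ioprio_policy);
}

module_init(ioprio_mod_init);
module_exit(ioprio_mod_exit);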

> 
>> +static int blk_ioprio_init(struct request_queue *q);
>>   static struct rq_qos_ops blkcg_ioprio_ops = {
> 
> Please insert a blank line between a function declaration and a structure definition.

Yes, I will do it in the next version.

Thanks
Jianchao


* Re: [RFC V2 1/6] blk: make blk-rq-qos support pluggable and modular policy
  2022-02-15 21:23   ` Bart Van Assche
@ 2022-02-16  2:11     ` Wang Jianchao
  0 siblings, 0 replies; 15+ messages in thread
From: Wang Jianchao @ 2022-02-16  2:11 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, linux-block, linux-kernel



On 2022/2/16 5:23 AM, Bart Van Assche wrote:
> On 2/15/22 04:37, Wang Jianchao (Kuaishou) wrote:
>> @@ -337,6 +338,7 @@ void blk_cleanup_queue(struct request_queue *q)
>>        * it is safe to free requests now.
>>        */
>>       mutex_lock(&q->sysfs_lock);
>> +    rq_qos_exit(q);
>>       if (q->elevator)
>>           blk_mq_sched_free_rqs(q);
>>       mutex_unlock(&q->sysfs_lock);
> 
> I think this change should be a separate patch with tag "Fixes: 8e141f9eb803 ("block: drain file system I/O on del_gendisk")". See also https://lore.kernel.org/linux-block/b64942a1-0f7e-9e9c-0fd4-c35647035eaf@acm.org/
> 
Yes, I will do it in the next version.

Thanks
Jianchao


* Re: [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular
  2022-02-16  2:09     ` Wang Jianchao
@ 2022-02-16 18:01       ` Bart Van Assche
  2022-02-17  2:56         ` Wang Jianchao
  0 siblings, 1 reply; 15+ messages in thread
From: Bart Van Assche @ 2022-02-16 18:01 UTC (permalink / raw)
  To: Wang Jianchao, Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, linux-block, linux-kernel

On 2/15/22 18:09, Wang Jianchao wrote:
> On 2022/2/16 5:26 AM, Bart Van Assche wrote:
>> On 2/15/22 04:37, Wang Jianchao (Kuaishou) wrote:
>>> diff --git a/block/Makefile b/block/Makefile
>>> index f38eaa612929..f6a3995af285 100644
>>> --- a/block/Makefile
>>> +++ b/block/Makefile
>>> @@ -17,7 +17,8 @@ obj-$(CONFIG_BLK_DEV_BSGLIB)    += bsg-lib.o
>>>    obj-$(CONFIG_BLK_CGROUP)    += blk-cgroup.o
>>>    obj-$(CONFIG_BLK_CGROUP_RWSTAT)    += blk-cgroup-rwstat.o
>>>    obj-$(CONFIG_BLK_DEV_THROTTLING)    += blk-throttle.o
>>> -obj-$(CONFIG_BLK_CGROUP_IOPRIO)    += blk-ioprio.o
>>> +io-prio-y             := blk-ioprio.o
>>> +obj-$(CONFIG_BLK_CGROUP_IOPRIO)    += io-prio.o
>>>    obj-$(CONFIG_BLK_CGROUP_IOLATENCY)    += blk-iolatency.o
>>>    obj-$(CONFIG_BLK_CGROUP_IOCOST)    += blk-iocost.o
>>>    obj-$(CONFIG_MQ_IOSCHED_DEADLINE)    += mq-deadline.o
>>
>> Is the above change really necessary?
> 
> Besides making maintenance easier on a running system, removing an
> rq-qos policy module with cgroup support also releases a blk-cgroup
> policy slot. The maximum number of slots, BLKCG_MAX_POLS, is fixed
> at the moment.

It seems my question was not clear. What I meant is that the above
changes are not necessary to build blk-ioprio as a kernel module.

Thanks,

Bart.


* Re: [RFC V2 5/6] blk-ioprio: make ioprio pluggable and modular
  2022-02-16 18:01       ` Bart Van Assche
@ 2022-02-17  2:56         ` Wang Jianchao
  0 siblings, 0 replies; 15+ messages in thread
From: Wang Jianchao @ 2022-02-17  2:56 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: hch, Josef Bacik, Tejun Heo, linux-block, linux-kernel



On 2022/2/17 2:01 AM, Bart Van Assche wrote:
> On 2/15/22 18:09, Wang Jianchao wrote:
>> On 2022/2/16 5:26 AM, Bart Van Assche wrote:
>>> On 2/15/22 04:37, Wang Jianchao (Kuaishou) wrote:
>>>> diff --git a/block/Makefile b/block/Makefile
>>>> index f38eaa612929..f6a3995af285 100644
>>>> --- a/block/Makefile
>>>> +++ b/block/Makefile
>>>> @@ -17,7 +17,8 @@ obj-$(CONFIG_BLK_DEV_BSGLIB)    += bsg-lib.o
>>>>    obj-$(CONFIG_BLK_CGROUP)    += blk-cgroup.o
>>>>    obj-$(CONFIG_BLK_CGROUP_RWSTAT)    += blk-cgroup-rwstat.o
>>>>    obj-$(CONFIG_BLK_DEV_THROTTLING)    += blk-throttle.o
>>>> -obj-$(CONFIG_BLK_CGROUP_IOPRIO)    += blk-ioprio.o
>>>> +io-prio-y             := blk-ioprio.o
>>>> +obj-$(CONFIG_BLK_CGROUP_IOPRIO)    += io-prio.o
>>>>    obj-$(CONFIG_BLK_CGROUP_IOLATENCY)    += blk-iolatency.o
>>>>    obj-$(CONFIG_BLK_CGROUP_IOCOST)    += blk-iocost.o
>>>>    obj-$(CONFIG_MQ_IOSCHED_DEADLINE)    += mq-deadline.o
>>>
>>> Is the above change really necessary?
>>
>> Besides making maintenance easier on a running system, removing an
>> rq-qos policy module with cgroup support also releases a blk-cgroup
>> policy slot. The maximum number of slots, BLKCG_MAX_POLS, is fixed
>> at the moment.
> 
> It seems my question was not clear. What I meant is that the above changes are not necessary to build blk-ioprio as a kernel module.
> 
Thanks so much for your kind reminder. I have changed it.

Regards
Jianchao
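
For reference, the simpler Makefile form Bart appears to have in mind
keeps the original object name and lets the Kconfig symbol pick
built-in versus module; this sketch assumes CONFIG_BLK_CGROUP_IOPRIO
is turned into a tristate symbol elsewhere in the series:

# No rename is needed to build a module: with a tristate symbol,
# CONFIG_BLK_CGROUP_IOPRIO=m already turns blk-ioprio.o into blk-ioprio.ko.
obj-$(CONFIG_BLK_CGROUP_IOPRIO)	+= blk-ioprio.o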

