linux-kernel.vger.kernel.org archive mirror
* [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion
@ 2022-03-05  9:11 Yu Kuai
  2022-03-05  9:11 ` [PATCH -next 01/11] block, bfq: add new apis to iterate bfq entities Yu Kuai
                   ` (12 more replies)
  0 siblings, 13 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:11 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Currently, bfq can't handle sync io concurrently if it is not issued
from the root group. This is because
'bfqd->num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario().

This patchset tries to support concurrent sync io if all the sync ios
are issued from the same cgroup:

1) Count root_group into 'num_groups_with_pending_reqs', patch 1-5;

2) Don't idle if 'num_groups_with_pending_reqs' is 1, patch 6;

3) Don't count a group that doesn't have pending requests itself,
even if its child groups have pending requests, patch 7;

This is because, for example:
if sync ios are issued from cgroup /root/c1/c2, root, c1 and c2
will all be counted into 'num_groups_with_pending_reqs',
which makes it impossible to handle sync ios concurrently.

4) Decrease 'num_groups_with_pending_reqs' when the last queue completes
all the requests, while child groups may still have pending
requests, patch 8-10;

This is because, for example:
t1 issues sync io on the root group, while t2 and t3 issue sync io on
the same child group. num_groups_with_pending_reqs is 2 now.
After t1 stops, num_groups_with_pending_reqs is still 2, so sync io
from t2 and t3 still can't be handled concurrently.

fio test script: startdelay is used to avoid queue merging
[global]
filename=/dev/nvme0n1
allow_mounted_write=0
ioengine=psync
direct=1
ioscheduler=bfq
offset_increment=10g
group_reporting
rw=randwrite
bs=4k

[test1]
numjobs=1

[test2]
startdelay=1
numjobs=1

[test3]
startdelay=2
numjobs=1

[test4]
startdelay=3
numjobs=1

[test5]
startdelay=4
numjobs=1

[test6]
startdelay=5
numjobs=1

[test7]
startdelay=6
numjobs=1

[test8]
startdelay=7
numjobs=1

test result:
running fio on root cgroup
v5.17-rc6:	   550 MiB/s
v5.17-rc6-patched: 550 MiB/s

running fio on non-root cgroup
v5.17-rc6:	   349 MiB/s
v5.17-rc6-patched: 550 MiB/s

Yu Kuai (11):
  block, bfq: add new apis to iterate bfq entities
  block, bfq: apply new apis where root group is not expected
  block, bfq: cleanup for __bfq_activate_requeue_entity()
  block, bfq: move the increment of 'num_groups_with_pending_reqs' to
    its caller
  block, bfq: count root group into 'num_groups_with_pending_reqs'
  block, bfq: do not idle if only one cgroup is activated
  block, bfq: only count parent bfqg when bfqq is activated
  block, bfq: record how many queues have pending requests in bfq_group
  block, bfq: move forward __bfq_weights_tree_remove()
  block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  block, bfq: cleanup bfqq_group()

 block/bfq-cgroup.c  | 13 +++----
 block/bfq-iosched.c | 87 +++++++++++++++++++++++----------------------
 block/bfq-iosched.h | 41 +++++++++++++--------
 block/bfq-wf2q.c    | 56 +++++++++++++++--------------
 4 files changed, 106 insertions(+), 91 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH -next 01/11] block, bfq: add new apis to iterate bfq entities
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
@ 2022-03-05  9:11 ` Yu Kuai
  2022-03-05  9:11 ` [PATCH -next 02/11] block, bfq: apply new apis where root group is not expected Yu Kuai
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:11 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

The old and the new apis are the same for now; this prepares for
counting the root group into 'num_groups_with_pending_reqs'. The old
apis will be used to iterate including the root group's entity, and the
new apis will be used to iterate excluding the root group's entity.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-iosched.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 3b83e3d1c2e5..d703492714e2 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1037,9 +1037,20 @@ extern struct blkcg_policy blkcg_policy_bfq;
 #define for_each_entity_safe(entity, parent) \
 	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
 
+#define is_root_entity(entity) \
+	(entity->sched_data == NULL)
+
+#define for_each_entity_not_root(entity) \
+	for (; entity && !is_root_entity(entity); entity = entity->parent)
+
+#define for_each_entity_not_root_safe(entity, parent) \
+	for (; entity && !is_root_entity(entity) && \
+	       ({ parent = entity->parent; 1; }); entity = parent)
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
+#define is_root_entity(entity) (false)
+
 /*
- * Next two macros are fake loops when cgroups support is not
+ * Next four macros are fake loops when cgroups support is not
  * enabled. I fact, in such a case, there is only one level to go up
  * (to reach the root group).
  */
@@ -1048,6 +1059,12 @@ extern struct blkcg_policy blkcg_policy_bfq;
 
 #define for_each_entity_safe(entity, parent) \
 	for (parent = NULL; entity ; entity = parent)
+
+#define for_each_entity_not_root(entity) \
+	for (; entity ; entity = NULL)
+
+#define for_each_entity_not_root_safe(entity, parent) \
+	for (parent = NULL; entity ; entity = parent)
 #endif /* CONFIG_BFQ_GROUP_IOSCHED */
 
 struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 02/11] block, bfq: apply new apis where root group is not expected
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
  2022-03-05  9:11 ` [PATCH -next 01/11] block, bfq: add new apis to iterate bfq entities Yu Kuai
@ 2022-03-05  9:11 ` Yu Kuai
  2022-04-13  9:50   ` Jan Kara
  2022-03-05  9:11 ` [PATCH -next 03/11] block, bfq: cleanup for __bfq_activate_requeue_entity() Yu Kuai
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:11 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

'entity->sched_data' is set to the parent group's sched_data, and is
thus NULL for the root group. for_each_entity() is widely used to
access 'entity->sched_data', so apply the new apis where the root group
is not expected. This prepares for counting the root group into
'num_groups_with_pending_reqs'.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-iosched.c |  2 +-
 block/bfq-iosched.h | 22 ++++++++--------------
 block/bfq-wf2q.c    | 10 +++++-----
 3 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 69ddf6b0f01d..3bc7a7686aad 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4393,7 +4393,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
 	 * service with the same budget.
 	 */
 	entity = entity->parent;
-	for_each_entity(entity)
+	for_each_entity_not_root(entity)
 		entity->service = 0;
 }
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index d703492714e2..ddd8eff5c272 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1024,25 +1024,22 @@ extern struct blkcg_policy blkcg_policy_bfq;
 /* - interface of the internal hierarchical B-WF2Q+ scheduler - */
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-/* both next loops stop at one of the child entities of the root group */
+/* stop at one of the child entities of the root group */
 #define for_each_entity(entity)	\
 	for (; entity ; entity = entity->parent)
 
-/*
- * For each iteration, compute parent in advance, so as to be safe if
- * entity is deallocated during the iteration. Such a deallocation may
- * happen as a consequence of a bfq_put_queue that frees the bfq_queue
- * containing entity.
- */
-#define for_each_entity_safe(entity, parent) \
-	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
-
 #define is_root_entity(entity) \
 	(entity->sched_data == NULL)
 
 #define for_each_entity_not_root(entity) \
 	for (; entity && !is_root_entity(entity); entity = entity->parent)
 
+/*
+ * For each iteration, compute parent in advance, so as to be safe if
+ * entity is deallocated during the iteration. Such a deallocation may
+ * happen as a consequence of a bfq_put_queue that frees the bfq_queue
+ * containing entity.
+ */
 #define for_each_entity_not_root_safe(entity, parent) \
 	for (; entity && !is_root_entity(entity) && \
 	       ({ parent = entity->parent; 1; }); entity = parent)
@@ -1050,16 +1047,13 @@ extern struct blkcg_policy blkcg_policy_bfq;
 #define is_root_entity(entity) (false)
 
 /*
- * Next four macros are fake loops when cgroups support is not
+ * Next three macros are fake loops when cgroups support is not
  * enabled. I fact, in such a case, there is only one level to go up
  * (to reach the root group).
  */
 #define for_each_entity(entity)	\
 	for (; entity ; entity = NULL)
 
-#define for_each_entity_safe(entity, parent) \
-	for (parent = NULL; entity ; entity = parent)
-
 #define for_each_entity_not_root(entity) \
 	for (; entity ; entity = NULL)
 
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index f8eb340381cf..c4cb935a615a 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -815,7 +815,7 @@ void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
 		bfqq->service_from_wr += served;
 
 	bfqq->service_from_backlogged += served;
-	for_each_entity(entity) {
+	for_each_entity_not_root(entity) {
 		st = bfq_entity_service_tree(entity);
 
 		entity->service += served;
@@ -1201,7 +1201,7 @@ static void bfq_deactivate_entity(struct bfq_entity *entity,
 	struct bfq_sched_data *sd;
 	struct bfq_entity *parent = NULL;
 
-	for_each_entity_safe(entity, parent) {
+	for_each_entity_not_root_safe(entity, parent) {
 		sd = entity->sched_data;
 
 		if (!__bfq_deactivate_entity(entity, ins_into_idle_tree)) {
@@ -1270,7 +1270,7 @@ static void bfq_deactivate_entity(struct bfq_entity *entity,
 	 * is not the case.
 	 */
 	entity = parent;
-	for_each_entity(entity) {
+	for_each_entity_not_root(entity) {
 		/*
 		 * Invoke __bfq_requeue_entity on entity, even if
 		 * already active, to requeue/reposition it in the
@@ -1570,7 +1570,7 @@ struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
 	 * We can finally update all next-to-serve entities along the
 	 * path from the leaf entity just set in service to the root.
 	 */
-	for_each_entity(entity) {
+	for_each_entity_not_root(entity) {
 		struct bfq_sched_data *sd = entity->sched_data;
 
 		if (!bfq_update_next_in_service(sd, NULL, false))
@@ -1597,7 +1597,7 @@ bool __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
 	 * execute the final step: reset in_service_entity along the
 	 * path from entity to the root.
 	 */
-	for_each_entity(entity)
+	for_each_entity_not_root(entity)
 		entity->sched_data->in_service_entity = NULL;
 
 	/*
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 03/11] block, bfq: cleanup for __bfq_activate_requeue_entity()
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
  2022-03-05  9:11 ` [PATCH -next 01/11] block, bfq: add new apis to iterate bfq entities Yu Kuai
  2022-03-05  9:11 ` [PATCH -next 02/11] block, bfq: apply new apis where root group is not expected Yu Kuai
@ 2022-03-05  9:11 ` Yu Kuai
  2022-03-05  9:11 ` [PATCH -next 04/11] block, bfq: move the increment of 'num_groups_with_pending_reqs' to its caller Yu Kuai
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:11 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Remove the parameter 'sd', which can be accessed through 'entity'.
This just makes the code a little cleaner.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-wf2q.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index c4cb935a615a..e30da27f356d 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1082,12 +1082,12 @@ static void __bfq_requeue_entity(struct bfq_entity *entity)
 }
 
 static void __bfq_activate_requeue_entity(struct bfq_entity *entity,
-					  struct bfq_sched_data *sd,
 					  bool non_blocking_wait_rq)
 {
 	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
 
-	if (sd->in_service_entity == entity || entity->tree == &st->active)
+	if (entity->sched_data->in_service_entity == entity ||
+	    entity->tree == &st->active)
 		 /*
 		  * in service or already queued on the active tree,
 		  * requeue or reposition
@@ -1119,14 +1119,11 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
 					bool non_blocking_wait_rq,
 					bool requeue, bool expiration)
 {
-	struct bfq_sched_data *sd;
-
 	for_each_entity(entity) {
-		sd = entity->sched_data;
-		__bfq_activate_requeue_entity(entity, sd, non_blocking_wait_rq);
+		__bfq_activate_requeue_entity(entity, non_blocking_wait_rq);
 
-		if (!bfq_update_next_in_service(sd, entity, expiration) &&
-		    !requeue)
+		if (!bfq_update_next_in_service(entity->sched_data, entity,
+					expiration) && !requeue)
 			break;
 	}
 }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 04/11] block, bfq: move the increment of 'num_groups_with_pending_reqs' to its caller
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (2 preceding siblings ...)
  2022-03-05  9:11 ` [PATCH -next 03/11] block, bfq: cleanup for __bfq_activate_requeue_entity() Yu Kuai
@ 2022-03-05  9:11 ` Yu Kuai
  2022-03-05  9:11 ` [PATCH -next 05/11] block, bfq: count root group into 'num_groups_with_pending_reqs' Yu Kuai
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:11 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

The root group is not in a service tree, thus __bfq_activate_entity()
is not needed for it. This will simplify counting the root group into
'num_groups_with_pending_reqs'.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-wf2q.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index e30da27f356d..17f1d2c5b8dc 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -218,6 +218,19 @@ static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
 	return false;
 }
 
+static void bfq_update_groups_with_pending_reqs(struct bfq_entity *entity)
+{
+	if (!bfq_entity_to_bfqq(entity) && /* bfq_group */
+	    !entity->in_groups_with_pending_reqs) {
+		struct bfq_group *bfqg =
+			container_of(entity, struct bfq_group, entity);
+		struct bfq_data *bfqd = bfqg->bfqd;
+
+		entity->in_groups_with_pending_reqs = true;
+		bfqd->num_groups_with_pending_reqs++;
+	}
+}
+
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
 
 static bool bfq_update_parent_budget(struct bfq_entity *next_in_service)
@@ -230,6 +243,10 @@ static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
 	return true;
 }
 
+static void bfq_update_groups_with_pending_reqs(struct bfq_entity *entity)
+{
+}
+
 #endif /* CONFIG_BFQ_GROUP_IOSCHED */
 
 /*
@@ -984,19 +1001,6 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 		entity->on_st_or_in_serv = true;
 	}
 
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
-	if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
-		struct bfq_group *bfqg =
-			container_of(entity, struct bfq_group, entity);
-		struct bfq_data *bfqd = bfqg->bfqd;
-
-		if (!entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = true;
-			bfqd->num_groups_with_pending_reqs++;
-		}
-	}
-#endif
-
 	bfq_update_fin_time_enqueue(entity, st, backshifted);
 }
 
@@ -1120,6 +1124,7 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
 					bool requeue, bool expiration)
 {
 	for_each_entity(entity) {
+		bfq_update_groups_with_pending_reqs(entity);
 		__bfq_activate_requeue_entity(entity, non_blocking_wait_rq);
 
 		if (!bfq_update_next_in_service(entity->sched_data, entity,
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 05/11] block, bfq: count root group into 'num_groups_with_pending_reqs'
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (3 preceding siblings ...)
  2022-03-05  9:11 ` [PATCH -next 04/11] block, bfq: move the increment of 'num_groups_with_pending_reqs' to its caller Yu Kuai
@ 2022-03-05  9:11 ` Yu Kuai
  2022-04-13 11:05   ` Jan Kara
  2022-03-05  9:12 ` [PATCH -next 06/11] block, bfq: do not idle if only one cgroup is activated Yu Kuai
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:11 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

The root group is not counted into 'num_groups_with_pending_reqs'
because 'entity->parent' is set to NULL for child entities, thus
for_each_entity() can't reach the root group.

This patch sets 'entity->parent' to the root_group's entity for child
entities, so that the root group will be counted since
for_each_entity() can now reach it in bfq_activate_requeue_entity().

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-cgroup.c  | 6 +++---
 block/bfq-iosched.h | 3 ++-
 block/bfq-wf2q.c    | 5 +++++
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 420eda2589c0..6cd65b5e790d 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -436,7 +436,7 @@ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg)
 		 */
 		bfqg_and_blkg_get(bfqg);
 	}
-	entity->parent = bfqg->my_entity; /* NULL for root group */
+	entity->parent = &bfqg->entity;
 	entity->sched_data = &bfqg->sched_data;
 }
 
@@ -581,7 +581,7 @@ static void bfq_group_set_parent(struct bfq_group *bfqg,
 	struct bfq_entity *entity;
 
 	entity = &bfqg->entity;
-	entity->parent = parent->my_entity;
+	entity->parent = &parent->entity;
 	entity->sched_data = &parent->sched_data;
 }
 
@@ -688,7 +688,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	else if (bfqd->last_bfqq_created == bfqq)
 		bfqd->last_bfqq_created = NULL;
 
-	entity->parent = bfqg->my_entity;
+	entity->parent = &bfqg->entity;
 	entity->sched_data = &bfqg->sched_data;
 	/* pin down bfqg and its associated blkg  */
 	bfqg_and_blkg_get(bfqg);
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index ddd8eff5c272..4530ab8b42ac 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1024,13 +1024,14 @@ extern struct blkcg_policy blkcg_policy_bfq;
 /* - interface of the internal hierarchical B-WF2Q+ scheduler - */
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-/* stop at one of the child entities of the root group */
+/* stop at root group */
 #define for_each_entity(entity)	\
 	for (; entity ; entity = entity->parent)
 
 #define is_root_entity(entity) \
 	(entity->sched_data == NULL)
 
+/* stop at one of the child entities of the root group */
 #define for_each_entity_not_root(entity) \
 	for (; entity && !is_root_entity(entity); entity = entity->parent)
 
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 17f1d2c5b8dc..138a2950b841 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1125,6 +1125,11 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
 {
 	for_each_entity(entity) {
 		bfq_update_groups_with_pending_reqs(entity);
+
+		/* root group is not in service tree */
+		if (is_root_entity(entity))
+			break;
+
 		__bfq_activate_requeue_entity(entity, non_blocking_wait_rq);
 
 		if (!bfq_update_next_in_service(entity->sched_data, entity,
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 06/11] block, bfq: do not idle if only one cgroup is activated
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (4 preceding siblings ...)
  2022-03-05  9:11 ` [PATCH -next 05/11] block, bfq: count root group into 'num_groups_with_pending_reqs' Yu Kuai
@ 2022-03-05  9:12 ` Yu Kuai
  2022-03-05  9:12 ` [PATCH -next 07/11] block, bfq: only count parent bfqg when bfqq " Yu Kuai
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:12 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Now that the root group is counted into 'num_groups_with_pending_reqs',
'num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario().

Thus change the condition to 'num_groups_with_pending_reqs > 1', which
is equivalent to the old condition without the root group counted.

On the other hand, with the following patches that count only groups
(not their ancestors) with pending requests, sync io can be handled
concurrently if only one group has pending requests.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-iosched.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 3bc7a7686aad..07027dc9dc4c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -812,7 +812,7 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
  * much easier to maintain the needed state:
  * 1) all active queues have the same weight,
  * 2) all active queues belong to the same I/O-priority class,
- * 3) there are no active groups.
+ * 3) there is at most one active group.
  * In particular, the last condition is always true if hierarchical
  * support or the cgroups interface are not enabled, thus no state
  * needs to be maintained in this case.
@@ -844,7 +844,7 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 
 	return varied_queue_weights || multiple_classes_busy
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-	       || bfqd->num_groups_with_pending_reqs > 0
+	       || bfqd->num_groups_with_pending_reqs > 1
 #endif
 		;
 }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 07/11] block, bfq: only count parent bfqg when bfqq is activated
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (5 preceding siblings ...)
  2022-03-05  9:12 ` [PATCH -next 06/11] block, bfq: do not idle if only one cgroup is activated Yu Kuai
@ 2022-03-05  9:12 ` Yu Kuai
  2022-03-05  9:12 ` [PATCH -next 08/11] block, bfq: record how many queues have pending requests in bfq_group Yu Kuai
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:12 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Currently, a bfqg is counted into 'num_groups_with_pending_reqs' once
one of its child cgroups is activated, even if the group doesn't have
any pending requests itself.

For example, if we issue sync io in cgroup /root/c1/c2, root, c1 and c2
will all be counted into 'num_groups_with_pending_reqs', which makes it
impossible to handle requests concurrently.

With this patch, a group that doesn't have any pending requests itself
is no longer counted, even if its child groups have pending requests.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-wf2q.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 138a2950b841..db066ae35a71 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1123,13 +1123,7 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
 					bool non_blocking_wait_rq,
 					bool requeue, bool expiration)
 {
-	for_each_entity(entity) {
-		bfq_update_groups_with_pending_reqs(entity);
-
-		/* root group is not in service tree */
-		if (is_root_entity(entity))
-			break;
-
+	for_each_entity_not_root(entity) {
 		__bfq_activate_requeue_entity(entity, non_blocking_wait_rq);
 
 		if (!bfq_update_next_in_service(entity->sched_data, entity,
@@ -1640,6 +1634,7 @@ void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 {
 	struct bfq_entity *entity = &bfqq->entity;
 
+	bfq_update_groups_with_pending_reqs(bfqq->entity.parent);
 	bfq_activate_requeue_entity(entity, bfq_bfqq_non_blocking_wait_rq(bfqq),
 				    false, false);
 	bfq_clear_bfqq_non_blocking_wait_rq(bfqq);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 08/11] block, bfq: record how many queues have pending requests in bfq_group
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (6 preceding siblings ...)
  2022-03-05  9:12 ` [PATCH -next 07/11] block, bfq: only count parent bfqg when bfqq " Yu Kuai
@ 2022-03-05  9:12 ` Yu Kuai
  2022-03-05  9:12 ` [PATCH -next 09/11] block, bfq: move forward __bfq_weights_tree_remove() Yu Kuai
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:12 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Prepare to decrease 'num_groups_with_pending_reqs' earlier.

A bfqq is inserted into the weights_tree when new io is inserted into
it, and removed from the weights_tree when all its requests are
completed. Thus use weights_tree insertion and removal to track how
many queues have pending requests.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-cgroup.c  |  1 +
 block/bfq-iosched.c | 15 +++++++++++++++
 block/bfq-iosched.h |  1 +
 3 files changed, 17 insertions(+)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 6cd65b5e790d..58acaf14a91d 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
 				   */
 	bfqg->bfqd = bfqd;
 	bfqg->active_entities = 0;
+	bfqg->num_entities_with_pending_reqs = 0;
 	bfqg->rq_pos_tree = RB_ROOT;
 }
 
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 07027dc9dc4c..2a48c40b4f02 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -928,6 +928,13 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 inc_counter:
 	bfqq->weight_counter->num_active++;
 	bfqq->ref++;
+
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	if (!entity->in_groups_with_pending_reqs) {
+		entity->in_groups_with_pending_reqs = true;
+		bfqq_group(bfqq)->num_entities_with_pending_reqs++;
+	}
+#endif
 }
 
 /*
@@ -944,6 +951,14 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 		return;
 
 	bfqq->weight_counter->num_active--;
+
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	if (bfqq->entity.in_groups_with_pending_reqs) {
+		bfqq->entity.in_groups_with_pending_reqs = false;
+		bfqq_group(bfqq)->num_entities_with_pending_reqs--;
+	}
+#endif
+
 	if (bfqq->weight_counter->num_active > 0)
 		goto reset_entity_pointer;
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 4530ab8b42ac..5d904851519c 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -940,6 +940,7 @@ struct bfq_group {
 	struct bfq_entity *my_entity;
 
 	int active_entities;
+	int num_entities_with_pending_reqs;
 
 	struct rb_root rq_pos_tree;
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 09/11] block, bfq: move forward __bfq_weights_tree_remove()
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (7 preceding siblings ...)
  2022-03-05  9:12 ` [PATCH -next 08/11] block, bfq: record how many queues have pending requests in bfq_group Yu Kuai
@ 2022-03-05  9:12 ` Yu Kuai
  2022-03-05  9:12 ` [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier Yu Kuai
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:12 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Prepare to decrease 'num_groups_with_pending_reqs' earlier.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-iosched.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 2a48c40b4f02..f221e9cab4d0 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -979,6 +979,19 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 {
 	struct bfq_entity *entity = bfqq->entity.parent;
 
+	/*
+	 * Grab a ref to prevent bfqq from being freed in
+	 * __bfq_weights_tree_remove.
+	 */
+	bfqq->ref++;
+
+	/*
+	 * Remove bfqq from the weights tree first, so that the number of
+	 * queues with pending requests in the parent bfqg is updated.
+	 */
+	__bfq_weights_tree_remove(bfqd, bfqq,
+				  &bfqd->queue_weights_tree);
+
 	for_each_entity(entity) {
 		struct bfq_sched_data *sd = entity->my_sched_data;
 
@@ -1013,14 +1026,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 		}
 	}
 
-	/*
-	 * Next function is invoked last, because it causes bfqq to be
-	 * freed if the following holds: bfqq is not in service and
-	 * has no dispatched request. DO NOT use bfqq after the next
-	 * function invocation.
-	 */
-	__bfq_weights_tree_remove(bfqd, bfqq,
-				  &bfqd->queue_weights_tree);
+	bfq_put_queue(bfqq);
 }
 
 /*
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (8 preceding siblings ...)
  2022-03-05  9:12 ` [PATCH -next 09/11] block, bfq: move forward __bfq_weights_tree_remove() Yu Kuai
@ 2022-03-05  9:12 ` Yu Kuai
  2022-04-13 11:28   ` Jan Kara
  2022-03-05  9:12 ` [PATCH -next 11/11] block, bfq: cleanup bfqq_group() Yu Kuai
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:12 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Currently 'num_groups_with_pending_reqs' is not decreased when a group
no longer has any pending requests, as long as some child group still
has pending requests. The decrement is delayed until none of the child
groups has any pending requests either.

For example:
1) t1 issues sync io on the root group, while t2 and t3 issue sync io
on the same child group. num_groups_with_pending_reqs is 2 now.
2) After t1 stops, num_groups_with_pending_reqs is still 2, so io from
t2 and t3 still can't be handled concurrently.

Fix the problem by decreasing 'num_groups_with_pending_reqs'
immediately upon the weights_tree removal of the last bfqq of the
group.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-iosched.c | 56 +++++++++++++++------------------------------
 block/bfq-iosched.h | 16 ++++++-------
 2 files changed, 27 insertions(+), 45 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index f221e9cab4d0..119b64c9c1d9 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -970,6 +970,24 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 	bfq_put_queue(bfqq);
 }
 
+static void decrease_groups_with_pending_reqs(struct bfq_data *bfqd,
+					      struct bfq_queue *bfqq)
+{
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	struct bfq_entity *entity = bfqq->entity.parent;
+
+	/*
+	 * The decrement of num_groups_with_pending_reqs is performed
+	 * immediately when the last bfqq completes all the requests.
+	 */
+	if (!bfqq_group(bfqq)->num_entities_with_pending_reqs &&
+	    entity->in_groups_with_pending_reqs) {
+		entity->in_groups_with_pending_reqs = false;
+		bfqd->num_groups_with_pending_reqs--;
+	}
+#endif
+}
+
 /*
  * Invoke __bfq_weights_tree_remove on bfqq and decrement the number
  * of active groups for each queue's inactive parent entity.
@@ -977,8 +995,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			     struct bfq_queue *bfqq)
 {
-	struct bfq_entity *entity = bfqq->entity.parent;
-
 	/*
 	 * grab a ref to prevent bfqq to be freed in
 	 * __bfq_weights_tree_remove
@@ -991,41 +1007,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 	 */
 	__bfq_weights_tree_remove(bfqd, bfqq,
 				  &bfqd->queue_weights_tree);
-
-	for_each_entity(entity) {
-		struct bfq_sched_data *sd = entity->my_sched_data;
-
-		if (sd->next_in_service || sd->in_service_entity) {
-			/*
-			 * entity is still active, because either
-			 * next_in_service or in_service_entity is not
-			 * NULL (see the comments on the definition of
-			 * next_in_service for details on why
-			 * in_service_entity must be checked too).
-			 *
-			 * As a consequence, its parent entities are
-			 * active as well, and thus this loop must
-			 * stop here.
-			 */
-			break;
-		}
-
-		/*
-		 * The decrement of num_groups_with_pending_reqs is
-		 * not performed immediately upon the deactivation of
-		 * entity, but it is delayed to when it also happens
-		 * that the first leaf descendant bfqq of entity gets
-		 * all its pending requests completed. The following
-		 * instructions perform this delayed decrement, if
-		 * needed. See the comments on
-		 * num_groups_with_pending_reqs for details.
-		 */
-		if (entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = false;
-			bfqd->num_groups_with_pending_reqs--;
-		}
-	}
-
+	decrease_groups_with_pending_reqs(bfqd, bfqq);
 	bfq_put_queue(bfqq);
 }
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 5d904851519c..9ec72bd24fc2 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -495,7 +495,7 @@ struct bfq_data {
 	struct rb_root_cached queue_weights_tree;
 
 	/*
-	 * Number of groups with at least one descendant process that
+	 * Number of groups with at least one process that
 	 * has at least one request waiting for completion. Note that
 	 * this accounts for also requests already dispatched, but not
 	 * yet completed. Therefore this number of groups may differ
@@ -508,14 +508,14 @@ struct bfq_data {
 	 * bfq_better_to_idle().
 	 *
 	 * However, it is hard to compute this number exactly, for
-	 * groups with multiple descendant processes. Consider a group
-	 * that is inactive, i.e., that has no descendant process with
+	 * groups with multiple processes. Consider a group
+	 * that is inactive, i.e., that has no process with
 	 * pending I/O inside BFQ queues. Then suppose that
 	 * num_groups_with_pending_reqs is still accounting for this
-	 * group, because the group has descendant processes with some
+	 * group, because the group has processes with some
 	 * I/O request still in flight. num_groups_with_pending_reqs
 	 * should be decremented when the in-flight request of the
-	 * last descendant process is finally completed (assuming that
+	 * last process is finally completed (assuming that
 	 * nothing else has changed for the group in the meantime, in
 	 * terms of composition of the group and active/inactive state of child
 	 * groups and processes). To accomplish this, an additional
@@ -524,7 +524,7 @@ struct bfq_data {
 	 * we resort to the following tradeoff between simplicity and
 	 * accuracy: for an inactive group that is still counted in
 	 * num_groups_with_pending_reqs, we decrement
-	 * num_groups_with_pending_reqs when the first descendant
+	 * num_groups_with_pending_reqs when the last
 	 * process of the group remains with no request waiting for
 	 * completion.
 	 *
@@ -532,12 +532,12 @@ struct bfq_data {
 	 * carefulness: to avoid multiple decrements, we flag a group,
 	 * more precisely an entity representing a group, as still
 	 * counted in num_groups_with_pending_reqs when it becomes
-	 * inactive. Then, when the first descendant queue of the
+	 * inactive. Then, when the last queue of the
 	 * entity remains with no request waiting for completion,
 	 * num_groups_with_pending_reqs is decremented, and this flag
 	 * is reset. After this flag is reset for the entity,
 	 * num_groups_with_pending_reqs won't be decremented any
-	 * longer in case a new descendant queue of the entity remains
+	 * longer in case a new queue of the entity remains
 	 * with no request waiting for completion.
 	 */
 	unsigned int num_groups_with_pending_reqs;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH -next 11/11] block, bfq: cleanup bfqq_group()
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion Yu Kuai
                   ` (9 preceding siblings ...)
  2022-03-05  9:12 ` [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier Yu Kuai
@ 2022-03-05  9:12 ` Yu Kuai
  2022-03-11  6:31 ` [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion yukuai (C)
  2022-04-13 11:12 ` Jan Kara
  12 siblings, 0 replies; 32+ messages in thread
From: Yu Kuai @ 2022-03-05  9:12 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yukuai3, yi.zhang

Now that 'bfqq->entity.parent' is set to the root group's entity
instead of NULL when bfqq is under the root group, the NULL check in
bfqq_group() is no longer needed.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-cgroup.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 58acaf14a91d..1fcb13e97cf0 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -307,11 +307,7 @@ static struct bfq_group *bfqg_parent(struct bfq_group *bfqg)
 
 struct bfq_group *bfqq_group(struct bfq_queue *bfqq)
 {
-	struct bfq_entity *group_entity = bfqq->entity.parent;
-
-	return group_entity ? container_of(group_entity, struct bfq_group,
-					   entity) :
-			      bfqq->bfqd->root_group;
+	return container_of(bfqq->entity.parent, struct bfq_group, entity);
 }
 
 /*
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion Yu Kuai
                   ` (10 preceding siblings ...)
  2022-03-05  9:12 ` [PATCH -next 11/11] block, bfq: cleanup bfqq_group() Yu Kuai
@ 2022-03-11  6:31 ` yukuai (C)
  2022-03-17  1:49   ` yukuai (C)
  2022-04-13 11:12 ` Jan Kara
  12 siblings, 1 reply; 32+ messages in thread
From: yukuai (C) @ 2022-03-11  6:31 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yi.zhang

friendly ping ...

On 2022/03/05 17:11, Yu Kuai wrote:
> Currently, bfq can't handle sync io concurrently as long as they
> are not issued from root group. This is because
> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
> bfq_asymmetric_scenario().
> 
> This patchset tries to support concurrent sync io if all the sync ios
> are issued from the same cgroup:
> 
> 1) Count root_group into 'num_groups_with_pending_reqs', patch 1-5;
> 
> 2) Don't idle if 'num_groups_with_pending_reqs' is 1, patch 6;
> 
> 3) Don't count the group if the group doesn't have pending requests,
> while it's child groups may have pending requests, patch 7;
> 
> This is because, for example:
> if sync ios are issued from cgroup /root/c1/c2, root, c1 and c2
> will all be counted into 'num_groups_with_pending_reqs',
> which makes it impossible to handle sync ios concurrently.
> 
> 4) Decrease 'num_groups_with_pending_reqs' when the last queue completes
> all the requests, while child groups may still have pending
> requests, patch 8-10;
> 
> This is because, for example:
> t1 issue sync io on root group, t2 and t3 issue sync io on the same
> child group. num_groups_with_pending_reqs is 2 now.
> After t1 stopped, num_groups_with_pending_reqs is still 2. sync io from
> t2 and t3 still can't be handled concurrently.
> 
> fio test script: startdelay is used to avoid queue merging
> [global]
> filename=/dev/nvme0n1
> allow_mounted_write=0
> ioengine=psync
> direct=1
> ioscheduler=bfq
> offset_increment=10g
> group_reporting
> rw=randwrite
> bs=4k
> 
> [test1]
> numjobs=1
> 
> [test2]
> startdelay=1
> numjobs=1
> 
> [test3]
> startdelay=2
> numjobs=1
> 
> [test4]
> startdelay=3
> numjobs=1
> 
> [test5]
> startdelay=4
> numjobs=1
> 
> [test6]
> startdelay=5
> numjobs=1
> 
> [test7]
> startdelay=6
> numjobs=1
> 
> [test8]
> startdelay=7
> numjobs=1
> 
> test result:
> running fio on root cgroup
> v5.17-rc6:	   550 Mib/s
> v5.17-rc6-patched: 550 Mib/s
> 
> running fio on non-root cgroup
> v5.17-rc6:	   349 Mib/s
> v5.17-rc6-patched: 550 Mib/s
> 
> Yu Kuai (11):
>    block, bfq: add new apis to iterate bfq entities
>    block, bfq: apply news apis where root group is not expected
>    block, bfq: cleanup for __bfq_activate_requeue_entity()
>    block, bfq: move the increasement of 'num_groups_with_pending_reqs' to
>      it's caller
>    block, bfq: count root group into 'num_groups_with_pending_reqs'
>    block, bfq: do not idle if only one cgroup is activated
>    block, bfq: only count parent bfqg when bfqq is activated
>    block, bfq: record how many queues have pending requests in bfq_group
>    block, bfq: move forward __bfq_weights_tree_remove()
>    block, bfq: decrease 'num_groups_with_pending_reqs' earlier
>    block, bfq: cleanup bfqq_group()
> 
>   block/bfq-cgroup.c  | 13 +++----
>   block/bfq-iosched.c | 87 +++++++++++++++++++++++----------------------
>   block/bfq-iosched.h | 41 +++++++++++++--------
>   block/bfq-wf2q.c    | 56 +++++++++++++++--------------
>   4 files changed, 106 insertions(+), 91 deletions(-)
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-03-11  6:31 ` [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion yukuai (C)
@ 2022-03-17  1:49   ` yukuai (C)
  2022-03-18 12:38     ` Paolo Valente
  2022-03-25  7:30     ` yukuai (C)
  0 siblings, 2 replies; 32+ messages in thread
From: yukuai (C) @ 2022-03-17  1:49 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yi.zhang

friendly ping ...

On 2022/03/11 14:31, yukuai (C) wrote:
> friendly ping ...
> 
> [full quote of the cover letter snipped]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-03-17  1:49   ` yukuai (C)
@ 2022-03-18 12:38     ` Paolo Valente
  2022-03-19  2:34       ` yukuai (C)
  2022-03-25  7:30     ` yukuai (C)
  1 sibling, 1 reply; 32+ messages in thread
From: Paolo Valente @ 2022-03-18 12:38 UTC (permalink / raw)
  To: yukuai (C)
  Cc: Tejun Heo, Jens Axboe, Jan Kara, cgroups, linux-block, LKML, yi.zhang

Hi,
could you please add pointers to the thread(s) where we have already revised this series (if we have)? I don't see any reference to them in this cover letter.

Paolo

> On 17 Mar 2022, at 02:49, yukuai (C) <yukuai3@huawei.com> wrote:
> 
> friendly ping ...
> 
> On 2022/03/11 14:31, yukuai (C) wrote:
>> friendly ping ...
>> On 2022/03/05 17:11, Yu Kuai wrote:
>>> [full quote of the cover letter snipped]


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-03-18 12:38     ` Paolo Valente
@ 2022-03-19  2:34       ` yukuai (C)
  0 siblings, 0 replies; 32+ messages in thread
From: yukuai (C) @ 2022-03-19  2:34 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Tejun Heo, Jens Axboe, Jan Kara, cgroups, linux-block, LKML, yi.zhang

On 2022/03/18 20:38, Paolo Valente wrote:
> Hi,
> could you please add pointers to the thread(s) where we have already revised this series (if we have). I don't see any reference to that in this cover letter.

Hi,

Ok, sorry for that; the previous threads are listed below.

This is a new patchset after the RFC:
- Fixed some terms in commit messages and comments
- Added some cleanup patches

New RFC: uses a new solution, which has little in common with
previous versions.
https://lore.kernel.org/lkml/20211127101132.486806-1-yukuai3@huawei.com/T/
- As suggested by Paolo, count the root group into
'num_groups_with_pending_reqs' instead of handling it separately.
- Changed the patchset title
- New changes about when to modify 'num_groups_with_pending_reqs'

Original v4:
https://lore.kernel.org/lkml/20211014014556.3597008-2-yukuai3@huawei.com/t/
  - Fixed a compile warning when CONFIG_BLK_CGROUP is not enabled.

Original v3:
https://www.spinics.net/lists/linux-block/msg74836.html
  - Instead of tracking each queue in the root group, track the root
  group directly, just like non-root groups.
  - Removed patches 3 and 4 from the series.

Original v2:
https://lore.kernel.org/lkml/20210806020826.1407257-1-yukuai3@huawei.com/
- As suggested by Paolo, added support to track whether root_group has
  any pending requests, and use that to handle the situation when only
  one group is activated while the root group doesn't have any pending
  requests.
  - Modified the commit message in patch 2

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-03-17  1:49   ` yukuai (C)
  2022-03-18 12:38     ` Paolo Valente
@ 2022-03-25  7:30     ` yukuai (C)
  2022-04-01  3:43       ` yukuai (C)
  1 sibling, 1 reply; 32+ messages in thread
From: yukuai (C) @ 2022-03-25  7:30 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yi.zhang

friendly ping ...

On 2022/03/17 9:49, yukuai (C) wrote:
> friendly ping ...
> 
> [full quote of the cover letter snipped]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-03-25  7:30     ` yukuai (C)
@ 2022-04-01  3:43       ` yukuai (C)
  2022-04-08  6:50         ` yukuai (C)
  0 siblings, 1 reply; 32+ messages in thread
From: yukuai (C) @ 2022-04-01  3:43 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yi.zhang

friendly ping ...

在 2022/03/25 15:30, yukuai (C) 写道:
> friendly ping ...
> 
> 在 2022/03/17 9:49, yukuai (C) 写道:
>> friendly ping ...
>>
>> 在 2022/03/11 14:31, yukuai (C) 写道:
>>> friendly ping ...
>>>
>>> 在 2022/03/05 17:11, Yu Kuai 写道:
>>>> Currently, bfq can't handle sync io concurrently as long as they
>>>> are not issued from root group. This is because
>>>> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
>>>> bfq_asymmetric_scenario().
>>>>
>>>> This patchset tries to support concurrent sync io if all the sync ios
>>>> are issued from the same cgroup:
>>>>
>>>> 1) Count root_group into 'num_groups_with_pending_reqs', patch 1-5;
>>>>
>>>> 2) Don't idle if 'num_groups_with_pending_reqs' is 1, patch 6;
>>>>
>>>> 3) Don't count the group if the group doesn't have pending requests,
>>>> while it's child groups may have pending requests, patch 7;
>>>>
>>>> This is because, for example:
>>>> if sync ios are issued from cgroup /root/c1/c2, root, c1 and c2
>>>> will all be counted into 'num_groups_with_pending_reqs',
>>>> which makes it impossible to handle sync ios concurrently.
>>>>
>>>> 4) Decrease 'num_groups_with_pending_reqs' when the last queue 
>>>> completes
>>>> all the requests, while child groups may still have pending
>>>> requests, patch 8-10;
>>>>
>>>> This is because, for example:
>>>> t1 issue sync io on root group, t2 and t3 issue sync io on the same
>>>> child group. num_groups_with_pending_reqs is 2 now.
>>>> After t1 stopped, num_groups_with_pending_reqs is still 2. sync io from
>>>> t2 and t3 still can't be handled concurrently.
>>>>
>>>> fio test script: startdelay is used to avoid queue merging
>>>> [global]
>>>> filename=/dev/nvme0n1
>>>> allow_mounted_write=0
>>>> ioengine=psync
>>>> direct=1
>>>> ioscheduler=bfq
>>>> offset_increment=10g
>>>> group_reporting
>>>> rw=randwrite
>>>> bs=4k
>>>>
>>>> [test1]
>>>> numjobs=1
>>>>
>>>> [test2]
>>>> startdelay=1
>>>> numjobs=1
>>>>
>>>> [test3]
>>>> startdelay=2
>>>> numjobs=1
>>>>
>>>> [test4]
>>>> startdelay=3
>>>> numjobs=1
>>>>
>>>> [test5]
>>>> startdelay=4
>>>> numjobs=1
>>>>
>>>> [test6]
>>>> startdelay=5
>>>> numjobs=1
>>>>
>>>> [test7]
>>>> startdelay=6
>>>> numjobs=1
>>>>
>>>> [test8]
>>>> startdelay=7
>>>> numjobs=1
>>>>
>>>> test result:
>>>> running fio on root cgroup
>>>> v5.17-rc6:       550 Mib/s
>>>> v5.17-rc6-patched: 550 Mib/s
>>>>
>>>> running fio on non-root cgroup
>>>> v5.17-rc6:       349 Mib/s
>>>> v5.17-rc6-patched: 550 Mib/s
>>>>
>>>> Yu Kuai (11):
>>>>    block, bfq: add new apis to iterate bfq entities
>>>>    block, bfq: apply news apis where root group is not expected
>>>>    block, bfq: cleanup for __bfq_activate_requeue_entity()
>>>>    block, bfq: move the increasement of 
>>>> 'num_groups_with_pending_reqs' to
>>>>      it's caller
>>>>    block, bfq: count root group into 'num_groups_with_pending_reqs'
>>>>    block, bfq: do not idle if only one cgroup is activated
>>>>    block, bfq: only count parent bfqg when bfqq is activated
>>>>    block, bfq: record how many queues have pending requests in 
>>>> bfq_group
>>>>    block, bfq: move forward __bfq_weights_tree_remove()
>>>>    block, bfq: decrease 'num_groups_with_pending_reqs' earlier
>>>>    block, bfq: cleanup bfqq_group()
>>>>
>>>>   block/bfq-cgroup.c  | 13 +++----
>>>>   block/bfq-iosched.c | 87 
>>>> +++++++++++++++++++++++----------------------
>>>>   block/bfq-iosched.h | 41 +++++++++++++--------
>>>>   block/bfq-wf2q.c    | 56 +++++++++++++++--------------
>>>>   4 files changed, 106 insertions(+), 91 deletions(-)
>>>>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-04-01  3:43       ` yukuai (C)
@ 2022-04-08  6:50         ` yukuai (C)
  0 siblings, 0 replies; 32+ messages in thread
From: yukuai (C) @ 2022-04-08  6:50 UTC (permalink / raw)
  To: tj, axboe, paolo.valente, jack
  Cc: cgroups, linux-block, linux-kernel, yi.zhang

friendly ping ...

On 2022/04/01 11:43, yukuai (C) wrote:
> friendly ping ...
> 
> [full quote of the cover letter snipped]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH -next 02/11] block, bfq: apply news apis where root group is not expected
  2022-03-05  9:11 ` [PATCH -next 02/11] block, bfq: apply news apis where root group is not expected Yu Kuai
@ 2022-04-13  9:50   ` Jan Kara
  2022-04-13 10:59     ` Jan Kara
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Kara @ 2022-04-13  9:50 UTC (permalink / raw)
  To: Yu Kuai
  Cc: tj, axboe, paolo.valente, jack, cgroups, linux-block,
	linux-kernel, yi.zhang

On Sat 05-03-22 17:11:56, Yu Kuai wrote:
> 'entity->sched_data' is set to parent group's sched_data, thus it's NULL
> for root group. And for_each_entity() is used widely to access
> 'entity->sched_data', thus aplly news apis if root group is not
                             ^^ apply

> expected. Prepare to count root group into 'num_groups_with_pending_reqs'.
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  block/bfq-iosched.c |  2 +-
>  block/bfq-iosched.h | 22 ++++++++--------------
>  block/bfq-wf2q.c    | 10 +++++-----
>  3 files changed, 14 insertions(+), 20 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 69ddf6b0f01d..3bc7a7686aad 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -4393,7 +4393,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
>  	 * service with the same budget.
>  	 */
>  	entity = entity->parent;
> -	for_each_entity(entity)
> +	for_each_entity_not_root(entity)
>  		entity->service = 0;
>  }

So why is it a problem to clear the service for root cgroup here?

> diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
> index f8eb340381cf..c4cb935a615a 100644
> --- a/block/bfq-wf2q.c
> +++ b/block/bfq-wf2q.c
> @@ -815,7 +815,7 @@ void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
>  		bfqq->service_from_wr += served;
>  
>  	bfqq->service_from_backlogged += served;
> -	for_each_entity(entity) {
> +	for_each_entity_not_root(entity) {
>  		st = bfq_entity_service_tree(entity);

Hum, right so how come this was not crashing? Because entity->sched_data is
indeed NULL for bfqd->root_group->entity and so bfq_entity_service_tree()
returned some bogus pointer? Similarly for the cases you are changing
below?

								Honza

> 
>  		entity->service += served;
> @@ -1201,7 +1201,7 @@ static void bfq_deactivate_entity(struct bfq_entity *entity,
>  	struct bfq_sched_data *sd;
>  	struct bfq_entity *parent = NULL;
>  
> -	for_each_entity_safe(entity, parent) {
> +	for_each_entity_not_root_safe(entity, parent) {
>  		sd = entity->sched_data;
>  
>  		if (!__bfq_deactivate_entity(entity, ins_into_idle_tree)) {
> @@ -1270,7 +1270,7 @@ static void bfq_deactivate_entity(struct bfq_entity *entity,
>  	 * is not the case.
>  	 */
>  	entity = parent;
> -	for_each_entity(entity) {
> +	for_each_entity_not_root(entity) {
>  		/*
>  		 * Invoke __bfq_requeue_entity on entity, even if
>  		 * already active, to requeue/reposition it in the
> @@ -1570,7 +1570,7 @@ struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
>  	 * We can finally update all next-to-serve entities along the
>  	 * path from the leaf entity just set in service to the root.
>  	 */
> -	for_each_entity(entity) {
> +	for_each_entity_not_root(entity) {
>  		struct bfq_sched_data *sd = entity->sched_data;
>  
>  		if (!bfq_update_next_in_service(sd, NULL, false))
> @@ -1597,7 +1597,7 @@ bool __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
>  	 * execute the final step: reset in_service_entity along the
>  	 * path from entity to the root.
>  	 */
> -	for_each_entity(entity)
> +	for_each_entity_not_root(entity)
>  		entity->sched_data->in_service_entity = NULL;
>  
>  	/*
> -- 
> 2.31.1
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH -next 02/11] block, bfq: apply news apis where root group is not expected
  2022-04-13  9:50   ` Jan Kara
@ 2022-04-13 10:59     ` Jan Kara
  2022-04-13 11:11       ` yukuai (C)
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Kara @ 2022-04-13 10:59 UTC (permalink / raw)
  To: Yu Kuai
  Cc: tj, axboe, paolo.valente, jack, cgroups, linux-block,
	linux-kernel, yi.zhang

On Wed 13-04-22 11:50:44, Jan Kara wrote:
> On Sat 05-03-22 17:11:56, Yu Kuai wrote:
> > 'entity->sched_data' is set to parent group's sched_data, thus it's NULL
> > for root group. And for_each_entity() is used widely to access
> > 'entity->sched_data', thus aplly news apis if root group is not
>                              ^^ apply
> 
> > expected. Prepare to count root group into 'num_groups_with_pending_reqs'.
> > 
> > Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> > ---
> >  block/bfq-iosched.c |  2 +-
> >  block/bfq-iosched.h | 22 ++++++++--------------
> >  block/bfq-wf2q.c    | 10 +++++-----
> >  3 files changed, 14 insertions(+), 20 deletions(-)
> > 
> > diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> > index 69ddf6b0f01d..3bc7a7686aad 100644
> > --- a/block/bfq-iosched.c
> > +++ b/block/bfq-iosched.c
> > @@ -4393,7 +4393,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
> >  	 * service with the same budget.
> >  	 */
> >  	entity = entity->parent;
> > -	for_each_entity(entity)
> > +	for_each_entity_not_root(entity)
> >  		entity->service = 0;
> >  }
> 
> So why is it a problem to clear the service for root cgroup here?
> 
> > diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
> > index f8eb340381cf..c4cb935a615a 100644
> > --- a/block/bfq-wf2q.c
> > +++ b/block/bfq-wf2q.c
> > @@ -815,7 +815,7 @@ void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
> >  		bfqq->service_from_wr += served;
> >  
> >  	bfqq->service_from_backlogged += served;
> > -	for_each_entity(entity) {
> > +	for_each_entity_not_root(entity) {
> >  		st = bfq_entity_service_tree(entity);
> 
> Hum, right so how come this was not crashing? Because entity->sched_data is
> indeed NULL for bfqd->root_group->entity and so bfq_entity_service_tree()
> returned some bogus pointer? Similarly for the cases you are changing
> below?

Oh, I see now. Because for_each_entity() currently does not iterate through
root cgroup because it has root_group->my_entity set to NULL and thus as a
result immediate children of root_group will have their parent set to NULL
as well.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH -next 05/11] block, bfq: count root group into 'num_groups_with_pending_reqs'
  2022-03-05  9:11 ` [PATCH -next 05/11] block, bfq: count root group into 'num_groups_with_pending_reqs' Yu Kuai
@ 2022-04-13 11:05   ` Jan Kara
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Kara @ 2022-04-13 11:05 UTC (permalink / raw)
  To: Yu Kuai
  Cc: tj, axboe, paolo.valente, jack, cgroups, linux-block,
	linux-kernel, yi.zhang

On Sat 05-03-22 17:11:59, Yu Kuai wrote:
> Root group is not counted into 'num_groups_with_pending_reqs' because
> 'entity->parent' is set to NULL for child entities, thus
> for_each_entity() can't access root group.
> 
> This patch set root_group's entity to 'entity->parent' for child
> entities, this way root_group will be counted because for_each_entity()
> can access root_group in bfq_activate_requeue_entity(),
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  block/bfq-cgroup.c  | 6 +++---
>  block/bfq-iosched.h | 3 ++-
>  block/bfq-wf2q.c    | 5 +++++
>  3 files changed, 10 insertions(+), 4 deletions(-)

I think you can remove bfqg->my_entity after this patch, can't you? Because
effectively its only purpose was so that you don't have to special-case
children of root_group...

								Honza

> 
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index 420eda2589c0..6cd65b5e790d 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -436,7 +436,7 @@ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg)
>  		 */
>  		bfqg_and_blkg_get(bfqg);
>  	}
> -	entity->parent = bfqg->my_entity; /* NULL for root group */
> +	entity->parent = &bfqg->entity;
>  	entity->sched_data = &bfqg->sched_data;
>  }
>  
> @@ -581,7 +581,7 @@ static void bfq_group_set_parent(struct bfq_group *bfqg,
>  	struct bfq_entity *entity;
>  
>  	entity = &bfqg->entity;
> -	entity->parent = parent->my_entity;
> +	entity->parent = &parent->entity;
>  	entity->sched_data = &parent->sched_data;
>  }
>  
> @@ -688,7 +688,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  	else if (bfqd->last_bfqq_created == bfqq)
>  		bfqd->last_bfqq_created = NULL;
>  
> -	entity->parent = bfqg->my_entity;
> +	entity->parent = &bfqg->entity;
>  	entity->sched_data = &bfqg->sched_data;
>  	/* pin down bfqg and its associated blkg  */
>  	bfqg_and_blkg_get(bfqg);
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index ddd8eff5c272..4530ab8b42ac 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -1024,13 +1024,14 @@ extern struct blkcg_policy blkcg_policy_bfq;
>  /* - interface of the internal hierarchical B-WF2Q+ scheduler - */
>  
>  #ifdef CONFIG_BFQ_GROUP_IOSCHED
> -/* stop at one of the child entities of the root group */
> +/* stop at root group */
>  #define for_each_entity(entity)	\
>  	for (; entity ; entity = entity->parent)
>  
>  #define is_root_entity(entity) \
>  	(entity->sched_data == NULL)
>  
> +/* stop at one of the child entities of the root group */
>  #define for_each_entity_not_root(entity) \
>  	for (; entity && !is_root_entity(entity); entity = entity->parent)
>  
> diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
> index 17f1d2c5b8dc..138a2950b841 100644
> --- a/block/bfq-wf2q.c
> +++ b/block/bfq-wf2q.c
> @@ -1125,6 +1125,11 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
>  {
>  	for_each_entity(entity) {
>  		bfq_update_groups_with_pending_reqs(entity);
> +
> +		/* root group is not in service tree */
> +		if (is_root_entity(entity))
> +			break;
> +
>  		__bfq_activate_requeue_entity(entity, non_blocking_wait_rq);
>  
>  		if (!bfq_update_next_in_service(entity->sched_data, entity,
> -- 
> 2.31.1
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH -next 02/11] block, bfq: apply news apis where root group is not expected
  2022-04-13 10:59     ` Jan Kara
@ 2022-04-13 11:11       ` yukuai (C)
  0 siblings, 0 replies; 32+ messages in thread
From: yukuai (C) @ 2022-04-13 11:11 UTC (permalink / raw)
  To: Jan Kara
  Cc: tj, axboe, paolo.valente, cgroups, linux-block, linux-kernel, yi.zhang

On 2022/04/13 18:59, Jan Kara wrote:
> On Wed 13-04-22 11:50:44, Jan Kara wrote:
>> On Sat 05-03-22 17:11:56, Yu Kuai wrote:
>>> 'entity->sched_data' is set to parent group's sched_data, thus it's NULL
>>> for root group. And for_each_entity() is used widely to access
>>> 'entity->sched_data', thus aplly news apis if root group is not
>>                               ^^ apply
>>
Hi,

Thanks for spotting this.
>>> expected. Prepare to count root group into 'num_groups_with_pending_reqs'.
>>>
>>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>>> ---
>>>   block/bfq-iosched.c |  2 +-
>>>   block/bfq-iosched.h | 22 ++++++++--------------
>>>   block/bfq-wf2q.c    | 10 +++++-----
>>>   3 files changed, 14 insertions(+), 20 deletions(-)
>>>
>>> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
>>> index 69ddf6b0f01d..3bc7a7686aad 100644
>>> --- a/block/bfq-iosched.c
>>> +++ b/block/bfq-iosched.c
>>> @@ -4393,7 +4393,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
>>>   	 * service with the same budget.
>>>   	 */
>>>   	entity = entity->parent;
>>> -	for_each_entity(entity)
>>> +	for_each_entity_not_root(entity)
>>>   		entity->service = 0;
>>>   }
>>
>> So why is it a problem to clear the service for root cgroup here?

This is not a problem in theory, however 'entity->service' should always
be 0 for root_group. Thus I think there is no need to do this.

>>
>>> diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
>>> index f8eb340381cf..c4cb935a615a 100644
>>> --- a/block/bfq-wf2q.c
>>> +++ b/block/bfq-wf2q.c
>>> @@ -815,7 +815,7 @@ void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
>>>   		bfqq->service_from_wr += served;
>>>   
>>>   	bfqq->service_from_backlogged += served;
>>> -	for_each_entity(entity) {
>>> +	for_each_entity_not_root(entity) {
>>>   		st = bfq_entity_service_tree(entity);
>>
>> Hum, right so how come this was not crashing? Because entity->sched_data is
>> indeed NULL for bfqd->root_group->entity and so bfq_entity_service_tree()
>> returned some bogus pointer? Similarly for the cases you are changing
>> below?
> 
> Oh, I see now. Because for_each_entity() currently does not iterate through
> root cgroup because it has root_group->my_entity set to NULL and thus as a
> result immediate children of root_group will have their parent set to NULL
> as well.

Yes, currently for_each_entity() and for_each_entity_not_root() are the
same: both stop before root_group.

With patch 5, for_each_entity_not_root() will stay the same, while
for_each_entity() will additionally access root_group's entity. Because
bfq_entity_service_tree() accesses 'entity->sched_data', I changed to the
new API here to avoid a null-ptr-deref after patch 5.

Same reasons for below changes.

Thanks,
Kuai

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion Yu Kuai
                   ` (11 preceding siblings ...)
  2022-03-11  6:31 ` [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion yukuai (C)
@ 2022-04-13 11:12 ` Jan Kara
  2022-04-13 11:33   ` yukuai (C)
  2022-04-26 14:24   ` Paolo Valente
  12 siblings, 2 replies; 32+ messages in thread
From: Jan Kara @ 2022-04-13 11:12 UTC (permalink / raw)
  To: Yu Kuai
  Cc: tj, axboe, paolo.valente, jack, cgroups, linux-block,
	linux-kernel, yi.zhang

On Sat 05-03-22 17:11:54, Yu Kuai wrote:
> Currently, bfq can't handle sync io concurrently as long as they
> are not issued from root group. This is because
> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
> bfq_asymmetric_scenario().
> 
> This patchset tries to support concurrent sync io if all the sync ios
> are issued from the same cgroup:
> 
> 1) Count root_group into 'num_groups_with_pending_reqs', patch 1-5;

Seeing the complications and special casing for root_group I wonder: Won't
we be better off to create fake bfq_sched_data in bfq_data and point
root_group->sched_data there? AFAICS it would simplify the code
considerably as root_group would be just another bfq_group, no need to
special case it in various places, no games with bfqg->my_entity, etc.
Paolo, do you see any problem with that?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  2022-03-05  9:12 ` [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier Yu Kuai
@ 2022-04-13 11:28   ` Jan Kara
  2022-04-13 11:40     ` yukuai (C)
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Kara @ 2022-04-13 11:28 UTC (permalink / raw)
  To: Yu Kuai
  Cc: tj, axboe, paolo.valente, jack, cgroups, linux-block,
	linux-kernel, yi.zhang

On Sat 05-03-22 17:12:04, Yu Kuai wrote:
> Currently 'num_groups_with_pending_reqs' won't be decreased when
> the group doesn't have any pending requests, while some child group
> still have pending requests. The decrement is delayed to when all the
> child groups doesn't have any pending requests.
> 
> For example:
> 1) t1 issue sync io on root group, t2 and t3 issue sync io on the same
> child group. num_groups_with_pending_reqs is 2 now.
> 2) t1 stopped, num_groups_with_pending_reqs is still 2. io from t2 and
> t3 still can't be handled concurrently.
> 
> Fix the problem by decreasing 'num_groups_with_pending_reqs'
> immediately upon the weights_tree removal of last bfqq of the group.
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>

So I'd find the logic easier to follow if you completely removed
entity->in_groups_with_pending_reqs and did updates of
bfqd->num_groups_with_pending_reqs like:

	if (!bfqg->num_entities_with_pending_reqs++)
		bfqd->num_groups_with_pending_reqs++;

and similarly on the remove side. And there would we literally two places
(addition & removal from weight tree) that would need to touch these
counters. Pretty obvious and all can be done in patch 9.

								Honza

> ---
>  block/bfq-iosched.c | 56 +++++++++++++++------------------------------
>  block/bfq-iosched.h | 16 ++++++-------
>  2 files changed, 27 insertions(+), 45 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index f221e9cab4d0..119b64c9c1d9 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -970,6 +970,24 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
>  	bfq_put_queue(bfqq);
>  }
>  
> +static void decrease_groups_with_pending_reqs(struct bfq_data *bfqd,
> +					      struct bfq_queue *bfqq)
> +{
> +#ifdef CONFIG_BFQ_GROUP_IOSCHED
> +	struct bfq_entity *entity = bfqq->entity.parent;
> +
> +	/*
> +	 * The decrement of num_groups_with_pending_reqs is performed
> +	 * immediately when the last bfqq completes all the requests.
> +	 */
> +	if (!bfqq_group(bfqq)->num_entities_with_pending_reqs &&
> +	    entity->in_groups_with_pending_reqs) {
> +		entity->in_groups_with_pending_reqs = false;
> +		bfqd->num_groups_with_pending_reqs--;
> +	}
> +#endif
> +}
> +
>  /*
>   * Invoke __bfq_weights_tree_remove on bfqq and decrement the number
>   * of active groups for each queue's inactive parent entity.
> @@ -977,8 +995,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
>  void bfq_weights_tree_remove(struct bfq_data *bfqd,
>  			     struct bfq_queue *bfqq)
>  {
> -	struct bfq_entity *entity = bfqq->entity.parent;
> -
>  	/*
>  	 * grab a ref to prevent bfqq to be freed in
>  	 * __bfq_weights_tree_remove
> @@ -991,41 +1007,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
>  	 */
>  	__bfq_weights_tree_remove(bfqd, bfqq,
>  				  &bfqd->queue_weights_tree);
> -
> -	for_each_entity(entity) {
> -		struct bfq_sched_data *sd = entity->my_sched_data;
> -
> -		if (sd->next_in_service || sd->in_service_entity) {
> -			/*
> -			 * entity is still active, because either
> -			 * next_in_service or in_service_entity is not
> -			 * NULL (see the comments on the definition of
> -			 * next_in_service for details on why
> -			 * in_service_entity must be checked too).
> -			 *
> -			 * As a consequence, its parent entities are
> -			 * active as well, and thus this loop must
> -			 * stop here.
> -			 */
> -			break;
> -		}
> -
> -		/*
> -		 * The decrement of num_groups_with_pending_reqs is
> -		 * not performed immediately upon the deactivation of
> -		 * entity, but it is delayed to when it also happens
> -		 * that the first leaf descendant bfqq of entity gets
> -		 * all its pending requests completed. The following
> -		 * instructions perform this delayed decrement, if
> -		 * needed. See the comments on
> -		 * num_groups_with_pending_reqs for details.
> -		 */
> -		if (entity->in_groups_with_pending_reqs) {
> -			entity->in_groups_with_pending_reqs = false;
> -			bfqd->num_groups_with_pending_reqs--;
> -		}
> -	}
> -
> +	decrease_groups_with_pending_reqs(bfqd, bfqq);
>  	bfq_put_queue(bfqq);
>  }
>  
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index 5d904851519c..9ec72bd24fc2 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -495,7 +495,7 @@ struct bfq_data {
>  	struct rb_root_cached queue_weights_tree;
>  
>  	/*
> -	 * Number of groups with at least one descendant process that
> +	 * Number of groups with at least one process that
>  	 * has at least one request waiting for completion. Note that
>  	 * this accounts for also requests already dispatched, but not
>  	 * yet completed. Therefore this number of groups may differ
> @@ -508,14 +508,14 @@ struct bfq_data {
>  	 * bfq_better_to_idle().
>  	 *
>  	 * However, it is hard to compute this number exactly, for
> -	 * groups with multiple descendant processes. Consider a group
> -	 * that is inactive, i.e., that has no descendant process with
> +	 * groups with multiple processes. Consider a group
> +	 * that is inactive, i.e., that has no process with
>  	 * pending I/O inside BFQ queues. Then suppose that
>  	 * num_groups_with_pending_reqs is still accounting for this
> -	 * group, because the group has descendant processes with some
> +	 * group, because the group has processes with some
>  	 * I/O request still in flight. num_groups_with_pending_reqs
>  	 * should be decremented when the in-flight request of the
> -	 * last descendant process is finally completed (assuming that
> +	 * last process is finally completed (assuming that
>  	 * nothing else has changed for the group in the meantime, in
>  	 * terms of composition of the group and active/inactive state of child
>  	 * groups and processes). To accomplish this, an additional
> @@ -524,7 +524,7 @@ struct bfq_data {
>  	 * we resort to the following tradeoff between simplicity and
>  	 * accuracy: for an inactive group that is still counted in
>  	 * num_groups_with_pending_reqs, we decrement
> -	 * num_groups_with_pending_reqs when the first descendant
> +	 * num_groups_with_pending_reqs when the last
>  	 * process of the group remains with no request waiting for
>  	 * completion.
>  	 *
> @@ -532,12 +532,12 @@ struct bfq_data {
>  	 * carefulness: to avoid multiple decrements, we flag a group,
>  	 * more precisely an entity representing a group, as still
>  	 * counted in num_groups_with_pending_reqs when it becomes
> -	 * inactive. Then, when the first descendant queue of the
> +	 * inactive. Then, when the last queue of the
>  	 * entity remains with no request waiting for completion,
>  	 * num_groups_with_pending_reqs is decremented, and this flag
>  	 * is reset. After this flag is reset for the entity,
>  	 * num_groups_with_pending_reqs won't be decremented any
> -	 * longer in case a new descendant queue of the entity remains
> +	 * longer in case a new queue of the entity remains
>  	 * with no request waiting for completion.
>  	 */
>  	unsigned int num_groups_with_pending_reqs;
> -- 
> 2.31.1
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-04-13 11:12 ` Jan Kara
@ 2022-04-13 11:33   ` yukuai (C)
  2022-04-26 14:24   ` Paolo Valente
  1 sibling, 0 replies; 32+ messages in thread
From: yukuai (C) @ 2022-04-13 11:33 UTC (permalink / raw)
  To: Jan Kara
  Cc: tj, axboe, paolo.valente, cgroups, linux-block, linux-kernel, yi.zhang

On 2022/04/13 19:12, Jan Kara wrote:
> On Sat 05-03-22 17:11:54, Yu Kuai wrote:
>> Currently, bfq can't handle sync io concurrently as long as they
>> are not issued from root group. This is because
>> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
>> bfq_asymmetric_scenario().
>>
>> This patchset tries to support concurrent sync io if all the sync ios
>> are issued from the same cgroup:
>>
>> 1) Count root_group into 'num_groups_with_pending_reqs', patch 1-5;
> 
> Seeing the complications and special casing for root_group I wonder: Won't
> we be better off to create fake bfq_sched_data in bfq_data and point
> root_group->sched_data there? AFAICS it would simplify the code

Hi,

That sounds like a good idea. In this case we only need to make sure the
fake service tree will always be empty, which means we only need to
special-case bfq_active/idle_insert for the fake service tree.

Thanks,
Kuai
> considerably as root_group would be just another bfq_group, no need to
> special case it in various places, no games with bfqg->my_entity, etc.
> Paolo, do you see any problem with that?
> 
> 								Honza
> 

* Re: [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  2022-04-13 11:28   ` Jan Kara
@ 2022-04-13 11:40     ` yukuai (C)
  2022-04-15  1:10       ` yukuai (C)
  0 siblings, 1 reply; 32+ messages in thread
From: yukuai (C) @ 2022-04-13 11:40 UTC (permalink / raw)
  To: Jan Kara
  Cc: tj, axboe, paolo.valente, cgroups, linux-block, linux-kernel, yi.zhang

On 2022/04/13 19:28, Jan Kara wrote:
> On Sat 05-03-22 17:12:04, Yu Kuai wrote:
>> Currently 'num_groups_with_pending_reqs' won't be decreased when
>> the group doesn't have any pending requests, while some child group
>> still have pending requests. The decrement is delayed to when all the
>> child groups doesn't have any pending requests.
>>
>> For example:
>> 1) t1 issue sync io on root group, t2 and t3 issue sync io on the same
>> child group. num_groups_with_pending_reqs is 2 now.
>> 2) t1 stopped, num_groups_with_pending_reqs is still 2. io from t2 and
>> t3 still can't be handled concurrently.
>>
>> Fix the problem by decreasing 'num_groups_with_pending_reqs'
>> immediately upon the weights_tree removal of last bfqq of the group.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> 
> So I'd find the logic easier to follow if you completely removed
> entity->in_groups_with_pending_reqs and did updates of
> bfqd->num_groups_with_pending_reqs like:
> 
> 	if (!bfqg->num_entities_with_pending_reqs++)
> 		bfqd->num_groups_with_pending_reqs++;
> 
Hi,

Indeed, this is an excellent idea, and much better than the way I did it.

Thanks,
Kuai

> and similarly on the remove side. And there would we literally two places
> (addition & removal from weight tree) that would need to touch these
> counters. Pretty obvious and all can be done in patch 9.
> 
> 								Honza
> 


* Re: [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  2022-04-13 11:40     ` yukuai (C)
@ 2022-04-15  1:10       ` yukuai (C)
  2022-04-19  9:49         ` Jan Kara
  0 siblings, 1 reply; 32+ messages in thread
From: yukuai (C) @ 2022-04-15  1:10 UTC (permalink / raw)
  To: Jan Kara
  Cc: tj, axboe, paolo.valente, cgroups, linux-block, linux-kernel, yi.zhang

On 2022/04/13 19:40, yukuai (C) wrote:
> On 2022/04/13 19:28, Jan Kara wrote:
>> On Sat 05-03-22 17:12:04, Yu Kuai wrote:
>>> Currently 'num_groups_with_pending_reqs' won't be decreased when
>>> the group doesn't have any pending requests, while some child group
>>> still have pending requests. The decrement is delayed to when all the
>>> child groups doesn't have any pending requests.
>>>
>>> For example:
>>> 1) t1 issue sync io on root group, t2 and t3 issue sync io on the same
>>> child group. num_groups_with_pending_reqs is 2 now.
>>> 2) t1 stopped, num_groups_with_pending_reqs is still 2. io from t2 and
>>> t3 still can't be handled concurrently.
>>>
>>> Fix the problem by decreasing 'num_groups_with_pending_reqs'
>>> immediately upon the weights_tree removal of last bfqq of the group.
>>>
>>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>>
>> So I'd find the logic easier to follow if you completely removed
>> entity->in_groups_with_pending_reqs and did updates of
>> bfqd->num_groups_with_pending_reqs like:
>>
>>     if (!bfqg->num_entities_with_pending_reqs++)
>>         bfqd->num_groups_with_pending_reqs++;
>>
> Hi,
> 
> Indeed, this is an excellent idea, and much better than the way I did it.
> 
> Thanks,
> Kuai
> 
>> and similarly on the remove side. And there would we literally two places
>> (addition & removal from weight tree) that would need to touch these
>> counters. Pretty obvious and all can be done in patch 9.
>>
>>                                 Honza
Hi, Jan

I think with this change, we can count root_group while activating bfqqs
that are under root_group, thus there is no need to modify
for_each_entity() (or fake bfq_sched_data) any more.

The special case is that weight racing bfqqs are not inserted into
weights tree, and I think this can be handled by adding a fake
bfq_weight_counter for such bfqqs.

What do you think ?

Kuai

* Re: [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  2022-04-15  1:10       ` yukuai (C)
@ 2022-04-19  9:49         ` Jan Kara
  2022-04-19 11:37           ` yukuai (C)
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Kara @ 2022-04-19  9:49 UTC (permalink / raw)
  To: yukuai (C)
  Cc: Jan Kara, tj, axboe, paolo.valente, cgroups, linux-block,
	linux-kernel, yi.zhang

On Fri 15-04-22 09:10:06, yukuai (C) wrote:
> On 2022/04/13 19:40, yukuai (C) wrote:
> > On 2022/04/13 19:28, Jan Kara wrote:
> > > On Sat 05-03-22 17:12:04, Yu Kuai wrote:
> > > > Currently 'num_groups_with_pending_reqs' won't be decreased when
> > > > the group doesn't have any pending requests, while some child group
> > > > still have pending requests. The decrement is delayed to when all the
> > > > child groups doesn't have any pending requests.
> > > > 
> > > > For example:
> > > > 1) t1 issue sync io on root group, t2 and t3 issue sync io on the same
> > > > child group. num_groups_with_pending_reqs is 2 now.
> > > > 2) t1 stopped, num_groups_with_pending_reqs is still 2. io from t2 and
> > > > t3 still can't be handled concurrently.
> > > > 
> > > > Fix the problem by decreasing 'num_groups_with_pending_reqs'
> > > > immediately upon the weights_tree removal of last bfqq of the group.
> > > > 
> > > > Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> > > 
> > > So I'd find the logic easier to follow if you completely removed
> > > entity->in_groups_with_pending_reqs and did updates of
> > > bfqd->num_groups_with_pending_reqs like:
> > > 
> > >     if (!bfqg->num_entities_with_pending_reqs++)
> > >         bfqd->num_groups_with_pending_reqs++;
> > > 
> > Hi,
> > 
> > Indeed, this is an excellent idea, and much better than the way I did it.
> > 
> > Thanks,
> > Kuai
> > 
> > > and similarly on the remove side. And there would we literally two places
> > > (addition & removal from weight tree) that would need to touch these
> > > counters. Pretty obvious and all can be done in patch 9.
> 
> I think with this change, we can count root_group while activating bfqqs
> that are under root_group, thus there is no need to modify
> for_each_entity(or fake bfq_sched_data) any more.

Sure, if you can make this work, it would be easier :)

> The special case is that weight racing bfqqs are not inserted into
> weights tree, and I think this can be handled by adding a fake
> bfq_weight_counter for such bfqqs.

Do you mean "weight raised bfqqs"? Yes, you are right they would need
special treatment - maybe bfq_weights_tree_add() is not the best function
to use for this and we should rather use insertion / removal from the
service tree for maintaining num_entities_with_pending_reqs counter?
I can even see we already have bfqg->active_entities so maybe we could just
somehow tweak that accounting and use it for our purposes?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  2022-04-19  9:49         ` Jan Kara
@ 2022-04-19 11:37           ` yukuai (C)
  2022-04-21  8:17             ` Jan Kara
  0 siblings, 1 reply; 32+ messages in thread
From: yukuai (C) @ 2022-04-19 11:37 UTC (permalink / raw)
  To: Jan Kara
  Cc: tj, axboe, paolo.valente, cgroups, linux-block, linux-kernel, yi.zhang

On 2022/04/19 17:49, Jan Kara wrote:
> On Fri 15-04-22 09:10:06, yukuai (C) wrote:
>> On 2022/04/13 19:40, yukuai (C) wrote:
>>> On 2022/04/13 19:28, Jan Kara wrote:
>>>> On Sat 05-03-22 17:12:04, Yu Kuai wrote:
>>>>> Currently 'num_groups_with_pending_reqs' is not decreased when a
>>>>> group no longer has any pending requests while some of its child
>>>>> groups still do. The decrement is delayed until none of the child
>>>>> groups has any pending requests.
>>>>>
>>>>> For example:
>>>>> 1) t1 issues sync io on the root group, t2 and t3 issue sync io on
>>>>> the same child group. num_groups_with_pending_reqs is 2 now.
>>>>> 2) After t1 stops, num_groups_with_pending_reqs is still 2, so io
>>>>> from t2 and t3 still can't be handled concurrently.
>>>>>
>>>>> Fix the problem by decreasing 'num_groups_with_pending_reqs'
>>>>> immediately upon the weights_tree removal of the last bfqq of the
>>>>> group.
>>>>>
>>>>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>>>>
>>>> So I'd find the logic easier to follow if you completely removed
>>>> entity->in_groups_with_pending_reqs and did updates of
>>>> bfqd->num_groups_with_pending_reqs like:
>>>>
>>>>      if (!bfqg->num_entities_with_pending_reqs++)
>>>>          bfqd->num_groups_with_pending_reqs++;
>>>>
>>> Hi,
>>>
>>> Indeed, this is an excellent idea, and much better than the way I did it.
>>>
>>> Thanks,
>>> Kuai
>>>
>>>> and similarly on the remove side. And there would be literally two places
>>>> (addition & removal from weight tree) that would need to touch these
>>>> counters. Pretty obvious and all can be done in patch 9.
>>
>> I think with this change, we can count root_group while activating bfqqs
>> that are under root_group, so there is no need to modify
>> for_each_entity() (or add a fake bfq_sched_data) any more.
> 
> Sure, if you can make this work, it would be easier :)
> 
>> The special case is that weight racing bfqqs are not inserted into
>> weights tree, and I think this can be handled by adding a fake
>> bfq_weight_counter for such bfqqs.
> 
> Do you mean "weight raised bfqqs"? Yes, you are right they would need
> special treatment - maybe bfq_weights_tree_add() is not the best function
> to use for this and we should rather use insertion / removal from the
> service tree for maintaining num_entities_with_pending_reqs counter?
> I can even see we already have bfqg->active_entities so maybe we could just
> somehow tweak that accounting and use it for our purposes?

The problem with using 'active_entities' is that a bfqq can be
deactivated while it still has pending requests.

Anyway, I posted a new version already, which still uses weights_tree
insertion / removal to count pending bfqqs. It would be great if you
could take a look:

https://patchwork.kernel.org/project/linux-block/cover/20220416093753.3054696-1-yukuai3@huawei.com/

BTW, I was worried that you might not have received my emails, because
I got warnings that they could not be delivered to you:

Your message could not be delivered for more than 6 hour(s).
It will be retried until it is 1 day(s) old.

For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can
delete your own text from the attached returned message.

                    The mail system

<jack@imap.suse.de> (expanded from <jack@suse.cz>): host
     mail2.suse.de[149.44.160.157] said: 452 4.3.1 Insufficient system storage

Thanks,
Kuai
> 
> 								Honza
> 


* Re: [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
  2022-04-19 11:37           ` yukuai (C)
@ 2022-04-21  8:17             ` Jan Kara
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Kara @ 2022-04-21  8:17 UTC (permalink / raw)
  To: yukuai (C)
  Cc: Jan Kara, tj, axboe, paolo.valente, cgroups, linux-block,
	linux-kernel, yi.zhang

On Tue 19-04-22 19:37:11, yukuai (C) wrote:
> On 2022/04/19 17:49, Jan Kara wrote:
> > On Fri 15-04-22 09:10:06, yukuai (C) wrote:
> > > On 2022/04/13 19:40, yukuai (C) wrote:
> > > > On 2022/04/13 19:28, Jan Kara wrote:
> > > > > On Sat 05-03-22 17:12:04, Yu Kuai wrote:
> > > > > > Currently 'num_groups_with_pending_reqs' is not decreased when a
> > > > > > group no longer has any pending requests while some of its child
> > > > > > groups still do. The decrement is delayed until none of the child
> > > > > > groups has any pending requests.
> > > > > > 
> > > > > > For example:
> > > > > > 1) t1 issues sync io on the root group, t2 and t3 issue sync io on
> > > > > > the same child group. num_groups_with_pending_reqs is 2 now.
> > > > > > 2) After t1 stops, num_groups_with_pending_reqs is still 2, so io
> > > > > > from t2 and t3 still can't be handled concurrently.
> > > > > > 
> > > > > > Fix the problem by decreasing 'num_groups_with_pending_reqs'
> > > > > > immediately upon the weights_tree removal of the last bfqq of the
> > > > > > group.
> > > > > > 
> > > > > > Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> > > > > 
> > > > > So I'd find the logic easier to follow if you completely removed
> > > > > entity->in_groups_with_pending_reqs and did updates of
> > > > > bfqd->num_groups_with_pending_reqs like:
> > > > > 
> > > > >      if (!bfqg->num_entities_with_pending_reqs++)
> > > > >          bfqd->num_groups_with_pending_reqs++;
> > > > > 
> > > > Hi,
> > > > 
> > > > Indeed, this is an excellent idea, and much better than the way I did it.
> > > > 
> > > > Thanks,
> > > > Kuai
> > > > 
> > > > > and similarly on the remove side. And there would be literally two places
> > > > > (addition & removal from weight tree) that would need to touch these
> > > > > counters. Pretty obvious and all can be done in patch 9.
> > > 
> > > I think with this change, we can count root_group while activating bfqqs
> > > that are under root_group, so there is no need to modify
> > > for_each_entity() (or add a fake bfq_sched_data) any more.
> > 
> > Sure, if you can make this work, it would be easier :)
> > 
> > > The special case is that weight racing bfqqs are not inserted into
> > > weights tree, and I think this can be handled by adding a fake
> > > bfq_weight_counter for such bfqqs.
> > 
> > Do you mean "weight raised bfqqs"? Yes, you are right they would need
> > special treatment - maybe bfq_weights_tree_add() is not the best function
> > to use for this and we should rather use insertion / removal from the
> > service tree for maintaining num_entities_with_pending_reqs counter?
> > I can even see we already have bfqg->active_entities so maybe we could just
> > somehow tweak that accounting and use it for our purposes?
> 
> The problem with using 'active_entities' is that a bfqq can be
> deactivated while it still has pending requests.
> 
> Anyway, I posted a new version already, which still uses weights_tree
> insertion / removal to count pending bfqqs. It would be great if you
> could take a look:
> 
> https://patchwork.kernel.org/project/linux-block/cover/20220416093753.3054696-1-yukuai3@huawei.com/

Thanks, I'll have a look.

> BTW, I was worried that you might not have received my emails, because
> I got warnings that they could not be delivered to you:
> 
> Your message could not be delivered for more than 6 hour(s).
> It will be retried until it is 1 day(s) old.

Yes, I didn't get those emails because our mail system ran out of disk
space and it took a few days to resolve so emails got bounced...

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion
  2022-04-13 11:12 ` Jan Kara
  2022-04-13 11:33   ` yukuai (C)
@ 2022-04-26 14:24   ` Paolo Valente
  1 sibling, 0 replies; 32+ messages in thread
From: Paolo Valente @ 2022-04-26 14:24 UTC (permalink / raw)
  To: Jan Kara
  Cc: Yu Kuai, Tejun Heo, Jens Axboe, cgroups, linux-block, LKML, yi.zhang



> Il giorno 13 apr 2022, alle ore 13:12, Jan Kara <jack@suse.cz> ha scritto:
> 
> On Sat 05-03-22 17:11:54, Yu Kuai wrote:
>> Currently, bfq can't handle sync io concurrently as long as they
>> are not issued from root group. This is because
>> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
>> bfq_asymmetric_scenario().
>> 
>> This patchset tries to support concurrent sync io if all the sync ios
>> are issued from the same cgroup:
>> 
>> 1) Count root_group into 'num_groups_with_pending_reqs', patch 1-5;
> 
> Seeing the complications and special casing for root_group, I wonder:
> wouldn't we be better off creating a fake bfq_sched_data in bfq_data and
> pointing root_group->sched_data there? AFAICS it would simplify the code
> considerably, as root_group would be just another bfq_group: no need to
> special-case it in various places, no games with bfqg->my_entity, etc.
> Paolo, do you see any problem with that?
> 

I do see the benefits. My only concern is that we would then also need
to check/update the places that rely on the assumptions this change
would alter.

Thanks,
Paolo

> 								Honza
> -- 
> Jan Kara <jack@suse.com>
> SUSE Labs, CR



end of thread, other threads:[~2022-04-26 14:25 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-05  9:11 [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion Yu Kuai
2022-03-05  9:11 ` [PATCH -next 01/11] block, bfq: add new apis to iterate bfq entities Yu Kuai
2022-03-05  9:11 ` [PATCH -next 02/11] block, bfq: apply news apis where root group is not expected Yu Kuai
2022-04-13  9:50   ` Jan Kara
2022-04-13 10:59     ` Jan Kara
2022-04-13 11:11       ` yukuai (C)
2022-03-05  9:11 ` [PATCH -next 03/11] block, bfq: cleanup for __bfq_activate_requeue_entity() Yu Kuai
2022-03-05  9:11 ` [PATCH -next 04/11] block, bfq: move the increasement of 'num_groups_with_pending_reqs' to it's caller Yu Kuai
2022-03-05  9:11 ` [PATCH -next 05/11] block, bfq: count root group into 'num_groups_with_pending_reqs' Yu Kuai
2022-04-13 11:05   ` Jan Kara
2022-03-05  9:12 ` [PATCH -next 06/11] block, bfq: do not idle if only one cgroup is activated Yu Kuai
2022-03-05  9:12 ` [PATCH -next 07/11] block, bfq: only count parent bfqg when bfqq " Yu Kuai
2022-03-05  9:12 ` [PATCH -next 08/11] block, bfq: record how many queues have pending requests in bfq_group Yu Kuai
2022-03-05  9:12 ` [PATCH -next 09/11] block, bfq: move forward __bfq_weights_tree_remove() Yu Kuai
2022-03-05  9:12 ` [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier Yu Kuai
2022-04-13 11:28   ` Jan Kara
2022-04-13 11:40     ` yukuai (C)
2022-04-15  1:10       ` yukuai (C)
2022-04-19  9:49         ` Jan Kara
2022-04-19 11:37           ` yukuai (C)
2022-04-21  8:17             ` Jan Kara
2022-03-05  9:12 ` [PATCH -next 11/11] block, bfq: cleanup bfqq_group() Yu Kuai
2022-03-11  6:31 ` [PATCH -next 00/11] support concurrent sync io for bfq on a specail occasion yukuai (C)
2022-03-17  1:49   ` yukuai (C)
2022-03-18 12:38     ` Paolo Valente
2022-03-19  2:34       ` yukuai (C)
2022-03-25  7:30     ` yukuai (C)
2022-04-01  3:43       ` yukuai (C)
2022-04-08  6:50         ` yukuai (C)
2022-04-13 11:12 ` Jan Kara
2022-04-13 11:33   ` yukuai (C)
2022-04-26 14:24   ` Paolo Valente
