* [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched
@ 2022-11-01  9:33 Kemeng Shi
  2022-11-01  9:33 ` [PATCH 01/20] block, bfq: fix typo in comment Kemeng Shi
                   ` (19 more replies)
  0 siblings, 20 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:33 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

This series contains a few patches to fix a typo in a comment, protect
max_budget from concurrent access, and so on. More detail can be found
in the respective changelogs.

Kemeng Shi (20):
  block, bfq: fix typo in comment
  block, bfq: Update bfqd->max_budget with bfqd->lock held
  block, bfq: correct bfq_max_budget and bfq_min_budget
  block, bfq: simplify computation of bfqd->budgets_assigned
  block, bfq: recover the "service hole" if enough budget is left
  block, bfq: correct interactive weight-raise check in
    bfq_set_budget_timeout
  block, bfq: simplify check for interactive bfqq in bfq_update_wr_data
  block, bfq: do srt filtering for interactive queues in
    bfq_completed_request
  block, bfq: remove redundant check if (bfqq->dispatched > 0)
  block, bfq: define and use soft_rt, in_burst and wr_or_deserves_wr
    only when low_latency is enabled
  block, bfq: remove unnecessary "wr" part of wr_or_deserves_wr
  block, bfq: start/restart service_from_wr accumulating correctly
  block, bfq: remove redundant nonrot_with_queueing check in
    bfq_setup_cooperator
  block, bfq: remove redundant oom_bfqq check for bfqq from
    bfq_find_close_cooperator
  block, bfq: some cleanups for function bfq_pos_tree_add_move
  block, bfq: remove unnecessary goto tag in __bfq_weights_tree_remove
  block, bfq: remove unnecessary traverse in bfq_add_to_burst
  block, bfq: remove unnecessary bfqq->next_rq = NULL in
    bfq_remove_request
  block, bfq: remove unnecessary local variable __bfqq in
    bfq_setup_merge
  block, bfq: remove unnecessary process_ref check for merged queue in
    bfq_setup_merge

 block/bfq-iosched.c | 280 +++++++++++++++++++++-----------------------
 1 file changed, 133 insertions(+), 147 deletions(-)

-- 
2.30.0


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 01/20] block, bfq: fix typo in comment
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
@ 2022-11-01  9:33 ` Kemeng Shi
  2022-11-01  9:33 ` [PATCH 02/20] block, bfq: Update bfqd->max_budget with bfqd->lock held Kemeng Shi
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:33 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

IO/ -> I/O

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 7ea427817f7f..25a88ffd997e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -5633,7 +5633,7 @@ bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
  *
  * Putting these two facts together, this commits merges stably the
  * bfq_queues associated with these I/O flows, i.e., with the
- * processes that generate these IO/ flows, regardless of how many the
+ * processes that generate these I/O flows, regardless of how many the
  * involved processes are.
  *
  * To decide whether a set of bfq_queues is actually associated with
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 02/20] block, bfq: Update bfqd->max_budget with bfqd->lock held
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
  2022-11-01  9:33 ` [PATCH 01/20] block, bfq: fix typo in comment Kemeng Shi
@ 2022-11-01  9:33 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 03/20] block, bfq: correct bfq_max_budget and bfq_min_budget Kemeng Shi
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:33 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

bfq_max_budget and bfq_user_max_budget may end up in an inconsistent
state if a bfq_max_budget configuration write and a bfq_max_budget
auto-update happen concurrently.
Example sequence:
config                          auto-update
bfqd->bfq_max_budget =
  __data
                                if (bfqd->bfq_user_max_budget == 0)
                                  bfqd->bfq_max_budget =
                                    bfq_calc_max_budget(bfqd);
bfqd->bfq_user_max_budget =
  __data;

After this sequence, bfqd->bfq_user_max_budget holds the non-zero value
written by the user, while bfqd->bfq_max_budget has been overwritten with
the auto-computed budget instead of that value.

bfq_max_budget is only updated in update_thr_responsiveness_params and in
the configuration code. As update_thr_responsiveness_params is always
called with bfqd->lock held, fix this by taking the lock in the
configuration code as well.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 25a88ffd997e..13370c41ad36 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -7328,6 +7328,7 @@ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
 	if (ret)
 		return ret;
 
+	spin_lock_irq(&bfqd->lock);
 	if (__data == 0)
 		bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd);
 	else {
@@ -7337,6 +7338,7 @@ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
 	}
 
 	bfqd->bfq_user_max_budget = __data;
+	spin_unlock_irq(&bfqd->lock);
 
 	return count;
 }
@@ -7362,8 +7364,11 @@ static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
 		__data = INT_MAX;
 
 	bfqd->bfq_timeout = msecs_to_jiffies(__data);
+
+	spin_lock_irq(&bfqd->lock);
 	if (bfqd->bfq_user_max_budget == 0)
 		bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd);
+	spin_unlock_irq(&bfqd->lock);
 
 	return count;
 }
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 03/20] block, bfq: correct bfq_max_budget and bfq_min_budget
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
  2022-11-01  9:33 ` [PATCH 01/20] block, bfq: fix typo in comment Kemeng Shi
  2022-11-01  9:33 ` [PATCH 02/20] block, bfq: Update bfqd->max_budget with bfqd->lock held Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 04/20] block, bfq: simplify computation of bfqd->budgets_assigned Kemeng Shi
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

budgets_assigned is checked to make sure enough samples have been
collected for auto-tuning of the budget. Ignore budgets_assigned if
auto-tuning is disabled, because bfqd->bfq_max_budget then already holds
the value configured by the user.
The bfq_user_max_budget check is added after the budgets_assigned check,
so no extra cost is added on the common path that returns
bfqd->bfq_max_budget.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 13370c41ad36..b21c3711d69d 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1485,7 +1485,8 @@ static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
  */
 static int bfq_max_budget(struct bfq_data *bfqd)
 {
-	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
+	if (bfqd->budgets_assigned < bfq_stats_min_budgets &&
+			bfqd->bfq_user_max_budget == 0)
 		return bfq_default_max_budget;
 	else
 		return bfqd->bfq_max_budget;
@@ -1497,7 +1498,8 @@ static int bfq_max_budget(struct bfq_data *bfqd)
  */
 static int bfq_min_budget(struct bfq_data *bfqd)
 {
-	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
+	if (bfqd->budgets_assigned < bfq_stats_min_budgets &&
+			bfqd->bfq_user_max_budget == 0)
 		return bfq_default_max_budget / 32;
 	else
 		return bfqd->bfq_max_budget / 32;
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 04/20] block, bfq: simplify computation of bfqd->budgets_assigned
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (2 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 03/20] block, bfq: correct bfq_max_budget and bfq_min_budget Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 05/20] block, bfq: recover the "service hole" if enough budget is left Kemeng Shi
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

The computation of budgets_assigned is a little confusing: all we really
need is to check whether it has been updated more than 10 times. The old
exponentially-weighted update, (budgets_assigned * 7 + 256) / 8, first
reaches bfq_stats_min_budgets (194) on the 11th assignment, so a plain
counter compared against 11 is equivalent. Simplify the computation to
improve readability.
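
A quick user-space sketch (illustrative only, reusing the constants
quoted above; not part of the patch) shows the equivalence between the
old weighted update and the new counter:

  #include <stdio.h>

  int main(void)
  {
          int budgets_assigned = 0;
          int updates = 0;

          /* old update rule from __bfq_set_in_service_queue() */
          while (budgets_assigned < 194) {        /* bfq_stats_min_budgets */
                  budgets_assigned = (budgets_assigned * 7 + 256) / 8;
                  updates++;
          }

          /* prints 11: the old formula crosses 194 on the 11th update */
          printf("%d updates needed\n", updates);
          return 0;
  }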

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index b21c3711d69d..be69b0e061f7 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -175,7 +175,7 @@ static const int bfq_back_penalty = 2;
 static u64 bfq_slice_idle = NSEC_PER_SEC / 125;
 
 /* Minimum number of assigned budgets for which stats are safe to compute. */
-static const int bfq_stats_min_budgets = 194;
+static const int bfq_stats_min_budgets = 11;
 
 /* Default maximum budget values, in sectors and number of requests. */
 static const int bfq_default_max_budget = 16 * 1024;
@@ -3288,7 +3288,7 @@ static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
 	if (bfqq) {
 		bfq_clear_bfqq_fifo_expire(bfqq);
 
-		bfqd->budgets_assigned = (bfqd->budgets_assigned * 7 + 256) / 8;
+		bfqd->budgets_assigned++;
 
 		if (time_is_before_jiffies(bfqq->last_wr_start_finish) &&
 		    bfqq->wr_coeff > 1 &&
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 05/20] block, bfq: recover the "service hole" if enough budget is left
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (3 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 04/20] block, bfq: simplify computation of bfqd->budgets_assigned Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 06/20] block, bfq: correct interactive weight-raise check in bfq_set_budget_timeout Kemeng Shi
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

If the budget left is less than the budget needed by the next request,
the newly activated bfqq will be expired immediately if it preempts the
in-service queue. Avoid this useless overhead by checking that the budget
left is enough for the next request before granting the preemption.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index be69b0e061f7..f4b4ba05f804 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1623,7 +1623,7 @@ static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd,
 	 * service. This would only cause useless overhead.
 	 */
 	if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time &&
-	    bfq_bfqq_budget_left(bfqq) > 0) {
+	    bfq_bfqq_budget_left(bfqq) >= bfq_serv_to_charge(bfqq->next_rq, bfqq)) {
 		/*
 		 * We do not clear the flag non_blocking_wait_rq here, as
 		 * the latter is used in bfq_activate_bfqq to signal
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 06/20] block, bfq: correct interactive weight-raise check in bfq_set_budget_timeout
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (4 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 05/20] block, bfq: recover the "service hole" if enough budget is left Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 07/20] block, bfq: simplify check for interactive bfqq in bfq_update_wr_data Kemeng Shi
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

After weight-raising finishes, bfqq->wr_coeff is reset to 1 while
bfqq->wr_cur_max_time may not be reset. For example,
bfq_update_bfqq_wr_on_rq_arrival only resets wr_coeff to 1, leaving
wr_cur_max_time untouched, when bfqq belongs to a large burst. As a
result, bfq_set_budget_timeout can be called when bfqq is selected for
service with wr_cur_max_time still set while wr_coeff is already 1.
Fix this by checking wr_coeff > 1 before checking wr_cur_max_time.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index f4b4ba05f804..20ae52882235 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -3271,7 +3271,8 @@ static void bfq_set_budget_timeout(struct bfq_data *bfqd,
 {
 	unsigned int timeout_coeff;
 
-	if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
+	if (bfqq->wr_coeff > 1 &&
+	    bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
 		timeout_coeff = 1;
 	else
 		timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 07/20] block, bfq: simplify check for interactive bfqq in bfq_update_wr_data
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (5 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 06/20] block, bfq: correct interactive weight-raise check in bfq_set_budget_timeout Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 08/20] block, bfq: do srt filtering for interactive queues in bfq_completed_request Kemeng Shi
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

bfqq->wr_coeff is set to bfqd->bfq_wr_coeff while bfqq is in interactive
weight raising, so we can simplify the interactive weight-raising check
in function bfq_update_wr_data.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 20ae52882235..15e7d6c6fa83 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4997,8 +4997,7 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 				bfqq->entity.prio_changed = 1;
 			}
 		}
-		if (bfqq->wr_coeff > 1 &&
-		    bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time &&
+		if (bfqq->wr_coeff == bfqd->bfq_wr_coeff &&
 		    bfqq->service_from_wr > max_service_from_wr) {
 			/* see comments on max_service_from_wr */
 			bfq_bfqq_end_wr(bfqq);
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 08/20] block, bfq: do srt filtering for interactive queues in bfq_completed_request
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (6 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 07/20] block, bfq: simplify check for interactive bfqq in bfq_update_wr_data Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 09/20] block, bfq: remove redundant check if (bfqq->dispatched > 0) Kemeng Shi
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

Commit 20cd32450bcbec ("block, bfq: do not consider interactive queues in
srt filtering") removed the updating of soft_rt_next_start for bfq_queues
in interactive weight raising.
Commit 3c337690d2ebb7 ("block, bfq: avoid spurious switches to soft_rt of
interactive queues") reverted the first commit to add back the updating of
soft_rt_next_start for bfq_queues in interactive weight raising, and fixed
the original problem by resetting the consumed bandwidth when a bfq_queue
switches from interactive to normal mode.
The problem this patch fixes is that the first commit was not fully
reverted: the updating of soft_rt_next_start for bfq_queues in interactive
weight raising is still missing from bfq_completed_request. Fix this by
updating soft_rt_next_start for such bfq_queues in bfq_completed_request
as well.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 15e7d6c6fa83..fa96cbca7814 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6317,14 +6317,11 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
 	 * isochronous, and both requisites for this condition to hold
 	 * are now satisfied, then compute soft_rt_next_start (see the
 	 * comments on the function bfq_bfqq_softrt_next_start()). We
-	 * do not compute soft_rt_next_start if bfqq is in interactive
-	 * weight raising (see the comments in bfq_bfqq_expire() for
-	 * an explanation). We schedule this delayed update when bfqq
-	 * expires, if it still has in-flight requests.
+	 * schedule this delayed update when bfqq expires, if it still
+	 * has in-flight requests.
 	 */
 	if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
-	    RB_EMPTY_ROOT(&bfqq->sort_list) &&
-	    bfqq->wr_coeff != bfqd->bfq_wr_coeff)
+	    RB_EMPTY_ROOT(&bfqq->sort_list))
 		bfqq->soft_rt_next_start =
 			bfq_bfqq_softrt_next_start(bfqd, bfqq);
 
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 09/20] block, bfq: remove redundant check if (bfqq->dispatched > 0)
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (7 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 08/20] block, bfq: do srt filtering for interactive queues in bfq_completed_request Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 10/20] block, bfq: define and use soft_rt, in_burst and wr_or_deserves_wr only when low_latency is enabled Kemeng Shi
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

Commit 3c337690d2ebb7 ("block, bfq: avoid spurious switches to soft_rt of
interactive queues") removed the bfqq->wr_coeff != bfqd->bfq_wr_coeff
check that accompanied the bfqq->dispatched == 0 check used to update
soft_rt_next_start for interactive bfq_queues. With the remaining plain
bfqq->dispatched == 0 check, the bfqq->dispatched > 0 test in the else
branch is redundant, so remove it.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index fa96cbca7814..8f6d05258f22 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4375,7 +4375,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
 		if (bfqq->dispatched == 0)
 			bfqq->soft_rt_next_start =
 				bfq_bfqq_softrt_next_start(bfqd, bfqq);
-		else if (bfqq->dispatched > 0) {
+		else {
 			/*
 			 * Schedule an update of soft_rt_next_start to when
 			 * the task may be discovered to be isochronous.
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 10/20] block, bfq: define and use soft_rt, in_burst and wr_or_deserves_wr only when low_latency is enabled
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (8 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 09/20] block, bfq: remove redundant check if (bfqq->dispatched > 0) Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 11/20] block, bfq: remove unnecessary "wr" part of wr_or_deserves_wr Kemeng Shi
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

soft_rt, in_burst and wr_or_deserves_wr are only used when low_latency
is enabled. Currently, these variables are computed even when low_latency
is disabled. Define these variables inside the bfqd->low_latency branch
and compute them only when needed, to remove redundant computation and
improve readability.
Outside bfq_bfqq_handle_idle_busy_switch, the interactive parameter is
used only if low_latency is enabled, so it is safe to move the
interactive assignment inside the branch where low_latency is true.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 77 ++++++++++++++++++++++-----------------------
 1 file changed, 38 insertions(+), 39 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 8f6d05258f22..0ecb3640d715 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1822,8 +1822,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 					     struct request *rq,
 					     bool *interactive)
 {
-	bool soft_rt, in_burst,	wr_or_deserves_wr,
-		bfqq_wants_to_preempt,
+	bool bfqq_wants_to_preempt,
 		idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq),
 		/*
 		 * See the comments on
@@ -1834,43 +1833,6 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 			bfqq->ttime.last_end_request +
 			bfqd->bfq_slice_idle * 3;
 
-
-	/*
-	 * bfqq deserves to be weight-raised if:
-	 * - it is sync,
-	 * - it does not belong to a large burst,
-	 * - it has been idle for enough time or is soft real-time,
-	 * - is linked to a bfq_io_cq (it is not shared in any sense),
-	 * - has a default weight (otherwise we assume the user wanted
-	 *   to control its weight explicitly)
-	 */
-	in_burst = bfq_bfqq_in_large_burst(bfqq);
-	soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
-		!BFQQ_TOTALLY_SEEKY(bfqq) &&
-		!in_burst &&
-		time_is_before_jiffies(bfqq->soft_rt_next_start) &&
-		bfqq->dispatched == 0 &&
-		bfqq->entity.new_weight == 40;
-	*interactive = !in_burst && idle_for_long_time &&
-		bfqq->entity.new_weight == 40;
-	/*
-	 * Merged bfq_queues are kept out of weight-raising
-	 * (low-latency) mechanisms. The reason is that these queues
-	 * are usually created for non-interactive and
-	 * non-soft-real-time tasks. Yet this is not the case for
-	 * stably-merged queues. These queues are merged just because
-	 * they are created shortly after each other. So they may
-	 * easily serve the I/O of an interactive or soft-real time
-	 * application, if the application happens to spawn multiple
-	 * processes. So let also stably-merged queued enjoy weight
-	 * raising.
-	 */
-	wr_or_deserves_wr = bfqd->low_latency &&
-		(bfqq->wr_coeff > 1 ||
-		 (bfq_bfqq_sync(bfqq) &&
-		  (bfqq->bic || RQ_BIC(rq)->stably_merged) &&
-		   (*interactive || soft_rt)));
-
 	/*
 	 * Using the last flag, update budget and check whether bfqq
 	 * may want to preempt the in-service queue.
@@ -1904,6 +1866,20 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 	bfq_clear_bfqq_just_created(bfqq);
 
 	if (bfqd->low_latency) {
+		bool soft_rt, in_burst,	wr_or_deserves_wr;
+		/*
+		 * bfqq deserves to be weight-raised if:
+		 * - it is sync,
+		 * - it does not belong to a large burst,
+		 * - it has been idle for enough time or is soft real-time,
+		 * - is linked to a bfq_io_cq (it is not shared in any sense),
+		 * - has a default weight (otherwise we assume the user wanted
+		 *   to control its weight explicitly)
+		 */
+		in_burst = bfq_bfqq_in_large_burst(bfqq);
+		*interactive = !in_burst && idle_for_long_time &&
+			bfqq->entity.new_weight == 40;
+
 		if (unlikely(time_is_after_jiffies(bfqq->split_time)))
 			/* wraparound */
 			bfqq->split_time =
@@ -1911,6 +1887,29 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 
 		if (time_is_before_jiffies(bfqq->split_time +
 					   bfqd->bfq_wr_min_idle_time)) {
+			soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+				!BFQQ_TOTALLY_SEEKY(bfqq) &&
+				!in_burst &&
+				time_is_before_jiffies(bfqq->soft_rt_next_start) &&
+				bfqq->dispatched == 0 &&
+				bfqq->entity.new_weight == 40;
+			/*
+			 * Merged bfq_queues are kept out of weight-raising
+			 * (low-latency) mechanisms. The reason is that these queues
+			 * are usually created for non-interactive and
+			 * non-soft-real-time tasks. Yet this is not the case for
+			 * stably-merged queues. These queues are merged just because
+			 * they are created shortly after each other. So they may
+			 * easily serve the I/O of an interactive or soft-real time
+			 * application, if the application happens to spawn multiple
+			 * processes. So let also stably-merged queued enjoy weight
+			 * raising.
+			 */
+			wr_or_deserves_wr = (bfqq->wr_coeff > 1 ||
+					(bfq_bfqq_sync(bfqq) &&
+					 (bfqq->bic || RQ_BIC(rq)->stably_merged) &&
+					 (*interactive || soft_rt)));
+
 			bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq,
 							 old_wr_coeff,
 							 wr_or_deserves_wr,
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 11/20] block, bfq: remove unnecessary "wr" part of wr_or_deserves_wr
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (9 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 10/20] block, bfq: define and use soft_rt, in_burst and wr_or_deserves_wr only when low_latency is enabled Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 12/20] block, bfq: start/restart service_from_wr accumulating correctly Kemeng Shi
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

Before commit 96a291c38c3299 ("block, bfq: preempt lower-weight or
lower-priority queues"), wr_or_deserves_wr was also used to check whether
preemption was wanted. Currently, wr_or_deserves_wr is only used in
bfq_update_bfqq_wr_on_rq_arrival to check whether weight raising is
needed, so the "wr" part of wr_or_deserves_wr is not needed anymore.
Rename wr_or_deserves_wr to deserves_wr and drop the now-unnecessary
bfqq->wr_coeff > 1 test from the original wr_or_deserves_wr expression.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 0ecb3640d715..261ba2d63925 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1681,12 +1681,12 @@ static unsigned long bfq_smallest_from_now(void)
 static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd,
 					     struct bfq_queue *bfqq,
 					     unsigned int old_wr_coeff,
-					     bool wr_or_deserves_wr,
+					     bool deserves_wr,
 					     bool interactive,
 					     bool in_burst,
 					     bool soft_rt)
 {
-	if (old_wr_coeff == 1 && wr_or_deserves_wr) {
+	if (old_wr_coeff == 1 && deserves_wr) {
 		/* start a weight-raising period */
 		if (interactive) {
 			bfqq->service_from_wr = 0;
@@ -1866,7 +1866,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 	bfq_clear_bfqq_just_created(bfqq);
 
 	if (bfqd->low_latency) {
-		bool soft_rt, in_burst,	wr_or_deserves_wr;
+		bool soft_rt, in_burst,	deserves_wr;
 		/*
 		 * bfqq deserves to be weight-raised if:
 		 * - it is sync,
@@ -1905,14 +1905,13 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 			 * processes. So let also stably-merged queued enjoy weight
 			 * raising.
 			 */
-			wr_or_deserves_wr = (bfqq->wr_coeff > 1 ||
-					(bfq_bfqq_sync(bfqq) &&
+			deserves_wr = (bfq_bfqq_sync(bfqq) &&
 					 (bfqq->bic || RQ_BIC(rq)->stably_merged) &&
-					 (*interactive || soft_rt)));
+					 (*interactive || soft_rt));
 
 			bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq,
 							 old_wr_coeff,
-							 wr_or_deserves_wr,
+							 deserves_wr,
 							 *interactive,
 							 in_burst,
 							 soft_rt);
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 12/20] block, bfq: start/restart service_from_wr accumulating correctly
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (10 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 11/20] block, bfq: remove unnecessary "wr" part of wr_or_deserves_wr Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 13/20] block, bfq: remove redundant nonrot_with_queueing check in bfq_setup_cooperator Kemeng Shi
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

1. Start accumulating service_from_wr when async queues are weight-raised.
service_from_wr for async queues is accumulated and checked to end weight
raising just as for sync queues, so the accounting needs to be started for
async queues that are about to be weight-raised.
2. Restart accumulating service_from_wr when queues are deemed interactive
again. The weight-raising period is restarted when queues are deemed
interactive again in bfq_add_request, so it is more reasonable to restart
the service_from_wr accumulation too.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 261ba2d63925..a46e49de895a 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1726,6 +1726,7 @@ static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd,
 					    2 * bfq_min_budget(bfqd));
 	} else if (old_wr_coeff > 1) {
 		if (interactive) { /* update wr coeff and duration */
+			bfqq->service_from_wr = 0;
 			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
 			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
 		} else if (in_burst)
@@ -2311,6 +2312,7 @@ static void bfq_add_request(struct request *rq)
 		    time_is_before_jiffies(
 				bfqq->last_wr_start_finish +
 				bfqd->bfq_wr_min_inter_arr_async)) {
+			bfqq->service_from_wr = 0;
 			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
 			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
 
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 13/20] block, bfq: remove redundant nonrot_with_queueing check in bfq_setup_cooperator
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (11 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 12/20] block, bfq: start/restart service_from_wr accumulating correctly Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 14/20] block, bfq: remove redundant oom_bfqq check for bfqq from bfq_find_close_cooperator Kemeng Shi
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

Commit 430a67f9d6169 ("block, bfq: merge bursts of newly-created queues")
added stable merge logic to bfq_setup_cooperator, which is only executed
for !nonrot_with_queueing devices. Actually, bfq_setup_cooperator is
designed only for !nonrot_with_queueing devices and already returns NULL
before doing any real work if the device is nonrot_with_queueing. We can
therefore move the stable merge code after the existing
nonrot_with_queueing check, with no need to re-check nonrot_with_queueing.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 97 ++++++++++++++++++++++-----------------------
 1 file changed, 47 insertions(+), 50 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index a46e49de895a..b8af0bb98d66 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2886,56 +2886,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	if (bfqq->new_bfqq)
 		return bfqq->new_bfqq;
 
-	/*
-	 * Check delayed stable merge for rotational or non-queueing
-	 * devs. For this branch to be executed, bfqq must not be
-	 * currently merged with some other queue (i.e., bfqq->bic
-	 * must be non null). If we considered also merged queues,
-	 * then we should also check whether bfqq has already been
-	 * merged with bic->stable_merge_bfqq. But this would be
-	 * costly and complicated.
-	 */
-	if (unlikely(!bfqd->nonrot_with_queueing)) {
-		/*
-		 * Make sure also that bfqq is sync, because
-		 * bic->stable_merge_bfqq may point to some queue (for
-		 * stable merging) also if bic is associated with a
-		 * sync queue, but this bfqq is async
-		 */
-		if (bfq_bfqq_sync(bfqq) && bic->stable_merge_bfqq &&
-		    !bfq_bfqq_just_created(bfqq) &&
-		    time_is_before_jiffies(bfqq->split_time +
-					  msecs_to_jiffies(bfq_late_stable_merging)) &&
-		    time_is_before_jiffies(bfqq->creation_time +
-					   msecs_to_jiffies(bfq_late_stable_merging))) {
-			struct bfq_queue *stable_merge_bfqq =
-				bic->stable_merge_bfqq;
-			int proc_ref = min(bfqq_process_refs(bfqq),
-					   bfqq_process_refs(stable_merge_bfqq));
-
-			/* deschedule stable merge, because done or aborted here */
-			bfq_put_stable_ref(stable_merge_bfqq);
-
-			bic->stable_merge_bfqq = NULL;
-
-			if (!idling_boosts_thr_without_issues(bfqd, bfqq) &&
-			    proc_ref > 0) {
-				/* next function will take at least one ref */
-				struct bfq_queue *new_bfqq =
-					bfq_setup_merge(bfqq, stable_merge_bfqq);
-
-				if (new_bfqq) {
-					bic->stably_merged = true;
-					if (new_bfqq->bic)
-						new_bfqq->bic->stably_merged =
-									true;
-				}
-				return new_bfqq;
-			} else
-				return NULL;
-		}
-	}
-
 	/*
 	 * Do not perform queue merging if the device is non
 	 * rotational and performs internal queueing. In fact, such a
@@ -2976,6 +2926,53 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	if (likely(bfqd->nonrot_with_queueing))
 		return NULL;
 
+	/*
+	 * Check delayed stable merge for rotational or non-queueing
+	 * devs. For this branch to be executed, bfqq must not be
+	 * currently merged with some other queue (i.e., bfqq->bic
+	 * must be non null). If we considered also merged queues,
+	 * then we should also check whether bfqq has already been
+	 * merged with bic->stable_merge_bfqq. But this would be
+	 * costly and complicated.
+	 * Make sure also that bfqq is sync, because
+	 * bic->stable_merge_bfqq may point to some queue (for
+	 * stable merging) also if bic is associated with a
+	 * sync queue, but this bfqq is async
+	 */
+	if (bfq_bfqq_sync(bfqq) && bic->stable_merge_bfqq &&
+			!bfq_bfqq_just_created(bfqq) &&
+			time_is_before_jiffies(bfqq->split_time +
+				msecs_to_jiffies(bfq_late_stable_merging)) &&
+			time_is_before_jiffies(bfqq->creation_time +
+				msecs_to_jiffies(bfq_late_stable_merging))) {
+		struct bfq_queue *stable_merge_bfqq =
+			bic->stable_merge_bfqq;
+		int proc_ref = min(bfqq_process_refs(bfqq),
+				bfqq_process_refs(stable_merge_bfqq));
+
+		/* deschedule stable merge, because done or aborted here */
+		bfq_put_stable_ref(stable_merge_bfqq);
+
+		bic->stable_merge_bfqq = NULL;
+
+		if (!idling_boosts_thr_without_issues(bfqd, bfqq) &&
+				proc_ref > 0) {
+			/* next function will take at least one ref */
+			struct bfq_queue *new_bfqq =
+				bfq_setup_merge(bfqq, stable_merge_bfqq);
+
+			if (new_bfqq) {
+				bic->stably_merged = true;
+				if (new_bfqq->bic)
+					new_bfqq->bic->stably_merged =
+						true;
+			}
+			return new_bfqq;
+		} else
+			return NULL;
+	}
+
+
 	/*
 	 * Prevent bfqq from being merged if it has been created too
 	 * long ago. The idea is that true cooperating processes, and
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 14/20] block, bfq: remove redundant oom_bfqq check for bfqq from bfq_find_close_cooperator
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (12 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 13/20] block, bfq: remove redundant nonrot_with_queueing check in bfq_setup_cooperator Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 15/20] block, bfq: some cleanups for function bfq_pos_tree_add_move Kemeng Shi
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

The oom_bfqq is never added to the position tree in
bfq_pos_tree_add_move, so the bfqq returned from
bfq_find_close_cooperator never needs to be checked against oom_bfqq.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index b8af0bb98d66..776951156fbc 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -3014,8 +3014,7 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	new_bfqq = bfq_find_close_cooperator(bfqd, bfqq,
 			bfq_io_struct_pos(io_struct, request));
 
-	if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq) &&
-	    bfq_may_be_close_cooperator(bfqq, new_bfqq))
+	if (new_bfqq && bfq_may_be_close_cooperator(bfqq, new_bfqq))
 		return bfq_setup_merge(bfqq, new_bfqq);
 
 	return NULL;
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 15/20] block, bfq: some cleanups for function bfq_pos_tree_add_move
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (13 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 14/20] block, bfq: remove redundant oom_bfqq check for bfqq from bfq_find_close_cooperator Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 16/20] block, bfq: remove unnecessary goto tag in __bfq_weights_tree_remove Kemeng Shi
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

There are three cleanups in this patch:
1. The oom_bfqq is never inserted into the tree, so return immediately
for oom_bfqq before the unnecessary NULL check of bfqq->pos_root.
2. Only set bfqq->pos_root when bfq_rq_pos_tree_lookup finds no existing
queue and bfqq is actually inserted, rather than setting bfqq->pos_root
unconditionally and resetting it to NULL otherwise.
3. Remove the unnecessary local variable __bfqq, which is only used to
check the return value of bfq_rq_pos_tree_lookup.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 776951156fbc..dd9a51255a0f 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -758,17 +758,16 @@ void __cold
 bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 {
 	struct rb_node **p, *parent;
-	struct bfq_queue *__bfqq;
+
+	/* oom_bfqq does not participate in queue merging */
+	if (bfqq == &bfqd->oom_bfqq)
+		return;
 
 	if (bfqq->pos_root) {
 		rb_erase(&bfqq->pos_node, bfqq->pos_root);
 		bfqq->pos_root = NULL;
 	}
 
-	/* oom_bfqq does not participate in queue merging */
-	if (bfqq == &bfqd->oom_bfqq)
-		return;
-
 	/*
 	 * bfqq cannot be merged any longer (see comments in
 	 * bfq_setup_cooperator): no point in adding bfqq into the
@@ -782,14 +781,13 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	if (!bfqq->next_rq)
 		return;
 
+	if (bfq_rq_pos_tree_lookup(bfqd, &bfqq_group(bfqq)->rq_pos_tree,
+			blk_rq_pos(bfqq->next_rq), &parent, &p))
+		return;
+
 	bfqq->pos_root = &bfqq_group(bfqq)->rq_pos_tree;
-	__bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
-			blk_rq_pos(bfqq->next_rq), &parent, &p);
-	if (!__bfqq) {
-		rb_link_node(&bfqq->pos_node, parent, p);
-		rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
-	} else
-		bfqq->pos_root = NULL;
+	rb_link_node(&bfqq->pos_node, parent, p);
+	rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
 }
 
 /*
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 16/20] block, bfq: remove unnecessary goto tag in __bfq_weights_tree_remove
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (14 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 15/20] block, bfq: some cleanups for function bfq_pos_tree_add_move Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 17/20] block, bfq: remove unnecessary traverse in bfq_add_to_burst Kemeng Shi
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

Do the freeing work only when num_active == 0 and remove the
now-unnecessary reset_entity_pointer goto label.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index dd9a51255a0f..1402dfd9f448 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -950,13 +950,11 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 		return;
 
 	bfqq->weight_counter->num_active--;
-	if (bfqq->weight_counter->num_active > 0)
-		goto reset_entity_pointer;
-
-	rb_erase_cached(&bfqq->weight_counter->weights_node, root);
-	kfree(bfqq->weight_counter);
+	if (bfqq->weight_counter->num_active == 0) {
+		rb_erase_cached(&bfqq->weight_counter->weights_node, root);
+		kfree(bfqq->weight_counter);
+	}
 
-reset_entity_pointer:
 	bfqq->weight_counter = NULL;
 	bfq_put_queue(bfqq);
 }
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 17/20] block, bfq: remove unnecessary traverse in bfq_add_to_burst
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (15 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 16/20] block, bfq: remove unnecessary goto tag in __bfq_weights_tree_remove Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 18/20] block, bfq: remove unnecessary bfqq->next_rq = NULL in bfq_remove_request Kemeng Shi
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

Mark queues as belonging to a large burst and remove them from the burst
list in a single traversal, avoiding an unnecessary second traversal.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 1402dfd9f448..0864254b8dcd 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1255,7 +1255,7 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	bfqd->burst_size++;
 
 	if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
-		struct bfq_queue *pos, *bfqq_item;
+		struct bfq_queue *pos;
 		struct hlist_node *n;
 
 		/*
@@ -1267,13 +1267,6 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 		/*
 		 * We can now mark all queues in the burst list as
 		 * belonging to a large burst.
-		 */
-		hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
-				     burst_list_node)
-			bfq_mark_bfqq_in_large_burst(bfqq_item);
-		bfq_mark_bfqq_in_large_burst(bfqq);
-
-		/*
 		 * From now on, and until the current burst finishes, any
 		 * new queue being activated shortly after the last queue
 		 * was inserted in the burst can be immediately marked as
@@ -1281,8 +1274,11 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 		 * needed any more. Remove it.
 		 */
 		hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
-					  burst_list_node)
+					  burst_list_node) {
+			bfq_mark_bfqq_in_large_burst(pos);
 			hlist_del_init(&pos->burst_list_node);
+		}
+		bfq_mark_bfqq_in_large_burst(bfqq);
 	} else /*
 		* Burst not yet large: add bfqq to the burst list. Do
 		* not increment the ref counter for bfqq, because bfqq
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 18/20] block, bfq: remove unnecessary bfqq->next_rq = NULL in bfq_remove_request
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (16 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 17/20] block, bfq: remove unnecessary traverse in bfq_add_to_burst Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 19/20] block, bfq: remove unnecessary local variable __bfqq in bfq_setup_merge Kemeng Shi
  2022-11-01  9:34 ` [PATCH 20/20] block, bfq: remove unnecessary process_ref check for merged queue " Kemeng Shi
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

bfqq->next_rq is already properly updated at the beginning of
bfq_remove_request: it becomes NULL only when the removed request was
bfqq->next_rq and was the last request left in the queue. So explicitly
setting bfqq->next_rq = NULL once the sort_list is empty is redundant.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 0864254b8dcd..d3f4d995f84a 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2410,8 +2410,6 @@ static void bfq_remove_request(struct request_queue *q,
 		q->last_merge = NULL;
 
 	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
-		bfqq->next_rq = NULL;
-
 		if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue) {
 			bfq_del_bfqq_busy(bfqq, false);
 			/*
@@ -2445,7 +2443,6 @@ static void bfq_remove_request(struct request_queue *q,
 
 	if (rq->cmd_flags & REQ_META)
 		bfqq->meta_pending--;
-
 }
 
 static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 19/20] block, bfq: remove unnecessary local variable __bfqq in bfq_setup_merge
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (17 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 18/20] block, bfq: remove unnecessary bfqq->next_rq = NULL in bfq_remove_request Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  2022-11-01  9:34 ` [PATCH 20/20] block, bfq: remove unnecessary process_ref check for merged queue " Kemeng Shi
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

We can traverse the bfqq->new_bfqq chain easily and intuitively without
a local variable, so remove the unnecessary local variable __bfqq.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index d3f4d995f84a..589ab59abcf5 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2741,7 +2741,6 @@ static struct bfq_queue *
 bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
 {
 	int process_refs, new_process_refs;
-	struct bfq_queue *__bfqq;
 
 	/*
 	 * If there are no process references on the new_bfqq, then it is
@@ -2753,10 +2752,9 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
 		return NULL;
 
 	/* Avoid a circular list and skip interim queue merges. */
-	while ((__bfqq = new_bfqq->new_bfqq)) {
-		if (__bfqq == bfqq)
+	while ((new_bfqq = new_bfqq->new_bfqq)) {
+		if (new_bfqq == bfqq)
 			return NULL;
-		new_bfqq = __bfqq;
 	}
 
 	process_refs = bfqq_process_refs(bfqq);
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 20/20] block, bfq: remove unnecessary process_ref check for merged queue in bfq_setup_merge
  2022-11-01  9:33 [PATCH 00/20] A few bugfix and cleanup patches for bfq-iosched Kemeng Shi
                   ` (18 preceding siblings ...)
  2022-11-01  9:34 ` [PATCH 19/20] block, bfq: remove unnecessary local variable __bfqq in bfq_setup_merge Kemeng Shi
@ 2022-11-01  9:34 ` Kemeng Shi
  19 siblings, 0 replies; 21+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:34 UTC (permalink / raw)
  To: paolo.valente, axboe; +Cc: linux-block, linux-kernel, shikemeng

We already check that the process_refs of the original new_bfqq is not
zero. The final new_bfqq may be the queue that the original new_bfqq was
merged into; it inherits the process_refs of the original new_bfqq, which
is not zero, so the process_refs of the merged queue will not be zero
either.
Remove the unnecessary check for the merged queue and remove
new_process_refs, which is no longer used.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/bfq-iosched.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 589ab59abcf5..0736577bfbfe 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2740,7 +2740,7 @@ static struct bfq_queue *bfq_find_close_cooperator(struct bfq_data *bfqd,
 static struct bfq_queue *
 bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
 {
-	int process_refs, new_process_refs;
+	int process_refs;
 
 	/*
 	 * If there are no process references on the new_bfqq, then it is
@@ -2758,12 +2758,11 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
 	}
 
 	process_refs = bfqq_process_refs(bfqq);
-	new_process_refs = bfqq_process_refs(new_bfqq);
 	/*
 	 * If the process for the bfqq has gone away, there is no
 	 * sense in merging the queues.
 	 */
-	if (process_refs == 0 || new_process_refs == 0)
+	if (process_refs == 0)
 		return NULL;
 
 	/*
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread
