* [PATCH V6 0/8] block, bfq: extend bfq to support multi-actuator drives
From: Paolo Valente @ 2022-11-03 16:26 UTC
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen, Paolo Valente

Hi,
this V6 differs from V5 in that it addresses the issue reported in
[2].

Here is the whole description of this patch series again.  This
extension addresses the following issue. Single-LUN multi-actuator
SCSI drives, as well as all multi-actuator SATA drives, appear as a
single device to the I/O subsystem [1].  Yet they address commands to
different actuators internally, as a function of Logical Block
Addresses (LBAs). A given sector is reachable by only one of the
actuators. For example, Seagate’s Serial Advanced Technology
Attachment (SATA) drives contain two actuators and map the lower half
of the LBA space to the lower actuator and the upper half to the
upper actuator.
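
As a purely illustrative sketch (not code from this series), with the
half-split mapping above the actuator serving a given sector could be
computed as follows:

static unsigned int actuator_of_sector(sector_t sector, sector_t capacity)
{
	/*
	 * Hypothetical helper, for illustration only: sectors in
	 * [0, capacity/2) are served by the lower actuator (index 0),
	 * the rest by the upper one (index 1).
	 */
	return sector < capacity / 2 ? 0 : 1;
}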

Clearly, to fully utilize such a drive, no actuator should be left
idle or underutilized while there is pending I/O for it. To reach this
goal, the block layer must somehow control the load of each actuator
individually. As a first step, this series enriches BFQ with such
per-actuator control. On top of that, it adds a simple mechanism for
guaranteeing that actuators with pending I/O are never left idle.

See [1] for a more detailed overview of the problem and of the
solutions implemented in this patch series. There you will also find
some preliminary performance results.

Thanks,
Paolo

[1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-optimizations-for-multi-actuator-sata-hard-drives/
[2] https://lore.kernel.org/lkml/202210301803.9wm3k7D8-lkp@intel.com/T/#m3685011dacaf0801bd084b2a3194c861a4e940ba

Davide Zini (3):
  block, bfq: split also async bfq_queues on a per-actuator basis
  block, bfq: inject I/O to underutilized actuators
  block, bfq: balance I/O injection among underutilized actuators

Federico Gavioli (1):
  block, bfq: retrieve independent access ranges from request queue

Paolo Valente (4):
  block, bfq: split sync bfq_queues on a per-actuator basis
  block, bfq: forbid stable merging of queues associated with different
    actuators
  block, bfq: move io_cq-persistent bfqq data into a dedicated struct
  block, bfq: turn bfqq_data into an array in bfq_io_cq

 block/bfq-cgroup.c  |  97 +++++----
 block/bfq-iosched.c | 521 ++++++++++++++++++++++++++++++--------------
 block/bfq-iosched.h | 141 +++++++++---
 block/bfq-wf2q.c    |   2 +-
 4 files changed, 528 insertions(+), 233 deletions(-)

--
2.20.1


* [PATCH V6 1/8] block, bfq: split sync bfq_queues on a per-actuator basis
From: Paolo Valente @ 2022-11-03 16:26 UTC
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Paolo Valente, Gabriele Felici, Carmine Zaccagnino

Single-LUN multi-actuator SCSI drives, as well as all multi-actuator
SATA drives, appear as a single device to the I/O subsystem [1].  Yet
they address commands to different actuators internally, as a
function of Logical Block Addresses (LBAs). A given sector is
reachable by only one of the actuators. For example, Seagate’s Serial
Advanced Technology Attachment (SATA) drives contain two actuators
and map the lower half of the LBA space to the lower actuator and the
upper half to the upper actuator.

Clearly, to fully utilize such a drive, no actuator should be left
idle or underutilized while there is pending I/O for it. The block
layer must somehow control the load of each actuator individually.
This commit lays the ground for allowing BFQ to provide such
per-actuator control.

BFQ associates a distinct sync bfq_queue (a queue of I/O requests)
with each process doing synchronous I/O, or with a group of processes
in case of queue merging. Then BFQ serves one bfq_queue at a time.
While in service, a bfq_queue is emptied in request-position order.
Yet the same process, or group of processes, may generate I/O for
different actuators. In this case, different streams of I/O (each for
a different actuator) all get inserted into the same sync bfq_queue.
So there is basically no individual control on when each stream is
served, i.e., on when the I/O requests of a stream are picked from
the bfq_queue and dispatched to the drive.

This commit enables BFQ to control the service of each actuator
individually for synchronous I/O, by simply splitting each sync
bfq_queue into N queues, one per actuator. In other words, a sync
bfq_queue is now associated with a pair (process, actuator). As a
consequence of this split, the per-queue proportional-share policy
implemented by BFQ will guarantee that the sync I/O generated for
each actuator, by each process, receives its fair share of service.
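
Concretely, the split amounts to adding an actuator dimension to the
queue pointers kept in each bfq_io_cq, and to passing an actuator
index to the lookup helpers. A condensed sketch of this patch's
changes, using its own names:

/* one sync and one async queue per actuator, in each bfq_io_cq */
struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];

/* lookups now also select the queue of the right actuator */
bfqq = bic_to_bfqq(bic, is_sync, bfq_actuator_index(bfqd, bio));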

This is just a preparatory patch. If the I/O of the same process
happens to be sent to different queues, then each of these queues may
undergo queue merging. To handle this event, the bfq_io_cq data
structure must be properly extended. In addition, stable merging must
be disabled across actuators, to avoid losing control on individual
actuators. Finally, async queues must be split as well. These issues
are described in detail and addressed in the next commits. As for this
commit, although multiple per-process bfq_queues are provided, the I/O
of each process or group of processes is still sent to only one queue,
regardless of the actuator the I/O is for. The forwarding of I/O to
distinct bfq_queues will be enabled after the above issues have been
addressed.

[1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-optimizations-for-multi-actuator-sata-hard-drives/

Signed-off-by: Gabriele Felici <felicigb@gmail.com>
Signed-off-by: Carmine Zaccagnino <carmine@carminezacc.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-cgroup.c  |  95 ++++++++++++++++------------
 block/bfq-iosched.c | 151 +++++++++++++++++++++++++++++---------------
 block/bfq-iosched.h |  51 ++++++++++++---
 3 files changed, 194 insertions(+), 103 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 144bca006463..d243c429d9c0 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -700,6 +700,48 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	bfq_put_queue(bfqq);
 }
 
+static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
+			       struct bfq_queue *sync_bfqq,
+			       struct bfq_io_cq *bic,
+			       struct bfq_group *bfqg,
+			       unsigned int act_idx)
+{
+	if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
+		/* We are the only user of this bfqq, just move it */
+		if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
+			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+	} else {
+		struct bfq_queue *bfqq;
+
+		/*
+		 * The queue was merged to a different queue. Check
+		 * that the merge chain still belongs to the same
+		 * cgroup.
+		 */
+		for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
+			if (bfqq->entity.sched_data !=
+			    &bfqg->sched_data)
+				break;
+		if (bfqq) {
+			/*
+			 * Some queue changed cgroup so the merge is
+			 * not valid anymore. We cannot easily just
+			 * cancel the merge (by clearing new_bfqq) as
+			 * there may be other processes using this
+			 * queue and holding refs to all queues below
+			 * sync_bfqq->new_bfqq. Similarly if the merge
+			 * already happened, we need to detach from
+			 * bfqq now so that we cannot merge bio to a
+			 * request from the old cgroup.
+			 */
+			bfq_put_cooperator(sync_bfqq);
+			bfq_release_process_ref(bfqd, sync_bfqq);
+			bic_set_bfqq(bic, NULL, 1, act_idx);
+		}
+	}
+}
+
+
 /**
  * __bfq_bic_change_cgroup - move @bic to @bfqg.
  * @bfqd: the queue descriptor.
@@ -714,53 +756,24 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
 				     struct bfq_io_cq *bic,
 				     struct bfq_group *bfqg)
 {
-	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
-	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
 	struct bfq_entity *entity;
+	unsigned int act_idx;
 
-	if (async_bfqq) {
-		entity = &async_bfqq->entity;
-
-		if (entity->sched_data != &bfqg->sched_data) {
-			bic_set_bfqq(bic, NULL, 0);
-			bfq_release_process_ref(bfqd, async_bfqq);
-		}
-	}
+	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
+		struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0, act_idx);
+		struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1, act_idx);
 
-	if (sync_bfqq) {
-		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
-			/* We are the only user of this bfqq, just move it */
-			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
-				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
-		} else {
-			struct bfq_queue *bfqq;
+		if (async_bfqq) {
+			entity = &async_bfqq->entity;
 
-			/*
-			 * The queue was merged to a different queue. Check
-			 * that the merge chain still belongs to the same
-			 * cgroup.
-			 */
-			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
-				if (bfqq->entity.sched_data !=
-				    &bfqg->sched_data)
-					break;
-			if (bfqq) {
-				/*
-				 * Some queue changed cgroup so the merge is
-				 * not valid anymore. We cannot easily just
-				 * cancel the merge (by clearing new_bfqq) as
-				 * there may be other processes using this
-				 * queue and holding refs to all queues below
-				 * sync_bfqq->new_bfqq. Similarly if the merge
-				 * already happened, we need to detach from
-				 * bfqq now so that we cannot merge bio to a
-				 * request from the old cgroup.
-				 */
-				bfq_put_cooperator(sync_bfqq);
-				bfq_release_process_ref(bfqd, sync_bfqq);
-				bic_set_bfqq(bic, NULL, 1);
+			if (entity->sched_data != &bfqg->sched_data) {
+				bic_set_bfqq(bic, NULL, 0, act_idx);
+				bfq_release_process_ref(bfqd, async_bfqq);
 			}
 		}
+
+		if (sync_bfqq)
+			bfq_sync_bfqq_move(bfqd, sync_bfqq, bic, bfqg, act_idx);
 	}
 
 	return bfqg;
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 7ea427817f7f..5c69394bbb65 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -377,14 +377,19 @@ static const unsigned long bfq_late_stable_merging = 600;
 #define RQ_BIC(rq)		((struct bfq_io_cq *)((rq)->elv.priv[0]))
 #define RQ_BFQQ(rq)		((rq)->elv.priv[1])
 
-struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync)
+struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic,
+			      bool is_sync,
+			      unsigned int actuator_idx)
 {
-	return bic->bfqq[is_sync];
+	return bic->bfqq[is_sync][actuator_idx];
 }
 
 static void bfq_put_stable_ref(struct bfq_queue *bfqq);
 
-void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
+void bic_set_bfqq(struct bfq_io_cq *bic,
+		  struct bfq_queue *bfqq,
+		  bool is_sync,
+		  unsigned int actuator_idx)
 {
 	/*
 	 * If bfqq != NULL, then a non-stable queue merge between
@@ -399,7 +404,7 @@ void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
 	 * we cancel the stable merge if
 	 * bic->stable_merge_bfqq == bfqq.
 	 */
-	bic->bfqq[is_sync] = bfqq;
+	bic->bfqq[is_sync][actuator_idx] = bfqq;
 
 	if (bfqq && bic->stable_merge_bfqq == bfqq) {
 		/*
@@ -672,9 +677,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 {
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
-	struct bfq_queue *bfqq = bic ? bic_to_bfqq(bic, op_is_sync(opf)) : NULL;
 	int depth;
 	unsigned limit = data->q->nr_requests;
+	unsigned int act_idx;
 
 	/* Sync reads have full depth available */
 	if (op_is_sync(opf) && !op_is_write(opf)) {
@@ -684,14 +689,21 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 		limit = (limit * depth) >> bfqd->full_depth_shift;
 	}
 
-	/*
-	 * Does queue (or any parent entity) exceed number of requests that
-	 * should be available to it? Heavily limit depth so that it cannot
-	 * consume more available requests and thus starve other entities.
-	 */
-	if (bfqq && bfqq_request_over_limit(bfqq, limit))
-		depth = 1;
+	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
+		struct bfq_queue *bfqq =
+			bic ? bic_to_bfqq(bic, op_is_sync(opf), act_idx) : NULL;
 
+		/*
+		 * Does queue (or any parent entity) exceed number of
+		 * requests that should be available to it? Heavily
+		 * limit depth so that it cannot consume more
+		 * available requests and thus starve other entities.
+		 */
+		if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
+			depth = 1;
+			break;
+		}
+	}
 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
 		__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
 	if (depth)
@@ -2142,7 +2154,7 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	 * We reset waker detection logic also if too much time has passed
  	 * since the first detection. If wakeups are rare, pointless idling
 	 * doesn't hurt throughput that much. The condition below makes sure
-	 * we do not uselessly idle blocking waker in more than 1/64 cases. 
+	 * we do not uselessly idle blocking waker in more than 1/64 cases.
 	 */
 	if (bfqd->last_completed_rq_bfqq !=
 	    bfqq->tentative_waker_bfqq ||
@@ -2454,6 +2466,16 @@ static void bfq_remove_request(struct request_queue *q,
 
 }
 
+/* get the index of the actuator that will serve bio */
+static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
+{
+	/*
+	 * Multi-actuator support not complete yet, so always return 0
+	 * for the moment.
+	 */
+	return 0;
+}
+
 static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs)
 {
@@ -2478,7 +2500,8 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
 		 */
 		bfq_bic_update_cgroup(bic, bio);
 
-		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
+		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf),
+					     bfq_actuator_index(bfqd, bio));
 	} else {
 		bfqd->bio_bfqq = NULL;
 	}
@@ -3174,7 +3197,7 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
 	/*
 	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
 	 */
-	bic_set_bfqq(bic, new_bfqq, 1);
+	bic_set_bfqq(bic, new_bfqq, 1, bfqq->actuator_idx);
 	bfq_mark_bfqq_coop(new_bfqq);
 	/*
 	 * new_bfqq now belongs to at least two bics (it is a shared queue):
@@ -4808,11 +4831,12 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
 	 */
 	if (bfq_bfqq_wait_request(bfqq) ||
 	    (bfqq->dispatched != 0 && bfq_better_to_idle(bfqq))) {
+		unsigned int act_idx = bfqq->actuator_idx;
 		struct bfq_queue *async_bfqq =
-			bfqq->bic && bfqq->bic->bfqq[0] &&
-			bfq_bfqq_busy(bfqq->bic->bfqq[0]) &&
-			bfqq->bic->bfqq[0]->next_rq ?
-			bfqq->bic->bfqq[0] : NULL;
+			bfqq->bic && bfqq->bic->bfqq[0][act_idx] &&
+			bfq_bfqq_busy(bfqq->bic->bfqq[0][act_idx]) &&
+			bfqq->bic->bfqq[0][act_idx]->next_rq ?
+			bfqq->bic->bfqq[0][act_idx] : NULL;
 		struct bfq_queue *blocked_bfqq =
 			!hlist_empty(&bfqq->woken_list) ?
 			container_of(bfqq->woken_list.first,
@@ -4904,7 +4928,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
 		    icq_to_bic(async_bfqq->next_rq->elv.icq) == bfqq->bic &&
 		    bfq_serv_to_charge(async_bfqq->next_rq, async_bfqq) <=
 		    bfq_bfqq_budget_left(async_bfqq))
-			bfqq = bfqq->bic->bfqq[0];
+			bfqq = bfqq->bic->bfqq[0][act_idx];
 		else if (bfqq->waker_bfqq &&
 			   bfq_bfqq_busy(bfqq->waker_bfqq) &&
 			   bfqq->waker_bfqq->next_rq &&
@@ -5365,49 +5389,59 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	bfq_release_process_ref(bfqd, bfqq);
 }
 
-static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
+static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic,
+			      bool is_sync,
+			      unsigned int actuator_idx)
 {
-	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
+	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, actuator_idx);
 	struct bfq_data *bfqd;
 
 	if (bfqq)
 		bfqd = bfqq->bfqd; /* NULL if scheduler already exited */
 
 	if (bfqq && bfqd) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&bfqd->lock, flags);
 		bfqq->bic = NULL;
 		bfq_exit_bfqq(bfqd, bfqq);
-		bic_set_bfqq(bic, NULL, is_sync);
-		spin_unlock_irqrestore(&bfqd->lock, flags);
+		bic_set_bfqq(bic, NULL, is_sync, actuator_idx);
 	}
 }
 
 static void bfq_exit_icq(struct io_cq *icq)
 {
 	struct bfq_io_cq *bic = icq_to_bic(icq);
+	struct bfq_data *bfqd = bic_to_bfqd(bic);
+	unsigned long flags;
+	unsigned int act_idx;
+	unsigned int num_actuators;
 
-	if (bic->stable_merge_bfqq) {
-		struct bfq_data *bfqd = bic->stable_merge_bfqq->bfqd;
-
+	/*
+	 * bfqd is NULL if scheduler already exited, and in that case
+	 * this is the last time these queues are accessed.
+	 */
+	if (bfqd) {
+		spin_lock_irqsave(&bfqd->lock, flags);
+		num_actuators = bfqd->num_actuators;
+	} else {
 		/*
-		 * bfqd is NULL if scheduler already exited, and in
-		 * that case this is the last time bfqq is accessed.
+		 * bfqd->num_actuators not available any longer, cycle
+		 * over all possible per-actuator bfqqs in next
+		 * loop. We rely on bic being zeroed on creation, and
+		 * therefore on its unused per-actuator fields being
+		 * NULL.
 		 */
-		if (bfqd) {
-			unsigned long flags;
+		num_actuators = BFQ_MAX_ACTUATORS;
+	}
 
-			spin_lock_irqsave(&bfqd->lock, flags);
-			bfq_put_stable_ref(bic->stable_merge_bfqq);
-			spin_unlock_irqrestore(&bfqd->lock, flags);
-		} else {
-			bfq_put_stable_ref(bic->stable_merge_bfqq);
-		}
+	if (bic->stable_merge_bfqq)
+		bfq_put_stable_ref(bic->stable_merge_bfqq);
+
+	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
+		bfq_exit_icq_bfqq(bic, true, act_idx);
+		bfq_exit_icq_bfqq(bic, false, act_idx);
 	}
 
-	bfq_exit_icq_bfqq(bic, true);
-	bfq_exit_icq_bfqq(bic, false);
+	if (bfqd)
+		spin_unlock_irqrestore(&bfqd->lock, flags);
 }
 
 /*
@@ -5484,23 +5518,25 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
 
 	bic->ioprio = ioprio;
 
-	bfqq = bic_to_bfqq(bic, false);
+	bfqq = bic_to_bfqq(bic, false, bfq_actuator_index(bfqd, bio));
 	if (bfqq) {
 		bfq_release_process_ref(bfqd, bfqq);
 		bfqq = bfq_get_queue(bfqd, bio, false, bic, true);
-		bic_set_bfqq(bic, bfqq, false);
+		bic_set_bfqq(bic, bfqq, false, bfq_actuator_index(bfqd, bio));
 	}
 
-	bfqq = bic_to_bfqq(bic, true);
+	bfqq = bic_to_bfqq(bic, true, bfq_actuator_index(bfqd, bio));
 	if (bfqq)
 		bfq_set_next_ioprio_data(bfqq, bic);
 }
 
 static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-			  struct bfq_io_cq *bic, pid_t pid, int is_sync)
+			  struct bfq_io_cq *bic, pid_t pid, int is_sync,
+			  unsigned int act_idx)
 {
 	u64 now_ns = ktime_get_ns();
 
+	bfqq->actuator_idx = act_idx;
 	RB_CLEAR_NODE(&bfqq->entity.rb_node);
 	INIT_LIST_HEAD(&bfqq->fifo);
 	INIT_HLIST_NODE(&bfqq->burst_list_node);
@@ -5739,6 +5775,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
 	struct bfq_group *bfqg;
 
 	bfqg = bfq_bio_bfqg(bfqd, bio);
+
 	if (!is_sync) {
 		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
 						  ioprio);
@@ -5753,7 +5790,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
 
 	if (bfqq) {
 		bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
-			      is_sync);
+			      is_sync, bfq_actuator_index(bfqd, bio));
 		bfq_init_entity(&bfqq->entity, bfqg);
 		bfq_log_bfqq(bfqd, bfqq, "allocated");
 	} else {
@@ -6068,7 +6105,8 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
 		 * then complete the merge and redirect it to
 		 * new_bfqq.
 		 */
-		if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
+		if (bic_to_bfqq(RQ_BIC(rq), 1,
+				bfq_actuator_index(bfqd, rq->bio)) == bfqq)
 			bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
 					bfqq, new_bfqq);
 
@@ -6622,7 +6660,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
 		return bfqq;
 	}
 
-	bic_set_bfqq(bic, NULL, 1);
+	bic_set_bfqq(bic, NULL, 1, bfqq->actuator_idx);
 
 	bfq_put_cooperator(bfqq);
 
@@ -6636,7 +6674,8 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
 						   bool split, bool is_sync,
 						   bool *new_queue)
 {
-	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
+	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
+	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
 
 	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
 		return bfqq;
@@ -6648,7 +6687,7 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
 		bfq_put_queue(bfqq);
 	bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, split);
 
-	bic_set_bfqq(bic, bfqq, is_sync);
+	bic_set_bfqq(bic, bfqq, is_sync, act_idx);
 	if (split && is_sync) {
 		if ((bic->was_in_burst_list && bfqd->large_burst) ||
 		    bic->saved_in_large_burst)
@@ -7090,8 +7129,10 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
 	 * Grab a permanent reference to it, so that the normal code flow
 	 * will not attempt to free it.
+	 * Set zero as actuator index: we will pretend that
+	 * all I/O requests are for the same actuator.
 	 */
-	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
+	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0, 0);
 	bfqd->oom_bfqq.ref++;
 	bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
 	bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
@@ -7110,6 +7151,12 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 
 	bfqd->queue = q;
 
+	/*
+	 * Multi-actuator support not complete yet, default to single
+	 * actuator for the moment.
+	 */
+	bfqd->num_actuators = 1;
+
 	INIT_LIST_HEAD(&bfqd->dispatch);
 
 	hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC,
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 71f721670ab6..bfcbd8ea9000 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -33,6 +33,14 @@
  */
 #define BFQ_SOFTRT_WEIGHT_FACTOR	100
 
+/*
+ * Maximum number of actuators supported. This constant is used simply
+ * to define the size of the static array that will contain
+ * per-actuator data. The current value is hopefully a good upper
+ * bound to the possible number of actuators of any actual drive.
+ */
+#define BFQ_MAX_ACTUATORS 32
+
 struct bfq_entity;
 
 /**
@@ -225,12 +233,14 @@ struct bfq_ttime {
  * struct bfq_queue - leaf schedulable entity.
  *
  * A bfq_queue is a leaf request queue; it can be associated with an
- * io_context or more, if it  is  async or shared  between  cooperating
- * processes. @cgroup holds a reference to the cgroup, to be sure that it
- * does not disappear while a bfqq still references it (mostly to avoid
- * races between request issuing and task migration followed by cgroup
- * destruction).
- * All the fields are protected by the queue lock of the containing bfqd.
+ * io_context or more, if it is async or shared between cooperating
+ * processes. Besides, it contains I/O requests for only one actuator
+ * (an io_context is associated with a different bfq_queue for each
+ * actuator it generates I/O for). @cgroup holds a reference to the
+ * cgroup, to be sure that it does not disappear while a bfqq still
+ * references it (mostly to avoid races between request issuing and
+ * task migration followed by cgroup destruction).  All the fields are
+ * protected by the queue lock of the containing bfqd.
  */
 struct bfq_queue {
 	/* reference counter */
@@ -395,6 +405,9 @@ struct bfq_queue {
 	 * the woken queues when this queue exits.
 	 */
 	struct hlist_head woken_list;
+
+	/* index of the actuator this queue is associated with */
+	unsigned int actuator_idx;
 };
 
 /**
@@ -403,8 +416,17 @@ struct bfq_queue {
 struct bfq_io_cq {
 	/* associated io_cq structure */
 	struct io_cq icq; /* must be the first member */
-	/* array of two process queues, the sync and the async */
-	struct bfq_queue *bfqq[2];
+	/*
+	 * Matrix of associated process queues: first row for async
+	 * queues, second row sync queues. Each row contains one
+	 * column for each actuator. An I/O request generated by the
+	 * process is inserted into the queue pointed by bfqq[i][j] if
+	 * the request is to be served by the j-th actuator of the
+	 * drive, where i==0 or i==1, depending on whether the request
+	 * is async or sync. So there is a distinct queue for each
+	 * actuator.
+	 */
+	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
 	/* per (request_queue, blkcg) ioprio */
 	int ioprio;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
@@ -768,6 +790,13 @@ struct bfq_data {
 	 */
 	unsigned int word_depths[2][2];
 	unsigned int full_depth_shift;
+
+	/*
+	 * Number of independent actuators. This is equal to 1 in
+	 * case of single-actuator drives.
+	 */
+	unsigned int num_actuators;
+
 };
 
 enum bfqq_state_flags {
@@ -964,8 +993,10 @@ struct bfq_group {
 
 extern const int bfq_timeout;
 
-struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync);
-void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
+struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
+				unsigned int actuator_idx);
+void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync,
+				unsigned int actuator_idx);
 struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
 void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-- 
2.20.1



* [PATCH V6 2/8] block, bfq: forbid stable merging of queues associated with different actuators
From: Paolo Valente @ 2022-11-03 16:26 UTC
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen, Paolo Valente

If queues associated with different actuators are merged, then control
is lost on each actuator. Therefore some actuator may be
underutilized, and throughput may decrease. This problem cannot occur
with basic queue merging, because the latter is triggered by spatial
locality, and sectors for different actuators are not close to each
other. Yet it may happen with stable merging. To address this issue,
this commit prevents stable merging from occurring among queues
associated with different actuators.
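
In code terms, a queue whose actuator differs from that of the last
created queue simply resets the stable-merge anchor instead of being
scheduled for a merge (condensed from the hunk below):

if (bfqq->actuator_idx != last_bfqq_created->actuator_idx)
	*source_bfqq = bfqq;	/* no stable merge across actuators */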

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 5c69394bbb65..ec4b0e70265f 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -5705,9 +5705,13 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
 	 * it has been set already, but too long ago, then move it
 	 * forward to bfqq. Finally, move also if bfqq belongs to a
 	 * different group than last_bfqq_created, or if bfqq has a
-	 * different ioprio or ioprio_class. If none of these
-	 * conditions holds true, then try an early stable merge or
-	 * schedule a delayed stable merge.
+	 * different ioprio, ioprio_class or actuator_idx. If none of
+	 * these conditions holds true, then try an early stable merge
+	 * or schedule a delayed stable merge. As for the condition on
+	 * actuator_idx, the reason is that, if queues associated with
+	 * different actuators are merged, then control is lost on
+	 * each actuator. Therefore some actuator may be
+	 * underutilized, and throughput may decrease.
 	 *
 	 * A delayed merge is scheduled (instead of performing an
 	 * early merge), in case bfqq might soon prove to be more
@@ -5725,7 +5729,8 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
 			bfqq->creation_time) ||
 		bfqq->entity.parent != last_bfqq_created->entity.parent ||
 		bfqq->ioprio != last_bfqq_created->ioprio ||
-		bfqq->ioprio_class != last_bfqq_created->ioprio_class)
+		bfqq->ioprio_class != last_bfqq_created->ioprio_class ||
+		bfqq->actuator_idx != last_bfqq_created->actuator_idx)
 		*source_bfqq = bfqq;
 	else if (time_after_eq(last_bfqq_created->creation_time +
 				 bfqd->bfq_burst_interval,
-- 
2.20.1



* [PATCH V6 3/8] block, bfq: move io_cq-persistent bfqq data into a dedicated struct
From: Paolo Valente @ 2022-11-03 16:26 UTC
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Paolo Valente, Damien Le Moal, Gianmarco Lusvardi,
	Giulio Barabino, Emiliano Maccaferri

With a multi-actuator drive, a process may get associated with multiple
bfq_queues: one queue for each of the N actuators. So, the bfq_io_cq
data structure must be able to accommodate its per-queue persistent
information for N queues. Currently it stores this information for
just one queue, in several scalar fields.

This is a preparatory commit for accommodating persistent information
for N queues. In particular, this commit packs all the above scalar
fields into a single data structure. As a result, there is now only
one field, in bfq_io_cq, that stores all the above information. This
scalar field will then be turned into an array by a following commit.
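
For example, an access that previously touched a scalar field of
bfq_io_cq directly now goes through the new struct (drawn from the
hunks below):

/* before this commit */
bic->saved_inject_limit = bfqq->inject_limit;

/* after this commit */
bic->bfqq_data.saved_inject_limit = bfqq->inject_limit;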

Suggested-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Gianmarco Lusvardi <glusvardi@posteo.net>
Signed-off-by: Giulio Barabino <giuliobarabino99@gmail.com>
Signed-off-by: Emiliano Maccaferri <inbox@emilianomaccaferri.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 129 +++++++++++++++++++++++++-------------------
 block/bfq-iosched.h |  52 ++++++++++--------
 2 files changed, 105 insertions(+), 76 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index ec4b0e70265f..01528182c0c5 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -404,9 +404,10 @@ void bic_set_bfqq(struct bfq_io_cq *bic,
 	 * we cancel the stable merge if
 	 * bic->stable_merge_bfqq == bfqq.
 	 */
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 	bic->bfqq[is_sync][actuator_idx] = bfqq;
 
-	if (bfqq && bic->stable_merge_bfqq == bfqq) {
+	if (bfqq && bfqq_data->stable_merge_bfqq == bfqq) {
 		/*
 		 * Actually, these same instructions are executed also
 		 * in bfq_setup_cooperator, in case of abort or actual
@@ -415,9 +416,9 @@ void bic_set_bfqq(struct bfq_io_cq *bic,
 		 * did so, we would nest even more complexity in this
 		 * function.
 		 */
-		bfq_put_stable_ref(bic->stable_merge_bfqq);
+		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);
 
-		bic->stable_merge_bfqq = NULL;
+		bfqq_data->stable_merge_bfqq = NULL;
 	}
 }
 
@@ -1174,38 +1175,40 @@ static void
 bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
 		      struct bfq_io_cq *bic, bool bfq_already_existing)
 {
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 	unsigned int old_wr_coeff = 1;
 	bool busy = bfq_already_existing && bfq_bfqq_busy(bfqq);
 
-	if (bic->saved_has_short_ttime)
+	if (bfqq_data->saved_has_short_ttime)
 		bfq_mark_bfqq_has_short_ttime(bfqq);
 	else
 		bfq_clear_bfqq_has_short_ttime(bfqq);
 
-	if (bic->saved_IO_bound)
+	if (bfqq_data->saved_IO_bound)
 		bfq_mark_bfqq_IO_bound(bfqq);
 	else
 		bfq_clear_bfqq_IO_bound(bfqq);
 
-	bfqq->last_serv_time_ns = bic->saved_last_serv_time_ns;
-	bfqq->inject_limit = bic->saved_inject_limit;
-	bfqq->decrease_time_jif = bic->saved_decrease_time_jif;
+	bfqq->last_serv_time_ns = bfqq_data->saved_last_serv_time_ns;
+	bfqq->inject_limit = bfqq_data->saved_inject_limit;
+	bfqq->decrease_time_jif = bfqq_data->saved_decrease_time_jif;
 
-	bfqq->entity.new_weight = bic->saved_weight;
-	bfqq->ttime = bic->saved_ttime;
-	bfqq->io_start_time = bic->saved_io_start_time;
-	bfqq->tot_idle_time = bic->saved_tot_idle_time;
+	bfqq->entity.new_weight = bfqq_data->saved_weight;
+	bfqq->ttime = bfqq_data->saved_ttime;
+	bfqq->io_start_time = bfqq_data->saved_io_start_time;
+	bfqq->tot_idle_time = bfqq_data->saved_tot_idle_time;
 	/*
 	 * Restore weight coefficient only if low_latency is on
 	 */
 	if (bfqd->low_latency) {
 		old_wr_coeff = bfqq->wr_coeff;
-		bfqq->wr_coeff = bic->saved_wr_coeff;
+		bfqq->wr_coeff = bfqq_data->saved_wr_coeff;
 	}
-	bfqq->service_from_wr = bic->saved_service_from_wr;
-	bfqq->wr_start_at_switch_to_srt = bic->saved_wr_start_at_switch_to_srt;
-	bfqq->last_wr_start_finish = bic->saved_last_wr_start_finish;
-	bfqq->wr_cur_max_time = bic->saved_wr_cur_max_time;
+	bfqq->service_from_wr = bfqq_data->saved_service_from_wr;
+	bfqq->wr_start_at_switch_to_srt =
+		bfqq_data->saved_wr_start_at_switch_to_srt;
+	bfqq->last_wr_start_finish = bfqq_data->saved_last_wr_start_finish;
+	bfqq->wr_cur_max_time = bfqq_data->saved_wr_cur_max_time;
 
 	if (bfqq->wr_coeff > 1 && (bfq_bfqq_in_large_burst(bfqq) ||
 	    time_is_before_jiffies(bfqq->last_wr_start_finish +
@@ -1878,7 +1881,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 	wr_or_deserves_wr = bfqd->low_latency &&
 		(bfqq->wr_coeff > 1 ||
 		 (bfq_bfqq_sync(bfqq) &&
-		  (bfqq->bic || RQ_BIC(rq)->stably_merged) &&
+		  (bfqq->bic || RQ_BIC(rq)->bfqq_data.stably_merged) &&
 		   (*interactive || soft_rt)));
 
 	/*
@@ -2902,6 +2905,7 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		     void *io_struct, bool request, struct bfq_io_cq *bic)
 {
 	struct bfq_queue *in_service_bfqq, *new_bfqq;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	/* if a merge has already been setup, then proceed with that first */
 	if (bfqq->new_bfqq)
@@ -2923,21 +2927,21 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		 * stable merging) also if bic is associated with a
 		 * sync queue, but this bfqq is async
 		 */
-		if (bfq_bfqq_sync(bfqq) && bic->stable_merge_bfqq &&
+		if (bfq_bfqq_sync(bfqq) && bfqq_data->stable_merge_bfqq &&
 		    !bfq_bfqq_just_created(bfqq) &&
 		    time_is_before_jiffies(bfqq->split_time +
 					  msecs_to_jiffies(bfq_late_stable_merging)) &&
 		    time_is_before_jiffies(bfqq->creation_time +
 					   msecs_to_jiffies(bfq_late_stable_merging))) {
 			struct bfq_queue *stable_merge_bfqq =
-				bic->stable_merge_bfqq;
+				bfqq_data->stable_merge_bfqq;
 			int proc_ref = min(bfqq_process_refs(bfqq),
 					   bfqq_process_refs(stable_merge_bfqq));
 
 			/* deschedule stable merge, because done or aborted here */
 			bfq_put_stable_ref(stable_merge_bfqq);
 
-			bic->stable_merge_bfqq = NULL;
+			bfqq_data->stable_merge_bfqq = NULL;
 
 			if (!idling_boosts_thr_without_issues(bfqd, bfqq) &&
 			    proc_ref > 0) {
@@ -2946,10 +2950,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 					bfq_setup_merge(bfqq, stable_merge_bfqq);
 
 				if (new_bfqq) {
-					bic->stably_merged = true;
+					bfqq_data->stably_merged = true;
 					if (new_bfqq->bic)
-						new_bfqq->bic->stably_merged =
-									true;
+						new_bfqq->bic->bfqq_data.stably_merged =
+							true;
 				}
 				return new_bfqq;
 			} else
@@ -3048,6 +3052,7 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
 {
 	struct bfq_io_cq *bic = bfqq->bic;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	/*
 	 * If !bfqq->bic, the queue is already shared or its requests
@@ -3057,18 +3062,21 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
 	if (!bic)
 		return;
 
-	bic->saved_last_serv_time_ns = bfqq->last_serv_time_ns;
-	bic->saved_inject_limit = bfqq->inject_limit;
-	bic->saved_decrease_time_jif = bfqq->decrease_time_jif;
-
-	bic->saved_weight = bfqq->entity.orig_weight;
-	bic->saved_ttime = bfqq->ttime;
-	bic->saved_has_short_ttime = bfq_bfqq_has_short_ttime(bfqq);
-	bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
-	bic->saved_io_start_time = bfqq->io_start_time;
-	bic->saved_tot_idle_time = bfqq->tot_idle_time;
-	bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
-	bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
+	bfqq_data->saved_last_serv_time_ns = bfqq->last_serv_time_ns;
+	bfqq_data->saved_inject_limit = bfqq->inject_limit;
+	bfqq_data->saved_decrease_time_jif = bfqq->decrease_time_jif;
+
+	bfqq_data->saved_weight = bfqq->entity.orig_weight;
+	bfqq_data->saved_ttime = bfqq->ttime;
+	bfqq_data->saved_has_short_ttime =
+		bfq_bfqq_has_short_ttime(bfqq);
+	bfqq_data->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
+	bfqq_data->saved_io_start_time = bfqq->io_start_time;
+	bfqq_data->saved_tot_idle_time = bfqq->tot_idle_time;
+	bfqq_data->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
+	bfqq_data->was_in_burst_list =
+		!hlist_unhashed(&bfqq->burst_list_node);
+
 	if (unlikely(bfq_bfqq_just_created(bfqq) &&
 		     !bfq_bfqq_in_large_burst(bfqq) &&
 		     bfqq->bfqd->low_latency)) {
@@ -3081,17 +3089,21 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
 		 * to bfqq, so that to avoid that bfqq unjustly fails
 		 * to enjoy weight raising if split soon.
 		 */
-		bic->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff;
-		bic->saved_wr_start_at_switch_to_srt = bfq_smallest_from_now();
-		bic->saved_wr_cur_max_time = bfq_wr_duration(bfqq->bfqd);
-		bic->saved_last_wr_start_finish = jiffies;
+		bfqq_data->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff;
+		bfqq_data->saved_wr_start_at_switch_to_srt =
+			bfq_smallest_from_now();
+		bfqq_data->saved_wr_cur_max_time =
+			bfq_wr_duration(bfqq->bfqd);
+		bfqq_data->saved_last_wr_start_finish = jiffies;
 	} else {
-		bic->saved_wr_coeff = bfqq->wr_coeff;
-		bic->saved_wr_start_at_switch_to_srt =
+		bfqq_data->saved_wr_coeff = bfqq->wr_coeff;
+		bfqq_data->saved_wr_start_at_switch_to_srt =
 			bfqq->wr_start_at_switch_to_srt;
-		bic->saved_service_from_wr = bfqq->service_from_wr;
-		bic->saved_last_wr_start_finish = bfqq->last_wr_start_finish;
-		bic->saved_wr_cur_max_time = bfqq->wr_cur_max_time;
+		bfqq_data->saved_service_from_wr =
+			bfqq->service_from_wr;
+		bfqq_data->saved_last_wr_start_finish =
+			bfqq->last_wr_start_finish;
+		bfqq_data->saved_wr_cur_max_time = bfqq->wr_cur_max_time;
 	}
 }
 
@@ -5413,6 +5425,7 @@ static void bfq_exit_icq(struct io_cq *icq)
 	unsigned long flags;
 	unsigned int act_idx;
 	unsigned int num_actuators;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	/*
 	 * bfqd is NULL if scheduler already exited, and in that case
@@ -5432,8 +5445,8 @@ static void bfq_exit_icq(struct io_cq *icq)
 		num_actuators = BFQ_MAX_ACTUATORS;
 	}
 
-	if (bic->stable_merge_bfqq)
-		bfq_put_stable_ref(bic->stable_merge_bfqq);
+	if (bfqq_data->stable_merge_bfqq)
+		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);
 
 	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
 		bfq_exit_icq_bfqq(bic, true, act_idx);
@@ -5624,13 +5637,14 @@ bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 {
 	struct bfq_queue *new_bfqq =
 		bfq_setup_merge(bfqq, last_bfqq_created);
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	if (!new_bfqq)
 		return bfqq;
 
 	if (new_bfqq->bic)
-		new_bfqq->bic->stably_merged = true;
-	bic->stably_merged = true;
+		new_bfqq->bic->bfqq_data.stably_merged = true;
+	bfqq_data->stably_merged = true;
 
 	/*
 	 * Reusing merge functions. This implies that
@@ -5699,6 +5713,7 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
 		&bfqd->last_bfqq_created;
 
 	struct bfq_queue *last_bfqq_created = *source_bfqq;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	/*
 	 * If last_bfqq_created has not been set yet, then init it. If
@@ -5760,7 +5775,7 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
 			/*
 			 * Record the bfqq to merge to.
 			 */
-			bic->stable_merge_bfqq = last_bfqq_created;
+			bfqq_data->stable_merge_bfqq = last_bfqq_created;
 		}
 	}
 
@@ -6681,6 +6696,7 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
 {
 	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
 	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
 		return bfqq;
@@ -6694,12 +6710,12 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
 
 	bic_set_bfqq(bic, bfqq, is_sync, act_idx);
 	if (split && is_sync) {
-		if ((bic->was_in_burst_list && bfqd->large_burst) ||
-		    bic->saved_in_large_burst)
+		if ((bfqq_data->was_in_burst_list && bfqd->large_burst) ||
+		    bfqq_data->saved_in_large_burst)
 			bfq_mark_bfqq_in_large_burst(bfqq);
 		else {
 			bfq_clear_bfqq_in_large_burst(bfqq);
-			if (bic->was_in_burst_list)
+			if (bfqq_data->was_in_burst_list)
 				/*
 				 * If bfqq was in the current
 				 * burst list before being
@@ -6788,6 +6804,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
 	struct bfq_queue *bfqq;
 	bool new_queue = false;
 	bool bfqq_already_existing = false, split = false;
+	struct bfq_iocq_bfqq_data *bfqq_data;
 
 	if (unlikely(!rq->elv.icq))
 		return NULL;
@@ -6811,15 +6828,17 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
 	bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, false, is_sync,
 					 &new_queue);
 
+	bfqq_data = &bic->bfqq_data;
+
 	if (likely(!new_queue)) {
 		/* If the queue was seeky for too long, break it apart. */
 		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq) &&
-			!bic->stably_merged) {
+			!bfqq_data->stably_merged) {
 			struct bfq_queue *old_bfqq = bfqq;
 
 			/* Update bic before losing reference to bfqq */
 			if (bfq_bfqq_in_large_burst(bfqq))
-				bic->saved_in_large_burst = true;
+				bfqq_data->saved_in_large_burst = true;
 
 			bfqq = bfq_split_bfqq(bic, bfqq);
 			split = true;
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index bfcbd8ea9000..f2e8ab91951c 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -411,27 +411,9 @@ struct bfq_queue {
 };
 
 /**
- * struct bfq_io_cq - per (request_queue, io_context) structure.
- */
-struct bfq_io_cq {
-	/* associated io_cq structure */
-	struct io_cq icq; /* must be the first member */
-	/*
-	 * Matrix of associated process queues: first row for async
-	 * queues, second row sync queues. Each row contains one
-	 * column for each actuator. An I/O request generated by the
-	 * process is inserted into the queue pointed by bfqq[i][j] if
-	 * the request is to be served by the j-th actuator of the
-	 * drive, where i==0 or i==1, depending on whether the request
-	 * is async or sync. So there is a distinct queue for each
-	 * actuator.
-	 */
-	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
-	/* per (request_queue, blkcg) ioprio */
-	int ioprio;
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
-	uint64_t blkcg_serial_nr; /* the current blkcg serial */
-#endif
+ * struct bfq_iocq_bfqq_data - bfqq data unique and persistent for associated bfq_io_cq
+ */
+struct bfq_iocq_bfqq_data {
 	/*
 	 * Snapshot of the has_short_time flag before merging; taken
 	 * to remember its value while the queue is merged, so as to
@@ -486,6 +468,34 @@ struct bfq_io_cq {
 	struct bfq_queue *stable_merge_bfqq;
 
 	bool stably_merged;	/* non splittable if true */
+};
+
+/**
+ * struct bfq_io_cq - per (request_queue, io_context) structure.
+ */
+struct bfq_io_cq {
+	/* associated io_cq structure */
+	struct io_cq icq; /* must be the first member */
+	/*
+	 * Matrix of associated process queues: first row for async
+	 * queues, second row sync queues. Each row contains one
+	 * column for each actuator. An I/O request generated by the
+	 * process is inserted into the queue pointed by bfqq[i][j] if
+	 * the request is to be served by the j-th actuator of the
+	 * drive, where i==0 or i==1, depending on whether the request
+	 * is async or sync. So there is a distinct queue for each
+	 * actuator.
+	 */
+	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
+	/* per (request_queue, blkcg) ioprio */
+	int ioprio;
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	uint64_t blkcg_serial_nr; /* the current blkcg serial */
+#endif
+
+	/* persistent data for associated synchronous process queue */
+	struct bfq_iocq_bfqq_data bfqq_data;
+
 	unsigned int requests;	/* Number of requests this process has in flight */
 };
 
-- 
2.20.1



* [PATCH V6 4/8] block, bfq: turn bfqq_data into an array in bfq_io_cq
From: Paolo Valente @ 2022-11-03 16:26 UTC
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Paolo Valente, Gabriele Felici, Gianmarco Lusvardi,
	Giulio Barabino, Emiliano Maccaferri

When a bfq_queue Q is merged with another queue, several pieces of
information are saved about Q. These pieces are stored in the
bfqq_data field in the bfq_io_cq data structure of the process
associated with Q.

Yet, with a multi-actuator drive, a process may get associated with
multiple bfq_queues: one queue for each of the N actuators. Each of
these queues may undergo a merge. So, the bfq_io_cq data structure
must be able to accommodate the above information for N queues.

This commit solves this problem by turning the bfqq_data scalar field
into an array of N elements (and by changing the code that handles
this field accordingly).
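
After this change, each per-actuator element is selected through the
actuator index of the bfq_queue at hand, as done throughout the hunks
below:

/* in bfq_io_cq: one element per actuator, instead of one scalar */
struct bfq_iocq_bfqq_data bfqq_data[BFQ_MAX_ACTUATORS];

/* accesses pick the element of the queue's actuator */
struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[bfqq->actuator_idx];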

This solution is written under the assumption that bfq_queues
associated with different actuators cannot be cross-merged. This
assumption holds naturally with basic queue merging: the latter is
triggered by spatial locality, and sectors for different actuators are
not close to each other. As for stable cross-merging, it is already
forbidden across actuators by a previous commit.

Signed-off-by: Gabriele Felici <felicigb@gmail.com>
Signed-off-by: Gianmarco Lusvardi <glusvardi@posteo.net>
Signed-off-by: Giulio Barabino <giuliobarabino99@gmail.com>
Signed-off-by: Emiliano Maccaferri <inbox@emilianomaccaferri.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 72 ++++++++++++++++++++++++---------------------
 block/bfq-iosched.h | 12 +++++---
 2 files changed, 47 insertions(+), 37 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 01528182c0c5..f44bac054aaf 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -404,7 +404,7 @@ void bic_set_bfqq(struct bfq_io_cq *bic,
 	 * we cancel the stable merge if
 	 * bic->stable_merge_bfqq == bfqq.
 	 */
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[actuator_idx];
 	bic->bfqq[is_sync][actuator_idx] = bfqq;
 
 	if (bfqq && bfqq_data->stable_merge_bfqq == bfqq) {
@@ -1175,9 +1175,10 @@ static void
 bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
 		      struct bfq_io_cq *bic, bool bfq_already_existing)
 {
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 	unsigned int old_wr_coeff = 1;
 	bool busy = bfq_already_existing && bfq_bfqq_busy(bfqq);
+	unsigned int a_idx = bfqq->actuator_idx;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
 
 	if (bfqq_data->saved_has_short_ttime)
 		bfq_mark_bfqq_has_short_ttime(bfqq);
@@ -1827,6 +1828,16 @@ static bool bfq_bfqq_higher_class_or_weight(struct bfq_queue *bfqq,
 	return bfqq_weight > in_serv_weight;
 }
 
+/* get the index of the actuator that will serve bio */
+static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
+{
+	/*
+	 * Multi-actuator support not complete yet, so always return 0
+	 * for the moment.
+	 */
+	return 0;
+}
+
 static bool bfq_better_to_idle(struct bfq_queue *bfqq);
 
 static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
@@ -1881,7 +1892,9 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 	wr_or_deserves_wr = bfqd->low_latency &&
 		(bfqq->wr_coeff > 1 ||
 		 (bfq_bfqq_sync(bfqq) &&
-		  (bfqq->bic || RQ_BIC(rq)->bfqq_data.stably_merged) &&
+		  (bfqq->bic ||
+		   RQ_BIC(rq)->bfqq_data[bfq_actuator_index(bfqd, rq->bio)]
+		   .stably_merged) &&
 		   (*interactive || soft_rt)));
 
 	/*
@@ -2469,16 +2482,6 @@ static void bfq_remove_request(struct request_queue *q,
 
 }
 
-/* get the index of the actuator that will serve bio */
-static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
-{
-	/*
-	 * Multi-actuator support not complete yet, so always return 0
-	 * for the moment.
-	 */
-	return 0;
-}
-
 static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs)
 {
@@ -2905,7 +2908,8 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		     void *io_struct, bool request, struct bfq_io_cq *bic)
 {
 	struct bfq_queue *in_service_bfqq, *new_bfqq;
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
+	unsigned int a_idx = bfqq->actuator_idx;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
 
 	/* if a merge has already been setup, then proceed with that first */
 	if (bfqq->new_bfqq)
@@ -2952,8 +2956,9 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 				if (new_bfqq) {
 					bfqq_data->stably_merged = true;
 					if (new_bfqq->bic)
-						new_bfqq->bic->bfqq_data.stably_merged =
-							true;
+						new_bfqq->bic->bfqq_data
+							[new_bfqq->actuator_idx]
+							.stably_merged = true;
 				}
 				return new_bfqq;
 			} else
@@ -3052,7 +3057,9 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
 {
 	struct bfq_io_cq *bic = bfqq->bic;
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
+	/* State must be saved for the right queue index. */
+	unsigned int a_idx = bfqq->actuator_idx;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
 
 	/*
 	 * If !bfqq->bic, the queue is already shared or its requests
@@ -3063,7 +3070,7 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
 		return;
 
 	bfqq_data->saved_last_serv_time_ns = bfqq->last_serv_time_ns;
-	bfqq_data->saved_inject_limit = bfqq->inject_limit;
+	bfqq_data->saved_inject_limit =	bfqq->inject_limit;
 	bfqq_data->saved_decrease_time_jif = bfqq->decrease_time_jif;
 
 	bfqq_data->saved_weight = bfqq->entity.orig_weight;
@@ -5425,7 +5432,7 @@ static void bfq_exit_icq(struct io_cq *icq)
 	unsigned long flags;
 	unsigned int act_idx;
 	unsigned int num_actuators;
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
+	struct bfq_iocq_bfqq_data *bfqq_data = bic->bfqq_data;
 
 	/*
 	 * bfqd is NULL if scheduler already exited, and in that case
@@ -5445,10 +5452,10 @@ static void bfq_exit_icq(struct io_cq *icq)
 		num_actuators = BFQ_MAX_ACTUATORS;
 	}
 
-	if (bfqq_data->stable_merge_bfqq)
-		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);
-
 	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
+		if (bfqq_data[act_idx].stable_merge_bfqq)
+			bfq_put_stable_ref(bfqq_data[act_idx].stable_merge_bfqq);
+
 		bfq_exit_icq_bfqq(bic, true, act_idx);
 		bfq_exit_icq_bfqq(bic, false, act_idx);
 	}
@@ -5635,16 +5642,16 @@ bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 			  struct bfq_io_cq *bic,
 			  struct bfq_queue *last_bfqq_created)
 {
+	unsigned int a_idx = last_bfqq_created->actuator_idx;
 	struct bfq_queue *new_bfqq =
 		bfq_setup_merge(bfqq, last_bfqq_created);
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	if (!new_bfqq)
 		return bfqq;
 
 	if (new_bfqq->bic)
-		new_bfqq->bic->bfqq_data.stably_merged = true;
-	bfqq_data->stably_merged = true;
+		new_bfqq->bic->bfqq_data[a_idx].stably_merged = true;
+	bic->bfqq_data[a_idx].stably_merged = true;
 
 	/*
 	 * Reusing merge functions. This implies that
@@ -5713,7 +5720,6 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
 		&bfqd->last_bfqq_created;
 
 	struct bfq_queue *last_bfqq_created = *source_bfqq;
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
 
 	/*
 	 * If last_bfqq_created has not been set yet, then init it. If
@@ -5775,7 +5781,8 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
 			/*
 			 * Record the bfqq to merge to.
 			 */
-			bfqq_data->stable_merge_bfqq = last_bfqq_created;
+			bic->bfqq_data[last_bfqq_created->actuator_idx].stable_merge_bfqq =
+				last_bfqq_created;
 		}
 	}
 
@@ -6696,7 +6703,7 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
 {
 	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
 	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
-	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
+	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[act_idx];
 
 	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
 		return bfqq;
@@ -6804,7 +6811,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
 	struct bfq_queue *bfqq;
 	bool new_queue = false;
 	bool bfqq_already_existing = false, split = false;
-	struct bfq_iocq_bfqq_data *bfqq_data;
+	unsigned int a_idx = bfq_actuator_index(bfqd, bio);
 
 	if (unlikely(!rq->elv.icq))
 		return NULL;
@@ -6828,17 +6835,16 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
 	bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, false, is_sync,
 					 &new_queue);
 
-	bfqq_data = &bic->bfqq_data;
-
 	if (likely(!new_queue)) {
 		/* If the queue was seeky for too long, break it apart. */
 		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq) &&
-			!bfqq_data->stably_merged) {
+			!bic->bfqq_data[a_idx].stably_merged) {
 			struct bfq_queue *old_bfqq = bfqq;
 
 			/* Update bic before losing reference to bfqq */
 			if (bfq_bfqq_in_large_burst(bfqq))
-				bfqq_data->saved_in_large_burst = true;
+				bic->bfqq_data[a_idx].saved_in_large_burst =
+					true;
 
 			bfqq = bfq_split_bfqq(bic, bfqq);
 			split = true;
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index f2e8ab91951c..e27897d66a0f 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -416,7 +416,7 @@ struct bfq_queue {
 struct bfq_iocq_bfqq_data {
 	/*
 	 * Snapshot of the has_short_time flag before merging; taken
-	 * to remember its value while the queue is merged, so as to
+	 * to remember its values while the queue is merged, so as to
 	 * be able to restore it in case of split.
 	 */
 	bool saved_has_short_ttime;
@@ -430,7 +430,7 @@ struct bfq_iocq_bfqq_data {
 	u64 saved_tot_idle_time;
 
 	/*
-	 * Same purpose as the previous fields for the value of the
+	 * Same purpose as the previous fields for the values of the
 	 * field keeping the queue's belonging to a large burst
 	 */
 	bool saved_in_large_burst;
@@ -493,8 +493,12 @@ struct bfq_io_cq {
 	uint64_t blkcg_serial_nr; /* the current blkcg serial */
 #endif
 
-	/* persistent data for associated synchronous process queue */
-	struct bfq_iocq_bfqq_data bfqq_data;
+	/*
+	 * Persistent data for associated synchronous process queues
+	 * (one queue per actuator, see field bfqq above). In
+	 * particular, each of these queues may undergo a merge.
+	 */
+	struct bfq_iocq_bfqq_data bfqq_data[BFQ_MAX_ACTUATORS];
 
 	unsigned int requests;	/* Number of requests this process has in flight */
 };
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V6 5/8] block, bfq: split also async bfq_queues on a per-actuator basis
  2022-11-03 16:26 [PATCH V6 0/8] block, bfq: extend bfq to support multi-actuator drives Paolo Valente
                   ` (3 preceding siblings ...)
  2022-11-03 16:26 ` [PATCH V6 4/8] block, bfq: turn bfqq_data into an array in bfq_io_cq Paolo Valente
@ 2022-11-03 16:26 ` Paolo Valente
  2022-11-21  0:52   ` Damien Le Moal
  2022-11-03 16:26 ` [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue Paolo Valente
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-11-03 16:26 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Davide Zini, Paolo Valente

From: Davide Zini <davidezini2@gmail.com>

Like sync bfq_queues, async bfq_queues also need to be split on a
per-actuator basis.
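
With this change, the per-group async queue arrays gain an actuator
dimension: for instance, the lookup of the RT-class async queue for a
given priority level and actuator becomes (a sketch of the new
indexing only; see the diff below for the actual code)

	bfqq = bfqg->async_bfqq[0][ioprio][act_idx];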

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Davide Zini <davidezini2@gmail.com>
---
 block/bfq-iosched.c | 41 +++++++++++++++++++++++------------------
 block/bfq-iosched.h |  8 ++++----
 2 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index f44bac054aaf..c94b80e3f685 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2673,14 +2673,16 @@ static void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
 void bfq_end_wr_async_queues(struct bfq_data *bfqd,
 			     struct bfq_group *bfqg)
 {
-	int i, j;
-
-	for (i = 0; i < 2; i++)
-		for (j = 0; j < IOPRIO_NR_LEVELS; j++)
-			if (bfqg->async_bfqq[i][j])
-				bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]);
-	if (bfqg->async_idle_bfqq)
-		bfq_bfqq_end_wr(bfqg->async_idle_bfqq);
+	int i, j, k;
+
+	for (k = 0; k < bfqd->num_actuators; k++) {
+		for (i = 0; i < 2; i++)
+			for (j = 0; j < IOPRIO_NR_LEVELS; j++)
+				if (bfqg->async_bfqq[i][j][k])
+					bfq_bfqq_end_wr(bfqg->async_bfqq[i][j][k]);
+		if (bfqg->async_idle_bfqq[k])
+			bfq_bfqq_end_wr(bfqg->async_idle_bfqq[k]);
+	}
 }
 
 static void bfq_end_wr(struct bfq_data *bfqd)
@@ -5620,18 +5622,18 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 
 static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
 					       struct bfq_group *bfqg,
-					       int ioprio_class, int ioprio)
+					       int ioprio_class, int ioprio, int act_idx)
 {
 	switch (ioprio_class) {
 	case IOPRIO_CLASS_RT:
-		return &bfqg->async_bfqq[0][ioprio];
+		return &bfqg->async_bfqq[0][ioprio][act_idx];
 	case IOPRIO_CLASS_NONE:
 		ioprio = IOPRIO_BE_NORM;
 		fallthrough;
 	case IOPRIO_CLASS_BE:
-		return &bfqg->async_bfqq[1][ioprio];
+		return &bfqg->async_bfqq[1][ioprio][act_idx];
 	case IOPRIO_CLASS_IDLE:
-		return &bfqg->async_idle_bfqq;
+		return &bfqg->async_idle_bfqq[act_idx];
 	default:
 		return NULL;
 	}
@@ -5805,7 +5807,8 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
 
 	if (!is_sync) {
 		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
-						  ioprio);
+						  ioprio,
+						  bfq_actuator_index(bfqd, bio));
 		bfqq = *async_bfqq;
 		if (bfqq)
 			goto out;
@@ -7022,13 +7025,15 @@ static void __bfq_put_async_bfqq(struct bfq_data *bfqd,
  */
 void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
 {
-	int i, j;
+	int i, j, k;
 
-	for (i = 0; i < 2; i++)
-		for (j = 0; j < IOPRIO_NR_LEVELS; j++)
-			__bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
+	for (k = 0; k < bfqd->num_actuators; k++) {
+		for (i = 0; i < 2; i++)
+			for (j = 0; j < IOPRIO_NR_LEVELS; j++)
+				__bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j][k]);
 
-	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
+		__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq[k]);
+	}
 }
 
 /*
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index e27897d66a0f..f1c2e77cbf9a 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -976,8 +976,8 @@ struct bfq_group {
 
 	void *bfqd;
 
-	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS];
-	struct bfq_queue *async_idle_bfqq;
+	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS][BFQ_MAX_ACTUATORS];
+	struct bfq_queue *async_idle_bfqq[BFQ_MAX_ACTUATORS];
 
 	struct bfq_entity *my_entity;
 
@@ -993,8 +993,8 @@ struct bfq_group {
 	struct bfq_entity entity;
 	struct bfq_sched_data sched_data;
 
-	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS];
-	struct bfq_queue *async_idle_bfqq;
+	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS][BFQ_MAX_ACTUATORS];
+	struct bfq_queue *async_idle_bfqq[BFQ_MAX_ACTUATORS];
 
 	struct rb_root rq_pos_tree;
 };
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-11-03 16:26 [PATCH V6 0/8] block, bfq: extend bfq to support multi-actuator drives Paolo Valente
                   ` (4 preceding siblings ...)
  2022-11-03 16:26 ` [PATCH V6 5/8] block, bfq: split also async bfq_queues on a per-actuator basis Paolo Valente
@ 2022-11-03 16:26 ` Paolo Valente
  2022-11-21  1:01   ` Damien Le Moal
  2022-11-03 16:26 ` [PATCH V6 7/8] block, bfq: inject I/O to underutilized actuators Paolo Valente
  2022-11-03 16:26 ` [PATCH V6 8/8] block, bfq: balance I/O injection among " Paolo Valente
  7 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-11-03 16:26 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Federico Gavioli, Paolo Valente

From: Federico Gavioli <f.gavioli97@gmail.com>

This patch implements the code to gather the content of the
independent_access_ranges structure from the request_queue and copy
it into the queue's bfq_data. This copy is done at queue initialization.

We copy the access ranges into the bfq_data to avoid taking the queue
lock each time we access the ranges.

This implementation, however, puts a limit on the maximum number of
independent ranges supported by the scheduler: the constant
BFQ_MAX_ACTUATORS. This limit avoids the allocation of dynamic
memory.
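
As a concrete illustration (hypothetical geometry, not taken from
this patch): on a dual-actuator drive with 2 * N sectors split evenly
between the actuators, the copied ranges would end up as

	bfqd->ia_ranges[0].sector = 0;
	bfqd->ia_ranges[0].nr_sectors = N;
	bfqd->ia_ranges[1].sector = N;
	bfqd->ia_ranges[1].nr_sectors = N;

so that the per-bio lookup in bfq_actuator_index() reduces to a
bounded scan of at most BFQ_MAX_ACTUATORS entries, with no queue
lock taken.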

Co-developed-by: Rory Chen <rory.c.chen@seagate.com>
Signed-off-by: Rory Chen <rory.c.chen@seagate.com>
Signed-off-by: Federico Gavioli <f.gavioli97@gmail.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 54 ++++++++++++++++++++++++++++++++++++++-------
 block/bfq-iosched.h |  5 +++++
 2 files changed, 51 insertions(+), 8 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index c94b80e3f685..106c8820cc5c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1831,10 +1831,26 @@ static bool bfq_bfqq_higher_class_or_weight(struct bfq_queue *bfqq,
 /* get the index of the actuator that will serve bio */
 static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
 {
-	/*
-	 * Multi-actuator support not complete yet, so always return 0
-	 * for the moment.
-	 */
+	struct blk_independent_access_range *iar;
+	unsigned int i;
+	sector_t end;
+
+	/* no search needed if one or zero ranges present */
+	if (bfqd->num_actuators < 2)
+		return 0;
+
+	/* bio_end_sector(bio) gives the sector after the last one */
+	end = bio_end_sector(bio) - 1;
+
+	for (i = 0; i < bfqd->num_actuators; i++) {
+		iar = &(bfqd->ia_ranges[i]);
+		if (end >= iar->sector && end < iar->sector + iar->nr_sectors)
+			return i;
+	}
+
+	WARN_ONCE(true,
+		  "bfq_actuator_index: bio sector out of ranges: end=%llu\n",
+		  end);
 	return 0;
 }
 
@@ -2479,7 +2495,6 @@ static void bfq_remove_request(struct request_queue *q,
 
 	if (rq->cmd_flags & REQ_META)
 		bfqq->meta_pending--;
-
 }
 
 static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
@@ -7144,6 +7159,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 {
 	struct bfq_data *bfqd;
 	struct elevator_queue *eq;
+	unsigned int i;
+	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
 
 	eq = elevator_alloc(q, e);
 	if (!eq)
@@ -7187,10 +7204,31 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	bfqd->queue = q;
 
 	/*
-	 * Multi-actuator support not complete yet, default to single
-	 * actuator for the moment.
+	 * If the disk supports multiple actuators, we copy the independent
+	 * access ranges from the request queue structure.
 	 */
-	bfqd->num_actuators = 1;
+	spin_lock_irq(&q->queue_lock);
+	if (ia_ranges) {
+		/*
+		 * Check if the disk ia_ranges size exceeds the current bfq
+		 * actuator limit.
+		 */
+		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
+			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
+				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
+			pr_crit("Falling back to single actuator mode.\n");
+			bfqd->num_actuators = 0;
+		} else {
+			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
+
+			for (i = 0; i < bfqd->num_actuators; i++)
+				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
+		}
+	} else {
+		bfqd->num_actuators = 0;
+	}
+
+	spin_unlock_irq(&q->queue_lock);
 
 	INIT_LIST_HEAD(&bfqd->dispatch);
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index f1c2e77cbf9a..90130a893c8f 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -811,6 +811,11 @@ struct bfq_data {
 	 */
 	unsigned int num_actuators;
 
+	/*
+	 * Disk independent access ranges for each actuator
+	 * in this device.
+	 */
+	struct blk_independent_access_range ia_ranges[BFQ_MAX_ACTUATORS];
 };
 
 enum bfqq_state_flags {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V6 7/8] block, bfq: inject I/O to underutilized actuators
  2022-11-03 16:26 [PATCH V6 0/8] block, bfq: extend bfq to support multi-actuator drives Paolo Valente
                   ` (5 preceding siblings ...)
  2022-11-03 16:26 ` [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue Paolo Valente
@ 2022-11-03 16:26 ` Paolo Valente
  2022-11-21  1:17   ` Damien Le Moal
  2022-11-03 16:26 ` [PATCH V6 8/8] block, bfq: balance I/O injection among " Paolo Valente
  7 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-11-03 16:26 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Davide Zini, Paolo Valente

From: Davide Zini <davidezini2@gmail.com>

The main service scheme of BFQ for sync I/O is serving one sync
bfq_queue at a time, for a while. In particular, BFQ enforces this
scheme when it deems the latter necessary to boost throughput or
to preserve service guarantees. Unfortunately, when BFQ enforces
this policy, only one actuator at a time gets served for a while,
because each bfq_queue contains I/O only for one actuator. The
other actuators may remain underutilized.

Actually, BFQ may serve (inject) extra I/O, taken from other
bfq_queues, in parallel with that of the in-service queue. This
injection mechanism may provide the ground for dealing also with
the above actuator-underutilization problem. Yet BFQ does not take
the actuator load into account when choosing which queue to pick
extra I/O from. In addition, BFQ may happen to inject extra I/O
only when the in-service queue is temporarily empty.

In view of these facts, this commit extends the
injection mechanism in such a way that the latter:
(1) takes into account also the actuator load;
(2) checks such a load on each dispatch, and injects I/O for an
    underutilized actuator, if there is one and there is I/O for it.

To perform the check in (2), this commit introduces a load
threshold, currently set to 4.  A linear scan of each actuator is
performed, until an actuator is found for which the following two
conditions hold: the load of the actuator is below the threshold,
and there is at least one non-in-service queue that contains I/O
for that actuator. If such a pair (actuator, queue) is found, then
the head request of that queue is returned for dispatch, instead
of the head request of the in-service queue.
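
As a sketch of the check done on each dispatch (illustrative numbers,
not from the patch): with actuator_load_threshold equal to 4 and
per-actuator in-flight counts

	rq_in_driver[0] = 5;	/* at or above threshold: fully utilized */
	rq_in_driver[1] = 2;	/* below threshold: underutilized */

actuator 0 is skipped and, if some active bfq_queue contains I/O for
actuator 1, the head request of that queue is returned for dispatch
in place of the head request of the in-service queue.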

We have set the threshold, empirically, to the minimum possible
value for which an actuator is fully utilized, or close to being
fully utilized. By doing so, injected I/O 'steals' as few
drive-queue slots as possible from the in-service queue. This
minimizes the probability that the service of I/O from the
in-service bfq_queue gets delayed because of slot exhaustion,
i.e., because all the slots of the drive queue are filled with
I/O injected from other queues (NCQ provides for 32 slots).

This new mechanism also counters actuator underutilization in the
case of asymmetric configurations of bfq_queues: namely, if there
are few bfq_queues containing I/O for some actuators and many
bfq_queues containing I/O for other actuators, or if the
bfq_queues containing I/O for some actuators have lower weights
than the other bfq_queues.

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Davide Zini <davidezini2@gmail.com>
---
 block/bfq-cgroup.c  |   2 +-
 block/bfq-iosched.c | 139 +++++++++++++++++++++++++++++++++-----------
 block/bfq-iosched.h |  39 ++++++++++++-
 block/bfq-wf2q.c    |   2 +-
 4 files changed, 143 insertions(+), 39 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index d243c429d9c0..38ccfe55ad46 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -694,7 +694,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		bfq_activate_bfqq(bfqd, bfqq);
 	}
 
-	if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
+	if (!bfqd->in_service_queue && !bfqd->tot_rq_in_driver)
 		bfq_schedule_dispatch(bfqd);
 	/* release extra ref taken above, bfqq may happen to be freed now */
 	bfq_put_queue(bfqq);
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 106c8820cc5c..db91f1a651d3 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2252,6 +2252,7 @@ static void bfq_add_request(struct request *rq)
 
 	bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
 	bfqq->queued[rq_is_sync(rq)]++;
+
 	/*
 	 * Updating of 'bfqd->queued' is protected by 'bfqd->lock', however, it
 	 * may be read without holding the lock in bfq_has_work().
@@ -2297,9 +2298,9 @@ static void bfq_add_request(struct request *rq)
 		 *   elapsed.
 		 */
 		if (bfqq == bfqd->in_service_queue &&
-		    (bfqd->rq_in_driver == 0 ||
+		    (bfqd->tot_rq_in_driver == 0 ||
 		     (bfqq->last_serv_time_ns > 0 &&
-		      bfqd->rqs_injected && bfqd->rq_in_driver > 0)) &&
+		      bfqd->rqs_injected && bfqd->tot_rq_in_driver > 0)) &&
 		    time_is_before_eq_jiffies(bfqq->decrease_time_jif +
 					      msecs_to_jiffies(10))) {
 			bfqd->last_empty_occupied_ns = ktime_get_ns();
@@ -2323,7 +2324,7 @@ static void bfq_add_request(struct request *rq)
 			 * will be set in case injection is performed
 			 * on bfqq before rq is completed).
 			 */
-			if (bfqd->rq_in_driver == 0)
+			if (bfqd->tot_rq_in_driver == 0)
 				bfqd->rqs_injected = false;
 		}
 	}
@@ -2421,15 +2422,18 @@ static sector_t get_sdist(sector_t last_pos, struct request *rq)
 static void bfq_activate_request(struct request_queue *q, struct request *rq)
 {
 	struct bfq_data *bfqd = q->elevator->elevator_data;
+	unsigned int act_idx = bfq_actuator_index(bfqd, rq->bio);
 
-	bfqd->rq_in_driver++;
+	bfqd->tot_rq_in_driver++;
+	bfqd->rq_in_driver[act_idx]++;
 }
 
 static void bfq_deactivate_request(struct request_queue *q, struct request *rq)
 {
 	struct bfq_data *bfqd = q->elevator->elevator_data;
 
-	bfqd->rq_in_driver--;
+	bfqd->tot_rq_in_driver--;
+	bfqd->rq_in_driver[bfq_actuator_index(bfqd, rq->bio)]--;
 }
 #endif
 
@@ -2703,11 +2707,14 @@ void bfq_end_wr_async_queues(struct bfq_data *bfqd,
 static void bfq_end_wr(struct bfq_data *bfqd)
 {
 	struct bfq_queue *bfqq;
+	int i;
 
 	spin_lock_irq(&bfqd->lock);
 
-	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
-		bfq_bfqq_end_wr(bfqq);
+	for (i = 0; i < bfqd->num_actuators; i++) {
+		list_for_each_entry(bfqq, &bfqd->active_list[i], bfqq_list)
+			bfq_bfqq_end_wr(bfqq);
+	}
 	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
 		bfq_bfqq_end_wr(bfqq);
 	bfq_end_wr_async(bfqd);
@@ -3651,13 +3658,13 @@ static void bfq_update_peak_rate(struct bfq_data *bfqd, struct request *rq)
 	 * - start a new observation interval with this dispatch
 	 */
 	if (now_ns - bfqd->last_dispatch > 100*NSEC_PER_MSEC &&
-	    bfqd->rq_in_driver == 0)
+	    bfqd->tot_rq_in_driver == 0)
 		goto update_rate_and_reset;
 
 	/* Update sampling information */
 	bfqd->peak_rate_samples++;
 
-	if ((bfqd->rq_in_driver > 0 ||
+	if ((bfqd->tot_rq_in_driver > 0 ||
 		now_ns - bfqd->last_completion < BFQ_MIN_TT)
 	    && !BFQ_RQ_SEEKY(bfqd, bfqd->last_position, rq))
 		bfqd->sequential_samples++;
@@ -3924,7 +3931,7 @@ static bool idling_needed_for_service_guarantees(struct bfq_data *bfqd,
 	return (bfqq->wr_coeff > 1 &&
 		(bfqd->wr_busy_queues <
 		 tot_busy_queues ||
-		 bfqd->rq_in_driver >=
+		 bfqd->tot_rq_in_driver >=
 		 bfqq->dispatched + 4)) ||
 		bfq_asymmetric_scenario(bfqd, bfqq) ||
 		tot_busy_queues == 1;
@@ -4696,6 +4703,7 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
 {
 	struct bfq_queue *bfqq, *in_serv_bfqq = bfqd->in_service_queue;
 	unsigned int limit = in_serv_bfqq->inject_limit;
+	int i;
 	/*
 	 * If
 	 * - bfqq is not weight-raised and therefore does not carry
@@ -4727,7 +4735,7 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
 		)
 		limit = 1;
 
-	if (bfqd->rq_in_driver >= limit)
+	if (bfqd->tot_rq_in_driver >= limit)
 		return NULL;
 
 	/*
@@ -4742,11 +4750,12 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
 	 *   (and re-added only if it gets new requests, but then it
 	 *   is assigned again enough budget for its new backlog).
 	 */
-	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
-		if (!RB_EMPTY_ROOT(&bfqq->sort_list) &&
-		    (in_serv_always_inject || bfqq->wr_coeff > 1) &&
-		    bfq_serv_to_charge(bfqq->next_rq, bfqq) <=
-		    bfq_bfqq_budget_left(bfqq)) {
+	for (i = 0; i < bfqd->num_actuators; i++) {
+		list_for_each_entry(bfqq, &bfqd->active_list[i], bfqq_list)
+			if (!RB_EMPTY_ROOT(&bfqq->sort_list) &&
+				(in_serv_always_inject || bfqq->wr_coeff > 1) &&
+				bfq_serv_to_charge(bfqq->next_rq, bfqq) <=
+				bfq_bfqq_budget_left(bfqq)) {
 			/*
 			 * Allow for only one large in-flight request
 			 * on non-rotational devices, for the
@@ -4771,22 +4780,69 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
 			else
 				limit = in_serv_bfqq->inject_limit;
 
-			if (bfqd->rq_in_driver < limit) {
+			if (bfqd->tot_rq_in_driver < limit) {
 				bfqd->rqs_injected = true;
 				return bfqq;
 			}
 		}
+	}
+
+	return NULL;
+}
+
+static struct bfq_queue *
+bfq_find_active_bfqq_for_actuator(struct bfq_data *bfqd,
+						    int idx)
+{
+	struct bfq_queue *bfqq = NULL;
+
+	if (bfqd->in_service_queue &&
+	    bfqd->in_service_queue->actuator_idx == idx)
+		return bfqd->in_service_queue;
+
+	list_for_each_entry(bfqq, &bfqd->active_list[idx], bfqq_list) {
+		if (!RB_EMPTY_ROOT(&bfqq->sort_list) &&
+			bfq_serv_to_charge(bfqq->next_rq, bfqq) <=
+				bfq_bfqq_budget_left(bfqq)) {
+			return bfqq;
+		}
+	}
 
 	return NULL;
 }
 
+/*
+ * Perform a linear scan of each actuator, until an actuator is found
+ * for which the following two conditions hold: the load of the
+ * actuator is below the threshold (see comments on actuator_load_threshold
+ * for details), and there is a queue that contains I/O for that
+ * actuator. On success, return that queue.
+ */
+static struct bfq_queue *
+bfq_find_bfqq_for_underused_actuator(struct bfq_data *bfqd)
+{
+	int i;
+
+	for (i = 0 ; i < bfqd->num_actuators; i++)
+		if (bfqd->rq_in_driver[i] < bfqd->actuator_load_threshold) {
+			struct bfq_queue *bfqq =
+				bfq_find_active_bfqq_for_actuator(bfqd, i);
+
+			if (bfqq)
+				return bfqq;
+		}
+
+	return NULL;
+}
+
+
 /*
  * Select a queue for service.  If we have a current queue in service,
  * check whether to continue servicing it, or retrieve and set a new one.
  */
 static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
 {
-	struct bfq_queue *bfqq;
+	struct bfq_queue *bfqq, *inject_bfqq;
 	struct request *next_rq;
 	enum bfqq_expiration reason = BFQQE_BUDGET_TIMEOUT;
 
@@ -4808,6 +4864,15 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
 		goto expire;
 
 check_queue:
+	/*
+	 *  If some actuator is underutilized, but the in-service
+	 *  queue does not contain I/O for that actuator, then try to
+	 *  inject I/O for that actuator.
+	 */
+	inject_bfqq = bfq_find_bfqq_for_underused_actuator(bfqd);
+	if (inject_bfqq && inject_bfqq != bfqq)
+		return inject_bfqq;
+
 	/*
 	 * This loop is rarely executed more than once. Even when it
 	 * happens, it is much more convenient to re-execute this loop
@@ -5163,11 +5228,11 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 
 		/*
 		 * We exploit the bfq_finish_requeue_request hook to
-		 * decrement rq_in_driver, but
+		 * decrement tot_rq_in_driver, but
 		 * bfq_finish_requeue_request will not be invoked on
 		 * this request. So, to avoid unbalance, just start
-		 * this request, without incrementing rq_in_driver. As
-		 * a negative consequence, rq_in_driver is deceptively
+		 * this request, without incrementing tot_rq_in_driver. As
+		 * a negative consequence, tot_rq_in_driver is deceptively
 		 * lower than it should be while this request is in
 		 * service. This may cause bfq_schedule_dispatch to be
 		 * invoked uselessly.
@@ -5176,7 +5241,7 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 		 * bfq_finish_requeue_request hook, if defined, is
 		 * probably invoked also on this request. So, by
 		 * exploiting this hook, we could 1) increment
-		 * rq_in_driver here, and 2) decrement it in
+		 * tot_rq_in_driver here, and 2) decrement it in
 		 * bfq_finish_requeue_request. Such a solution would
 		 * let the value of the counter be always accurate,
 		 * but it would entail using an extra interface
@@ -5205,7 +5270,7 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	 * Of course, serving one request at a time may cause loss of
 	 * throughput.
 	 */
-	if (bfqd->strict_guarantees && bfqd->rq_in_driver > 0)
+	if (bfqd->strict_guarantees && bfqd->tot_rq_in_driver > 0)
 		goto exit;
 
 	bfqq = bfq_select_queue(bfqd);
@@ -5216,7 +5281,8 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 
 	if (rq) {
 inc_in_driver_start_rq:
-		bfqd->rq_in_driver++;
+		bfqd->rq_in_driver[bfqq->actuator_idx]++;
+		bfqd->tot_rq_in_driver++;
 start_rq:
 		rq->rq_flags |= RQF_STARTED;
 	}
@@ -6289,7 +6355,7 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
 	struct bfq_queue *bfqq = bfqd->in_service_queue;
 
 	bfqd->max_rq_in_driver = max_t(int, bfqd->max_rq_in_driver,
-				       bfqd->rq_in_driver);
+				       bfqd->tot_rq_in_driver);
 
 	if (bfqd->hw_tag == 1)
 		return;
@@ -6300,7 +6366,7 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
 	 * sum is not exact, as it's not taking into account deactivated
 	 * requests.
 	 */
-	if (bfqd->rq_in_driver + bfqd->queued <= BFQ_HW_QUEUE_THRESHOLD)
+	if (bfqd->tot_rq_in_driver + bfqd->queued <= BFQ_HW_QUEUE_THRESHOLD)
 		return;
 
 	/*
@@ -6311,7 +6377,7 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
 	if (bfqq && bfq_bfqq_has_short_ttime(bfqq) &&
 	    bfqq->dispatched + bfqq->queued[0] + bfqq->queued[1] <
 	    BFQ_HW_QUEUE_THRESHOLD &&
-	    bfqd->rq_in_driver < BFQ_HW_QUEUE_THRESHOLD)
+	    bfqd->tot_rq_in_driver < BFQ_HW_QUEUE_THRESHOLD)
 		return;
 
 	if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
@@ -6332,7 +6398,8 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
 
 	bfq_update_hw_tag(bfqd);
 
-	bfqd->rq_in_driver--;
+	bfqd->rq_in_driver[bfqq->actuator_idx]--;
+	bfqd->tot_rq_in_driver--;
 	bfqq->dispatched--;
 
 	if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
@@ -6451,7 +6518,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
 					BFQQE_NO_MORE_REQUESTS);
 	}
 
-	if (!bfqd->rq_in_driver)
+	if (!bfqd->tot_rq_in_driver)
 		bfq_schedule_dispatch(bfqd);
 }
 
@@ -6582,13 +6649,13 @@ static void bfq_update_inject_limit(struct bfq_data *bfqd,
 	 * conditions to do it, or we can lower the last base value
 	 * computed.
 	 *
-	 * NOTE: (bfqd->rq_in_driver == 1) means that there is no I/O
+	 * NOTE: (bfqd->tot_rq_in_driver == 1) means that there is no I/O
 	 * request in flight, because this function is in the code
 	 * path that handles the completion of a request of bfqq, and,
 	 * in particular, this function is executed before
-	 * bfqd->rq_in_driver is decremented in such a code path.
+	 * bfqd->tot_rq_in_driver is decremented in such a code path.
 	 */
-	if ((bfqq->last_serv_time_ns == 0 && bfqd->rq_in_driver == 1) ||
+	if ((bfqq->last_serv_time_ns == 0 && bfqd->tot_rq_in_driver == 1) ||
 	    tot_time_ns < bfqq->last_serv_time_ns) {
 		if (bfqq->last_serv_time_ns == 0) {
 			/*
@@ -6598,7 +6665,7 @@ static void bfq_update_inject_limit(struct bfq_data *bfqd,
 			bfqq->inject_limit = max_t(unsigned int, 1, old_limit);
 		}
 		bfqq->last_serv_time_ns = tot_time_ns;
-	} else if (!bfqd->rqs_injected && bfqd->rq_in_driver == 1)
+	} else if (!bfqd->rqs_injected && bfqd->tot_rq_in_driver == 1)
 		/*
 		 * No I/O injected and no request still in service in
 		 * the drive: these are the exact conditions for
@@ -7239,7 +7306,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	bfqd->queue_weights_tree = RB_ROOT_CACHED;
 	bfqd->num_groups_with_pending_reqs = 0;
 
-	INIT_LIST_HEAD(&bfqd->active_list);
+	INIT_LIST_HEAD(&bfqd->active_list[0]);
+	INIT_LIST_HEAD(&bfqd->active_list[1]);
 	INIT_LIST_HEAD(&bfqd->idle_list);
 	INIT_HLIST_HEAD(&bfqd->burst_list);
 
@@ -7284,6 +7352,9 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 		ref_wr_duration[blk_queue_nonrot(bfqd->queue)];
 	bfqd->peak_rate = ref_rate[blk_queue_nonrot(bfqd->queue)] * 2 / 3;
 
+	/* see comments on the definition of next field inside bfq_data */
+	bfqd->actuator_load_threshold = 4;
+
 	spin_lock_init(&bfqd->lock);
 
 	/*
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 90130a893c8f..adb3ba6a9d90 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -586,7 +586,12 @@ struct bfq_data {
 	/* number of queued requests */
 	int queued;
 	/* number of requests dispatched and waiting for completion */
-	int rq_in_driver;
+	int tot_rq_in_driver;
+	/*
+	 * number of requests dispatched and waiting for completion
+	 * for each actuator
+	 */
+	int rq_in_driver[BFQ_MAX_ACTUATORS];
 
 	/* true if the device is non rotational and performs queueing */
 	bool nonrot_with_queueing;
@@ -680,8 +685,13 @@ struct bfq_data {
 	/* maximum budget allotted to a bfq_queue before rescheduling */
 	int bfq_max_budget;
 
-	/* list of all the bfq_queues active on the device */
-	struct list_head active_list;
+	/*
+	 * List of all the bfq_queues active for a specific actuator
+	 * on the device. Keeping active queues separate on a
+	 * per-actuator basis helps implementing per-actuator
+	 * injection more efficiently.
+	 */
+	struct list_head active_list[BFQ_MAX_ACTUATORS];
 	/* list of all the bfq_queues idle on the device */
 	struct list_head idle_list;
 
@@ -816,6 +826,29 @@ struct bfq_data {
 	 * in this device.
 	 */
 	struct blk_independent_access_range ia_ranges[BFQ_MAX_ACTUATORS];
+
+	/*
+	 * If the number of I/O requests queued in the device for a
+	 * given actuator is below next threshold, then the actuator
+	 * is deemed as underutilized. If this condition is found to
+	 * hold for some actuator upon a dispatch, but (i) the
+	 * in-service queue does not contain I/O for that actuator,
+	 * while (ii) some other queue does contain I/O for that
+	 * actuator, then the head I/O request of the latter queue is
+	 * returned (injected), instead of the head request of the
+	 * currently in-service queue.
+	 *
+	 * We set the threshold, empirically, to the minimum possible
+	 * value for which an actuator is fully utilized, or close to
+	 * being fully utilized. By doing so, injected I/O 'steals' as
+	 * few drive-queue slots as possible from the in-service
+	 * queue. This reduces as much as possible the probability
+	 * that the service of I/O from the in-service bfq_queue gets
+	 * delayed because of slot exhaustion, i.e., because all the
+	 * slots of the drive queue are filled with I/O injected from
+	 * other queues (NCQ provides for 32 slots).
+	 */
+	unsigned int actuator_load_threshold;
 };
 
 enum bfqq_state_flags {
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 8fc3da4c23bb..ec0273e2cd07 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -477,7 +477,7 @@ static void bfq_active_insert(struct bfq_service_tree *st,
 	bfqd = (struct bfq_data *)bfqg->bfqd;
 #endif
 	if (bfqq)
-		list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
+		list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list[bfqq->actuator_idx]);
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 	if (bfqg != bfqd->root_group)
 		bfqg->active_entities++;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V6 8/8] block, bfq: balance I/O injection among underutilized actuators
  2022-11-03 16:26 [PATCH V6 0/8] block, bfq: extend bfq to support multi-actuator drives Paolo Valente
                   ` (6 preceding siblings ...)
  2022-11-03 16:26 ` [PATCH V6 7/8] block, bfq: inject I/O to underutilized actuators Paolo Valente
@ 2022-11-03 16:26 ` Paolo Valente
  2022-11-10 15:25   ` Arie van der Hoeven
  7 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-11-03 16:26 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Davide Zini, Paolo Valente

From: Davide Zini <davidezini2@gmail.com>

Upon the invocation of its dispatch function, BFQ returns the next I/O
request of the in-service bfq_queue, unless some exception holds. One
such exception is that there is some underutilized actuator, different
from the actuator for which the in-service queue contains I/O, and
that some other bfq_queue happens to contain I/O for such an
actuator. In this case, the next I/O request of the latter bfq_queue,
and not of the in-service bfq_queue, is returned (I/O is injected from
that bfq_queue). To find such an actuator, a linear scan, in
increasing index order, is performed among actuators.

Performing a linear scan entails a prioritization among actuators: an
underutilized actuator may be considered for injection only if all
actuators with a lower index are currently fully utilized, or if there
is no pending I/O for any lower-index actuator that happens to be
underutilized.

This commit breaks this prioritization and tends to distribute
injection uniformly across actuators. This is obtained by adding the
following condition to the linear scan: even if an actuator A is
underutilized, A is skipped if its load is not lower than that of
the next actuator.
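
For example (illustrative loads, not from the patch): with
actuator_load_threshold equal to 4 and in-flight counts

	rq_in_driver[0] = 3;	/* underutilized, but more loaded than */
	rq_in_driver[1] = 1;	/* the next actuator */

a plain linear scan would always pick actuator 0 first, whereas, with
the extra condition, actuator 0 is skipped (its load is not lower
than that of actuator 1) and injection is attempted for actuator 1,
evening out the two loads over time.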

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Davide Zini <davidezini2@gmail.com>
---
 block/bfq-iosched.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index db91f1a651d3..c568a5a112a7 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4813,10 +4813,16 @@ bfq_find_active_bfqq_for_actuator(struct bfq_data *bfqd,
 
 /*
  * Perform a linear scan of each actuator, until an actuator is found
- * for which the following two conditions hold: the load of the
- * actuator is below the threshold (see comments on actuator_load_threshold
- * for details), and there is a queue that contains I/O for that
- * actuator. On success, return that queue.
+ * for which the following three conditions hold: the load of the
+ * actuator is below the threshold (see comments on
+ * actuator_load_threshold for details) and lower than that of the
+ * next actuator (comments on this extra condition below), and there
+ * is a queue that contains I/O for that actuator. On success, return
+ * that queue.
+ *
+ * Performing a plain linear scan entails a prioritization among
+ * actuators. The extra condition above breaks this prioritization and
+ * tends to distribute injection uniformly across actuators.
  */
 static struct bfq_queue *
 bfq_find_bfqq_for_underused_actuator(struct bfq_data *bfqd)
@@ -4824,7 +4830,9 @@ bfq_find_bfqq_for_underused_actuator(struct bfq_data *bfqd)
 	int i;
 
 	for (i = 0 ; i < bfqd->num_actuators; i++)
-		if (bfqd->rq_in_driver[i] < bfqd->actuator_load_threshold) {
+		if (bfqd->rq_in_driver[i] < bfqd->actuator_load_threshold &&
+		    (i == bfqd->num_actuators - 1 ||
+		     bfqd->rq_in_driver[i] < bfqd->rq_in_driver[i+1])) {
 			struct bfq_queue *bfqq =
 				bfq_find_active_bfqq_for_actuator(bfqd, i);
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 8/8] block, bfq: balance I/O injection among underutilized actuators
  2022-11-03 16:26 ` [PATCH V6 8/8] block, bfq: balance I/O injection among " Paolo Valente
@ 2022-11-10 15:25   ` Arie van der Hoeven
  2022-11-20  7:29     ` Paolo Valente
  0 siblings, 1 reply; 31+ messages in thread
From: Arie van der Hoeven @ 2022-11-10 15:25 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, Rory Chen, Davide Zini

Checking in on this series and what we can communicate to partners as to potential integration into Linux.  Is 6.1 viable?

We have at least one big partner whose launch schedule is gated on these changes.

Regards,  --Arie

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 8/8] block, bfq: balance I/O injection among underutilized actuators
  2022-11-10 15:25   ` Arie van der Hoeven
@ 2022-11-20  7:29     ` Paolo Valente
  2022-11-20 22:06       ` Jens Axboe
  0 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-11-20  7:29 UTC (permalink / raw)
  To: Arie van der Hoeven
  Cc: Jens Axboe, linux-block, linux-kernel, Rory Chen, Davide Zini,
	Damien Le Moal



> On 10 Nov 2022, at 16:25, Arie van der Hoeven <arie.vanderhoeven@seagate.com> wrote:
> 
> Checking in on this series and what we can communicate to partners as to potential integration into Linux.  Is 6.1 viable?

Hi Arie,
definitely too late for 6.1.

Jens, could you enqueue this for 6.2?  Unless Damien, or anybody else
still sees issues to address.

Thanks,
Paolo


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 8/8] block, bfq: balance I/O injection among underutilized actuators
  2022-11-20  7:29     ` Paolo Valente
@ 2022-11-20 22:06       ` Jens Axboe
  2022-11-20 23:41         ` Damien Le Moal
  0 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2022-11-20 22:06 UTC (permalink / raw)
  To: Paolo Valente, Arie van der Hoeven
  Cc: linux-block, linux-kernel, Rory Chen, Davide Zini, Damien Le Moal

On 11/20/22 12:29 AM, Paolo Valente wrote:
> 
> 
>> On 10 Nov 2022, at 16:25, Arie van der Hoeven <arie.vanderhoeven@seagate.com> wrote:
>>
>> Checking in on this series and what we can communicate to partners as to potential integration into Linux.  Is 6.1 viable?
> 
> Hi Arie,
> definitely too late for 6.1.
> 
> Jens, could you enqueue this for 6.2?  Unless Damien, or anybody else
> still sees issues to address.

Would be nice to get Damien's feedback on the series.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 8/8] block, bfq: balance I/O injection among underutilized actuators
  2022-11-20 22:06       ` Jens Axboe
@ 2022-11-20 23:41         ` Damien Le Moal
  0 siblings, 0 replies; 31+ messages in thread
From: Damien Le Moal @ 2022-11-20 23:41 UTC (permalink / raw)
  To: Jens Axboe, Paolo Valente, Arie van der Hoeven
  Cc: linux-block, linux-kernel, Rory Chen, Davide Zini

On 11/21/22 07:06, Jens Axboe wrote:
> On 11/20/22 12:29 AM, Paolo Valente wrote:
>>
>>
>>> On 10 Nov 2022, at 16:25, Arie van der Hoeven <arie.vanderhoeven@seagate.com> wrote:
>>>
>>> Checking in on this series and what we can communicate to partners as to potential integration into Linux.  Is 6.1 viable?
>>
>> Hi Arie,
>> definitely too late for 6.1.
>>
>> Jens, could you enqueue this for 6.2?  Unless Damien, or anybody else
>> still sees issues to address.
> 
> Would be nice to get Damien's feedback on the series.

I will try to have a look today.

> 

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 1/8] block, bfq: split sync bfq_queues on a per-actuator basis
  2022-11-03 16:26 ` [PATCH V6 1/8] block, bfq: split sync bfq_queues on a per-actuator basis Paolo Valente
@ 2022-11-21  0:18   ` Damien Le Moal
  2022-11-26 16:28     ` Paolo Valente
  2022-12-06  8:04     ` Paolo Valente
  0 siblings, 2 replies; 31+ messages in thread
From: Damien Le Moal @ 2022-11-21  0:18 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Gabriele Felici, Carmine Zaccagnino

On 11/4/22 01:26, Paolo Valente wrote:
> Single-LUN multi-actuator SCSI drives, as well as all multi-actuator
> SATA drives appear as a single device to the I/O subsystem [1].  Yet
> they address commands to different actuators internally, as a function
> of Logical Block Addressing (LBAs). A given sector is reachable by
> only one of the actuators. For example, Seagate’s Serial Advanced
> Technology Attachment (SATA) version contains two actuators and maps
> the lower half of the SATA LBA space to the lower actuator and the
> upper half to the upper actuator.
> 
> Evidently, to fully utilize actuators, no actuator must be left idle
> or underutilized while there is pending I/O for it. The block layer
> must somehow control the load of each actuator individually. This
> commit lays the ground for allowing BFQ to provide such a per-actuator
> control.
> 
> BFQ associates an I/O-request sync bfq_queue with each process doing
> synchronous I/O, or with a group of processes, in case of queue
> merging. Then BFQ serves one bfq_queue at a time. While in service, a
> bfq_queue is emptied in request-position order. Yet the same process,
> or group of processes, may generate I/O for different actuators. In
> this case, different streams of I/O (each for a different actuator)
> get all inserted into the same sync bfq_queue. So there is basically
> no individual control on when each stream is served, i.e., on when the
> I/O requests of the stream are picked from the bfq_queue and
> dispatched to the drive.
> 
> This commit enables BFQ to control the service of each actuator
> individually for synchronous I/O, by simply splitting each sync
> bfq_queue into N queues, one for each actuator. In other words, a sync
> bfq_queue is now associated to a pair (process, actuator). As a
> consequence of this split, the per-queue proportional-share policy
> implemented by BFQ will guarantee that the sync I/O generated for each
> actuator, by each process, receives its fair share of service.
> 
> This is just a preparatory patch. If the I/O of the same process
> happens to be sent to different queues, then each of these queues may
> undergo queue merging. To handle this event, the bfq_io_cq data
> structure must be properly extended. In addition, stable merging must
> be disabled to avoid loss of control on individual actuators. Finally,
> also async queues must be split. These issues are described in detail
> and addressed in next commits. As for this commit, although multiple
> per-process bfq_queues are provided, the I/O of each process or group
> of processes is still sent to only one queue, regardless of the
> actuator the I/O is for. The forwarding to distinct bfq_queues will be
> enabled after addressing the above issues.
> 
> [1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-optimizations-for-multi-actuator-sata-hard-drives/
> 
> Signed-off-by: Gabriele Felici <felicigb@gmail.com>
> Signed-off-by: Carmine Zaccagnino <carmine@carminezacc.com>
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> ---
>  block/bfq-cgroup.c  |  95 ++++++++++++++++------------
>  block/bfq-iosched.c | 151 +++++++++++++++++++++++++++++---------------
>  block/bfq-iosched.h |  51 ++++++++++++---
>  3 files changed, 194 insertions(+), 103 deletions(-)
> 
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index 144bca006463..d243c429d9c0 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -700,6 +700,48 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  	bfq_put_queue(bfqq);
>  }
>  
> +static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
> +			       struct bfq_queue *sync_bfqq,
> +			       struct bfq_io_cq *bic,
> +			       struct bfq_group *bfqg,
> +			       unsigned int act_idx)
> +{
> +	if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
> +		/* We are the only user of this bfqq, just move it */
> +		if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
> +			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);

Nit: add a return here to avoid the "else" and save one tab indent level
for the code below. That will be more pleasant to the eyes :)

> +	} else {
> +		struct bfq_queue *bfqq;
> +
> +		/*
> +		 * The queue was merged to a different queue. Check
> +		 * that the merge chain still belongs to the same
> +		 * cgroup.
> +		 */
> +		for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
> +			if (bfqq->entity.sched_data !=
> +			    &bfqg->sched_data)
> +				break;
> +		if (bfqq) {
> +			/*
> +			 * Some queue changed cgroup so the merge is
> +			 * not valid anymore. We cannot easily just
> +			 * cancel the merge (by clearing new_bfqq) as
> +			 * there may be other processes using this
> +			 * queue and holding refs to all queues below
> +			 * sync_bfqq->new_bfqq. Similarly if the merge
> +			 * already happened, we need to detach from
> +			 * bfqq now so that we cannot merge bio to a
> +			 * request from the old cgroup.
> +			 */
> +			bfq_put_cooperator(sync_bfqq);
> +			bfq_release_process_ref(bfqd, sync_bfqq);
> +			bic_set_bfqq(bic, NULL, 1, act_idx);
> +		}
> +	}
> +}
> +
> +

Double blank line.

>  /**
>   * __bfq_bic_change_cgroup - move @bic to @bfqg.
>   * @bfqd: the queue descriptor.
> @@ -714,53 +756,24 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
>  				     struct bfq_io_cq *bic,
>  				     struct bfq_group *bfqg)
>  {
> -	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
> -	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
>  	struct bfq_entity *entity;
> +	unsigned int act_idx;
>  
> -	if (async_bfqq) {
> -		entity = &async_bfqq->entity;
> -
> -		if (entity->sched_data != &bfqg->sched_data) {
> -			bic_set_bfqq(bic, NULL, 0);
> -			bfq_release_process_ref(bfqd, async_bfqq);
> -		}
> -	}
> +	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
> +		struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0, act_idx);
> +		struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1, act_idx);

The second argument of bic_to_bfqq() is a bool. So it would make more
sense to use false/true instead of 0/1. That said, creating 2 helpers
bic_to_async_bfqq() and bic_to_sync_bfqq() without that second argument
would make the code more readable and less error prone I think.

>  
> -	if (sync_bfqq) {
> -		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
> -			/* We are the only user of this bfqq, just move it */
> -			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
> -				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
> -		} else {
> -			struct bfq_queue *bfqq;
> +		if (async_bfqq) {
> +			entity = &async_bfqq->entity;
>  
> -			/*
> -			 * The queue was merged to a different queue. Check
> -			 * that the merge chain still belongs to the same
> -			 * cgroup.
> -			 */
> -			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
> -				if (bfqq->entity.sched_data !=
> -				    &bfqg->sched_data)
> -					break;
> -			if (bfqq) {
> -				/*
> -				 * Some queue changed cgroup so the merge is
> -				 * not valid anymore. We cannot easily just
> -				 * cancel the merge (by clearing new_bfqq) as
> -				 * there may be other processes using this
> -				 * queue and holding refs to all queues below
> -				 * sync_bfqq->new_bfqq. Similarly if the merge
> -				 * already happened, we need to detach from
> -				 * bfqq now so that we cannot merge bio to a
> -				 * request from the old cgroup.
> -				 */
> -				bfq_put_cooperator(sync_bfqq);
> -				bfq_release_process_ref(bfqd, sync_bfqq);
> -				bic_set_bfqq(bic, NULL, 1);
> +			if (entity->sched_data != &bfqg->sched_data) {

You are referencing the entity variable once here only. So is it really
needed ?

> +				bic_set_bfqq(bic, NULL, 0, act_idx);
> +				bfq_release_process_ref(bfqd, async_bfqq);
>  			}
>  		}
> +
> +		if (sync_bfqq)
> +			bfq_sync_bfqq_move(bfqd, sync_bfqq, bic, bfqg, act_idx);
>  	}
>  
>  	return bfqg;
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 7ea427817f7f..5c69394bbb65 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -377,14 +377,19 @@ static const unsigned long bfq_late_stable_merging = 600;
>  #define RQ_BIC(rq)		((struct bfq_io_cq *)((rq)->elv.priv[0]))
>  #define RQ_BFQQ(rq)		((rq)->elv.priv[1])
>  
> -struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync)
> +struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic,
> +			      bool is_sync,
> +			      unsigned int actuator_idx)

Why the line split for the last argument ?

>  {
> -	return bic->bfqq[is_sync];
> +	return bic->bfqq[is_sync][actuator_idx];

See comment above. This is really broken I think. You should not use a
bool as an array index. Write it as:

	if (is_sync)
		return bic->bfqq[1][actuator_idx];
	return bic->bfqq[0][actuator_idx];

Or as suggested, create 2 helpers.
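
E.g. something like this (untested sketch):

	static inline struct bfq_queue *
	bic_to_sync_bfqq(struct bfq_io_cq *bic, unsigned int actuator_idx)
	{
		return bic->bfqq[1][actuator_idx];
	}

	static inline struct bfq_queue *
	bic_to_async_bfqq(struct bfq_io_cq *bic, unsigned int actuator_idx)
	{
		return bic->bfqq[0][actuator_idx];
	}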

>  }
>  
>  static void bfq_put_stable_ref(struct bfq_queue *bfqq);
>  
> -void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
> +void bic_set_bfqq(struct bfq_io_cq *bic,
> +		  struct bfq_queue *bfqq,
> +		  bool is_sync,
> +		  unsigned int actuator_idx)
>  {
>  	/*
>  	 * If bfqq != NULL, then a non-stable queue merge between
> @@ -399,7 +404,7 @@ void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
>  	 * we cancel the stable merge if
>  	 * bic->stable_merge_bfqq == bfqq.
>  	 */
> -	bic->bfqq[is_sync] = bfqq;
> +	bic->bfqq[is_sync][actuator_idx] = bfqq;

Same comment as above. Using a bool as an array index feels very wrong to
me. This seems very fragile/dependent on what the compiler does with the
bool type.

>  
>  	if (bfqq && bic->stable_merge_bfqq == bfqq) {
>  		/*
> @@ -672,9 +677,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
>  {
>  	struct bfq_data *bfqd = data->q->elevator->elevator_data;
>  	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
> -	struct bfq_queue *bfqq = bic ? bic_to_bfqq(bic, op_is_sync(opf)) : NULL;

Given that you are repeating this same pattern many times, the test for
bic != NULL should go into the bic_to_bfqq() helper.

>  	int depth;
>  	unsigned limit = data->q->nr_requests;
> +	unsigned int act_idx;
>  
>  	/* Sync reads have full depth available */
>  	if (op_is_sync(opf) && !op_is_write(opf)) {
> @@ -684,14 +689,21 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
>  		limit = (limit * depth) >> bfqd->full_depth_shift;
>  	}
>  
> -	/*
> -	 * Does queue (or any parent entity) exceed number of requests that
> -	 * should be available to it? Heavily limit depth so that it cannot
> -	 * consume more available requests and thus starve other entities.
> -	 */
> -	if (bfqq && bfqq_request_over_limit(bfqq, limit))
> -		depth = 1;
> +	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
> +		struct bfq_queue *bfqq =
> +			bic ? bic_to_bfqq(bic, op_is_sync(opf), act_idx) : NULL;
>  
> +		/*
> +		 * Does queue (or any parent entity) exceed number of
> +		 * requests that should be available to it? Heavily
> +		 * limit depth so that it cannot consume more
> +		 * available requests and thus starve other entities.
> +		 */
> +		if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
> +			depth = 1;
> +			break;
> +		}
> +	}
>  	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
>  		__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
>  	if (depth)
> @@ -2142,7 +2154,7 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  	 * We reset waker detection logic also if too much time has passed
>   	 * since the first detection. If wakeups are rare, pointless idling
>  	 * doesn't hurt throughput that much. The condition below makes sure
> -	 * we do not uselessly idle blocking waker in more than 1/64 cases. 
> +	 * we do not uselessly idle blocking waker in more than 1/64 cases.
>  	 */
>  	if (bfqd->last_completed_rq_bfqq !=
>  	    bfqq->tentative_waker_bfqq ||
> @@ -2454,6 +2466,16 @@ static void bfq_remove_request(struct request_queue *q,
>  
>  }
>  
> +/* get the index of the actuator that will serve bio */

Generally, function comments use the multi-line comment style...

/*
 * Get the index of the actuator that will serve @bio.
 */

> +static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
> +{
> +	/*
> +	 * Multi-actuator support not complete yet, so always return 0
> +	 * for the moment.
> +	 */

Nit: You did comment about this in the commit message. So I do not think
having this comment here helps.

Note that you could have a prep patch before this patch that gathers the
actuator information for the device, and only does that. So this function
here could actually have meat instead of being empty. But that may make
the entire series more complicated, so not a big deal.

> +	return 0;
> +}
> +
>  static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>  		unsigned int nr_segs)
>  {
> @@ -2478,7 +2500,8 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>  		 */
>  		bfq_bic_update_cgroup(bic, bio);
>  
> -		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
> +		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf),
> +					     bfq_actuator_index(bfqd, bio));
>  	} else {
>  		bfqd->bio_bfqq = NULL;
>  	}
> @@ -3174,7 +3197,7 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
>  	/*
>  	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
>  	 */
> -	bic_set_bfqq(bic, new_bfqq, 1);
> +	bic_set_bfqq(bic, new_bfqq, 1, bfqq->actuator_idx);
>  	bfq_mark_bfqq_coop(new_bfqq);
>  	/*
>  	 * new_bfqq now belongs to at least two bics (it is a shared queue):
> @@ -4808,11 +4831,12 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
>  	 */
>  	if (bfq_bfqq_wait_request(bfqq) ||
>  	    (bfqq->dispatched != 0 && bfq_better_to_idle(bfqq))) {
> +		unsigned int act_idx = bfqq->actuator_idx;
>  		struct bfq_queue *async_bfqq =
> -			bfqq->bic && bfqq->bic->bfqq[0] &&
> -			bfq_bfqq_busy(bfqq->bic->bfqq[0]) &&
> -			bfqq->bic->bfqq[0]->next_rq ?
> -			bfqq->bic->bfqq[0] : NULL;
> +			bfqq->bic && bfqq->bic->bfqq[0][act_idx] &&
> +			bfq_bfqq_busy(bfqq->bic->bfqq[0][act_idx]) &&
> +			bfqq->bic->bfqq[0][act_idx]->next_rq ?
> +			bfqq->bic->bfqq[0][act_idx] : NULL;

Missing a blank line before this hunk, after the declaration. And I
personally think that this is totally unreadable. Using a plain if/else
would improve that.
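
For instance (a sketch, untested):

	struct bfq_queue *async_bfqq = NULL;

	if (bfqq->bic && bfqq->bic->bfqq[0][act_idx] &&
	    bfq_bfqq_busy(bfqq->bic->bfqq[0][act_idx]) &&
	    bfqq->bic->bfqq[0][act_idx]->next_rq)
		async_bfqq = bfqq->bic->bfqq[0][act_idx];

A local pointer for bfqq->bic->bfqq[0][act_idx] would shorten the
condition further.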

>  		struct bfq_queue *blocked_bfqq =
>  			!hlist_empty(&bfqq->woken_list) ?
>  			container_of(bfqq->woken_list.first,
> @@ -4904,7 +4928,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
>  		    icq_to_bic(async_bfqq->next_rq->elv.icq) == bfqq->bic &&
>  		    bfq_serv_to_charge(async_bfqq->next_rq, async_bfqq) <=
>  		    bfq_bfqq_budget_left(async_bfqq))
> -			bfqq = bfqq->bic->bfqq[0];
> +			bfqq = bfqq->bic->bfqq[0][act_idx];
>  		else if (bfqq->waker_bfqq &&
>  			   bfq_bfqq_busy(bfqq->waker_bfqq) &&
>  			   bfqq->waker_bfqq->next_rq &&
> @@ -5365,49 +5389,59 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
>  	bfq_release_process_ref(bfqd, bfqq);
>  }
>  
> -static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
> +static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic,
> +			      bool is_sync,
> +			      unsigned int actuator_idx)

Why the line split here ?

>  {
> -	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
> +	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, actuator_idx);
>  	struct bfq_data *bfqd;
>  
>  	if (bfqq)
>  		bfqd = bfqq->bfqd; /* NULL if scheduler already exited */
>  
>  	if (bfqq && bfqd) {
> -		unsigned long flags;
> -
> -		spin_lock_irqsave(&bfqd->lock, flags);
>  		bfqq->bic = NULL;
>  		bfq_exit_bfqq(bfqd, bfqq);
> -		bic_set_bfqq(bic, NULL, is_sync);
> -		spin_unlock_irqrestore(&bfqd->lock, flags);
> +		bic_set_bfqq(bic, NULL, is_sync, actuator_idx);
>  	}
>  }
>  
>  static void bfq_exit_icq(struct io_cq *icq)
>  {
>  	struct bfq_io_cq *bic = icq_to_bic(icq);
> +	struct bfq_data *bfqd = bic_to_bfqd(bic);
> +	unsigned long flags;
> +	unsigned int act_idx;
> +	unsigned int num_actuators;
>  
> -	if (bic->stable_merge_bfqq) {
> -		struct bfq_data *bfqd = bic->stable_merge_bfqq->bfqd;
> -
> +	/*
> +	 * bfqd is NULL if scheduler already exited, and in that case
> +	 * this is the last time these queues are accessed.
> +	 */
> +	if (bfqd) {
> +		spin_lock_irqsave(&bfqd->lock, flags);
> +		num_actuators = bfqd->num_actuators;
> +	} else {
>  		/*
> -		 * bfqd is NULL if scheduler already exited, and in
> -		 * that case this is the last time bfqq is accessed.
> +		 * bfqd->num_actuators not available any longer, cycle
> +		 * over all possible per-actuator bfqqs in next
> +		 * loop. We rely on bic being zeroed on creation, and
> +		 * therefore on its unused per-actuator fields being
> +		 * NULL.
>  		 */
> -		if (bfqd) {
> -			unsigned long flags;
> +		num_actuators = BFQ_MAX_ACTUATORS;
> +	}

Why not simply initialize num_actuators to BFQ_MAX_ACTUATORS when it is
declared and drop that "else" above ?
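
That is, something like (sketch):

	struct bfq_io_cq *bic = icq_to_bic(icq);
	struct bfq_data *bfqd = bic_to_bfqd(bic);
	unsigned long flags;
	unsigned int act_idx;
	unsigned int num_actuators = BFQ_MAX_ACTUATORS;

	/*
	 * If bfqd is NULL, the scheduler already exited; keep the
	 * BFQ_MAX_ACTUATORS default and rely on the bic being zeroed
	 * on creation, so unused per-actuator bfqqs are NULL.
	 */
	if (bfqd) {
		spin_lock_irqsave(&bfqd->lock, flags);
		num_actuators = bfqd->num_actuators;
	}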

>  
> -			spin_lock_irqsave(&bfqd->lock, flags);
> -			bfq_put_stable_ref(bic->stable_merge_bfqq);
> -			spin_unlock_irqrestore(&bfqd->lock, flags);
> -		} else {
> -			bfq_put_stable_ref(bic->stable_merge_bfqq);
> -		}
> +	if (bic->stable_merge_bfqq)
> +		bfq_put_stable_ref(bic->stable_merge_bfqq);
> +
> +	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
> +		bfq_exit_icq_bfqq(bic, true, act_idx);
> +		bfq_exit_icq_bfqq(bic, false, act_idx);
>  	}
>  
> -	bfq_exit_icq_bfqq(bic, true);
> -	bfq_exit_icq_bfqq(bic, false);
> +	if (bfqd)
> +		spin_unlock_irqrestore(&bfqd->lock, flags);
>  }
>  
>  /*
> @@ -5484,23 +5518,25 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
>  
>  	bic->ioprio = ioprio;
>  
> -	bfqq = bic_to_bfqq(bic, false);
> +	bfqq = bic_to_bfqq(bic, false, bfq_actuator_index(bfqd, bio));
>  	if (bfqq) {
>  		bfq_release_process_ref(bfqd, bfqq);
>  		bfqq = bfq_get_queue(bfqd, bio, false, bic, true);
> -		bic_set_bfqq(bic, bfqq, false);
> +		bic_set_bfqq(bic, bfqq, false, bfq_actuator_index(bfqd, bio));
>  	}
>  
> -	bfqq = bic_to_bfqq(bic, true);
> +	bfqq = bic_to_bfqq(bic, true, bfq_actuator_index(bfqd, bio));
>  	if (bfqq)
>  		bfq_set_next_ioprio_data(bfqq, bic);
>  }
>  
>  static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
> -			  struct bfq_io_cq *bic, pid_t pid, int is_sync)
> +			  struct bfq_io_cq *bic, pid_t pid, int is_sync,
> +			  unsigned int act_idx)
>  {
>  	u64 now_ns = ktime_get_ns();
>  
> +	bfqq->actuator_idx = act_idx;
>  	RB_CLEAR_NODE(&bfqq->entity.rb_node);
>  	INIT_LIST_HEAD(&bfqq->fifo);
>  	INIT_HLIST_NODE(&bfqq->burst_list_node);
> @@ -5739,6 +5775,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
>  	struct bfq_group *bfqg;
>  
>  	bfqg = bfq_bio_bfqg(bfqd, bio);
> +

Spurious whitespace change.

>  	if (!is_sync) {
>  		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
>  						  ioprio);
> @@ -5753,7 +5790,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
>  
>  	if (bfqq) {
>  		bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
> -			      is_sync);
> +			      is_sync, bfq_actuator_index(bfqd, bio));
>  		bfq_init_entity(&bfqq->entity, bfqg);
>  		bfq_log_bfqq(bfqd, bfqq, "allocated");
>  	} else {
> @@ -6068,7 +6105,8 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
>  		 * then complete the merge and redirect it to
>  		 * new_bfqq.
>  		 */
> -		if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
> +		if (bic_to_bfqq(RQ_BIC(rq), 1,
> +				bfq_actuator_index(bfqd, rq->bio)) == bfqq)
>  			bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
>  					bfqq, new_bfqq);
>  
> @@ -6622,7 +6660,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
>  		return bfqq;
>  	}
>  
> -	bic_set_bfqq(bic, NULL, 1);
> +	bic_set_bfqq(bic, NULL, 1, bfqq->actuator_idx);
>  
>  	bfq_put_cooperator(bfqq);
>  
> @@ -6636,7 +6674,8 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>  						   bool split, bool is_sync,
>  						   bool *new_queue)
>  {
> -	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
> +	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
> +	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
>  
>  	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
>  		return bfqq;
> @@ -6648,7 +6687,7 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>  		bfq_put_queue(bfqq);
>  	bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, split);
>  
> -	bic_set_bfqq(bic, bfqq, is_sync);
> +	bic_set_bfqq(bic, bfqq, is_sync, act_idx);
>  	if (split && is_sync) {
>  		if ((bic->was_in_burst_list && bfqd->large_burst) ||
>  		    bic->saved_in_large_burst)
> @@ -7090,8 +7129,10 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>  	 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
>  	 * Grab a permanent reference to it, so that the normal code flow
>  	 * will not attempt to free it.
> +	 * Set zero as actuator index: we will pretend that
> +	 * all I/O requests are for the same actuator.
>  	 */
> -	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
> +	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0, 0);
>  	bfqd->oom_bfqq.ref++;
>  	bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
>  	bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
> @@ -7110,6 +7151,12 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>  
>  	bfqd->queue = q;
>  
> +	/*
> +	 * Multi-actuator support not complete yet, default to single
> +	 * actuator for the moment.
> +	 */
> +	bfqd->num_actuators = 1;

This should be the default anyway, so is that comment really necessary ?

> +
>  	INIT_LIST_HEAD(&bfqd->dispatch);
>  
>  	hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC,
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index 71f721670ab6..bfcbd8ea9000 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -33,6 +33,14 @@
>   */
>  #define BFQ_SOFTRT_WEIGHT_FACTOR	100
>  
> +/*
> + * Maximum number of actuators supported. This constant is used simply
> + * to define the size of the static array that will contain
> + * per-actuator data. The current value is hopefully a good upper
> + * bound to the possible number of actuators of any actual drive.
> + */
> +#define BFQ_MAX_ACTUATORS 32

That seems really excessive... What about 4 or 8 ?

> +
>  struct bfq_entity;
>  
>  /**
> @@ -225,12 +233,14 @@ struct bfq_ttime {
>   * struct bfq_queue - leaf schedulable entity.
>   *
>   * A bfq_queue is a leaf request queue; it can be associated with an
> - * io_context or more, if it  is  async or shared  between  cooperating
> - * processes. @cgroup holds a reference to the cgroup, to be sure that it
> - * does not disappear while a bfqq still references it (mostly to avoid
> - * races between request issuing and task migration followed by cgroup
> - * destruction).
> - * All the fields are protected by the queue lock of the containing bfqd.
> + * io_context or more, if it is async or shared between cooperating
> + * processes. Besides, it contains I/O requests for only one actuator
> + * (an io_context is associated with a different bfq_queue for each
> + * actuator it generates I/O for). @cgroup holds a reference to the
> + * cgroup, to be sure that it does not disappear while a bfqq still
> + * references it (mostly to avoid races between request issuing and
> + * task migration followed by cgroup destruction).  All the fields are
> + * protected by the queue lock of the containing bfqd.
>   */
>  struct bfq_queue {
>  	/* reference counter */
> @@ -395,6 +405,9 @@ struct bfq_queue {
>  	 * the woken queues when this queue exits.
>  	 */
>  	struct hlist_head woken_list;
> +
> +	/* index of the actuator this queue is associated with */
> +	unsigned int actuator_idx;
>  };
>  
>  /**
> @@ -403,8 +416,17 @@ struct bfq_queue {
>  struct bfq_io_cq {
>  	/* associated io_cq structure */
>  	struct io_cq icq; /* must be the first member */
> -	/* array of two process queues, the sync and the async */
> -	struct bfq_queue *bfqq[2];
> +	/*
> +	 * Matrix of associated process queues: first row for async
> +	 * queues, second row sync queues. Each row contains one
> +	 * column for each actuator. An I/O request generated by the
> +	 * process is inserted into the queue pointed by bfqq[i][j] if
> +	 * the request is to be served by the j-th actuator of the
> +	 * drive, where i==0 or i==1, depending on whether the request
> +	 * is async or sync. So there is a distinct queue for each
> +	 * actuator.
> +	 */
> +	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
>  	/* per (request_queue, blkcg) ioprio */
>  	int ioprio;
>  #ifdef CONFIG_BFQ_GROUP_IOSCHED
> @@ -768,6 +790,13 @@ struct bfq_data {
>  	 */
>  	unsigned int word_depths[2][2];
>  	unsigned int full_depth_shift;
> +
> +	/*
> +	 * Number of independent actuators. This is equal to 1 in
> +	 * case of single-actuator drives.
> +	 */
> +	unsigned int num_actuators;
> +
>  };
>  
>  enum bfqq_state_flags {
> @@ -964,8 +993,10 @@ struct bfq_group {
>  
>  extern const int bfq_timeout;
>  
> -struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync);
> -void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
> +struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
> +				unsigned int actuator_idx);
> +void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync,
> +				unsigned int actuator_idx);
>  struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
>  void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
>  void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 2/8] block, bfq: forbid stable merging of queues associated with different actuators
  2022-11-03 16:26 ` [PATCH V6 2/8] block, bfq: forbid stable merging of queues associated with different actuators Paolo Valente
@ 2022-11-21  0:20   ` Damien Le Moal
  0 siblings, 0 replies; 31+ messages in thread
From: Damien Le Moal @ 2022-11-21  0:20 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen

On 11/4/22 01:26, Paolo Valente wrote:
> If queues associated with different actuators are merged, then control
> is lost on each actuator. Therefore some actuator may be
> underutilized, and throughput may decrease. This problem cannot occur
> with basic queue merging, because the latter is triggered by spatial
> locality, and sectors for different actuators are not close to each
> other. Yet it may happen with stable merging. To address this issue,
> this commit prevents stable merging from occurring among queues
> associated with different actuators.
> 
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> ---
>  block/bfq-iosched.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 5c69394bbb65..ec4b0e70265f 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -5705,9 +5705,13 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
>  	 * it has been set already, but too long ago, then move it
>  	 * forward to bfqq. Finally, move also if bfqq belongs to a
>  	 * different group than last_bfqq_created, or if bfqq has a
> -	 * different ioprio or ioprio_class. If none of these
> -	 * conditions holds true, then try an early stable merge or
> -	 * schedule a delayed stable merge.
> +	 * different ioprio, ioprio_class or actuator_idx. If none of
> +	 * these conditions holds true, then try an early stable merge
> +	 * or schedule a delayed stable merge. As for the condition on
> +	 * actuator_idx, the reason is that, if queues associated with
> +	 * different actuators are merged, then control is lost on
> +	 * each actuator. Therefore some actuator may be
> +	 * underutilized, and throughput may decrease.
>  	 *
>  	 * A delayed merge is scheduled (instead of performing an
>  	 * early merge), in case bfqq might soon prove to be more
> @@ -5725,7 +5729,8 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
>  			bfqq->creation_time) ||
>  		bfqq->entity.parent != last_bfqq_created->entity.parent ||
>  		bfqq->ioprio != last_bfqq_created->ioprio ||
> -		bfqq->ioprio_class != last_bfqq_created->ioprio_class)
> +		bfqq->ioprio_class != last_bfqq_created->ioprio_class ||
> +		bfqq->actuator_idx != last_bfqq_created->actuator_idx)
>  		*source_bfqq = bfqq;
>  	else if (time_after_eq(last_bfqq_created->creation_time +
>  				 bfqd->bfq_burst_interval,

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 3/8] block, bfq: move io_cq-persistent bfqq data into a dedicated struct
  2022-11-03 16:26 ` [PATCH V6 3/8] block, bfq: move io_cq-persistent bfqq data into a dedicated struct Paolo Valente
@ 2022-11-21  0:24   ` Damien Le Moal
  0 siblings, 0 replies; 31+ messages in thread
From: Damien Le Moal @ 2022-11-21  0:24 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Gianmarco Lusvardi, Giulio Barabino, Emiliano Maccaferri

On 11/4/22 01:26, Paolo Valente wrote:
> With a multi-actuator drive, a process may get associated with multiple
> bfq_queues: one queue for each of the N actuators. So, the bfq_io_cq
> data structure must be able to accommodate its per-queue persistent
> information for N queues. Currently it stores this information for
> just one queue, in several scalar fields.
> 
> This is a preparatory commit for moving to accommodating persistent
> information for N queues. In particular, this commit packs all the
> above scalar fields into a single data structure. Then there is now
> only one fieldi, in bfq_io_cq, that stores all the above information. This

s/fieldi/field

Other than that, looks OK to me.

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

> scalar field will then be turned into an array by a following commit.
> 
> Suggested-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> Signed-off-by: Gianmarco Lusvardi <glusvardi@posteo.net>
> Signed-off-by: Giulio Barabino <giuliobarabino99@gmail.com>
> Signed-off-by: Emiliano Maccaferri <inbox@emilianomaccaferri.com>
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> ---
>  block/bfq-iosched.c | 129 +++++++++++++++++++++++++-------------------
>  block/bfq-iosched.h |  52 ++++++++++--------
>  2 files changed, 105 insertions(+), 76 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index ec4b0e70265f..01528182c0c5 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -404,9 +404,10 @@ void bic_set_bfqq(struct bfq_io_cq *bic,
>  	 * we cancel the stable merge if
>  	 * bic->stable_merge_bfqq == bfqq.
>  	 */
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  	bic->bfqq[is_sync][actuator_idx] = bfqq;
>  
> -	if (bfqq && bic->stable_merge_bfqq == bfqq) {
> +	if (bfqq && bfqq_data->stable_merge_bfqq == bfqq) {
>  		/*
>  		 * Actually, these same instructions are executed also
>  		 * in bfq_setup_cooperator, in case of abort or actual
> @@ -415,9 +416,9 @@ void bic_set_bfqq(struct bfq_io_cq *bic,
>  		 * did so, we would nest even more complexity in this
>  		 * function.
>  		 */
> -		bfq_put_stable_ref(bic->stable_merge_bfqq);
> +		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);
>  
> -		bic->stable_merge_bfqq = NULL;
> +		bfqq_data->stable_merge_bfqq = NULL;
>  	}
>  }
>  
> @@ -1174,38 +1175,40 @@ static void
>  bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
>  		      struct bfq_io_cq *bic, bool bfq_already_existing)
>  {
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  	unsigned int old_wr_coeff = 1;
>  	bool busy = bfq_already_existing && bfq_bfqq_busy(bfqq);
>  
> -	if (bic->saved_has_short_ttime)
> +	if (bfqq_data->saved_has_short_ttime)
>  		bfq_mark_bfqq_has_short_ttime(bfqq);
>  	else
>  		bfq_clear_bfqq_has_short_ttime(bfqq);
>  
> -	if (bic->saved_IO_bound)
> +	if (bfqq_data->saved_IO_bound)
>  		bfq_mark_bfqq_IO_bound(bfqq);
>  	else
>  		bfq_clear_bfqq_IO_bound(bfqq);
>  
> -	bfqq->last_serv_time_ns = bic->saved_last_serv_time_ns;
> -	bfqq->inject_limit = bic->saved_inject_limit;
> -	bfqq->decrease_time_jif = bic->saved_decrease_time_jif;
> +	bfqq->last_serv_time_ns = bfqq_data->saved_last_serv_time_ns;
> +	bfqq->inject_limit = bfqq_data->saved_inject_limit;
> +	bfqq->decrease_time_jif = bfqq_data->saved_decrease_time_jif;
>  
> -	bfqq->entity.new_weight = bic->saved_weight;
> -	bfqq->ttime = bic->saved_ttime;
> -	bfqq->io_start_time = bic->saved_io_start_time;
> -	bfqq->tot_idle_time = bic->saved_tot_idle_time;
> +	bfqq->entity.new_weight = bfqq_data->saved_weight;
> +	bfqq->ttime = bfqq_data->saved_ttime;
> +	bfqq->io_start_time = bfqq_data->saved_io_start_time;
> +	bfqq->tot_idle_time = bfqq_data->saved_tot_idle_time;
>  	/*
>  	 * Restore weight coefficient only if low_latency is on
>  	 */
>  	if (bfqd->low_latency) {
>  		old_wr_coeff = bfqq->wr_coeff;
> -		bfqq->wr_coeff = bic->saved_wr_coeff;
> +		bfqq->wr_coeff = bfqq_data->saved_wr_coeff;
>  	}
> -	bfqq->service_from_wr = bic->saved_service_from_wr;
> -	bfqq->wr_start_at_switch_to_srt = bic->saved_wr_start_at_switch_to_srt;
> -	bfqq->last_wr_start_finish = bic->saved_last_wr_start_finish;
> -	bfqq->wr_cur_max_time = bic->saved_wr_cur_max_time;
> +	bfqq->service_from_wr = bfqq_data->saved_service_from_wr;
> +	bfqq->wr_start_at_switch_to_srt =
> +		bfqq_data->saved_wr_start_at_switch_to_srt;
> +	bfqq->last_wr_start_finish = bfqq_data->saved_last_wr_start_finish;
> +	bfqq->wr_cur_max_time = bfqq_data->saved_wr_cur_max_time;
>  
>  	if (bfqq->wr_coeff > 1 && (bfq_bfqq_in_large_burst(bfqq) ||
>  	    time_is_before_jiffies(bfqq->last_wr_start_finish +
> @@ -1878,7 +1881,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
>  	wr_or_deserves_wr = bfqd->low_latency &&
>  		(bfqq->wr_coeff > 1 ||
>  		 (bfq_bfqq_sync(bfqq) &&
> -		  (bfqq->bic || RQ_BIC(rq)->stably_merged) &&
> +		  (bfqq->bic || RQ_BIC(rq)->bfqq_data.stably_merged) &&
>  		   (*interactive || soft_rt)));
>  
>  	/*
> @@ -2902,6 +2905,7 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  		     void *io_struct, bool request, struct bfq_io_cq *bic)
>  {
>  	struct bfq_queue *in_service_bfqq, *new_bfqq;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	/* if a merge has already been setup, then proceed with that first */
>  	if (bfqq->new_bfqq)
> @@ -2923,21 +2927,21 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  		 * stable merging) also if bic is associated with a
>  		 * sync queue, but this bfqq is async
>  		 */
> -		if (bfq_bfqq_sync(bfqq) && bic->stable_merge_bfqq &&
> +		if (bfq_bfqq_sync(bfqq) && bfqq_data->stable_merge_bfqq &&
>  		    !bfq_bfqq_just_created(bfqq) &&
>  		    time_is_before_jiffies(bfqq->split_time +
>  					  msecs_to_jiffies(bfq_late_stable_merging)) &&
>  		    time_is_before_jiffies(bfqq->creation_time +
>  					   msecs_to_jiffies(bfq_late_stable_merging))) {
>  			struct bfq_queue *stable_merge_bfqq =
> -				bic->stable_merge_bfqq;
> +				bfqq_data->stable_merge_bfqq;
>  			int proc_ref = min(bfqq_process_refs(bfqq),
>  					   bfqq_process_refs(stable_merge_bfqq));
>  
>  			/* deschedule stable merge, because done or aborted here */
>  			bfq_put_stable_ref(stable_merge_bfqq);
>  
> -			bic->stable_merge_bfqq = NULL;
> +			bfqq_data->stable_merge_bfqq = NULL;
>  
>  			if (!idling_boosts_thr_without_issues(bfqd, bfqq) &&
>  			    proc_ref > 0) {
> @@ -2946,10 +2950,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  					bfq_setup_merge(bfqq, stable_merge_bfqq);
>  
>  				if (new_bfqq) {
> -					bic->stably_merged = true;
> +					bfqq_data->stably_merged = true;
>  					if (new_bfqq->bic)
> -						new_bfqq->bic->stably_merged =
> -									true;
> +						new_bfqq->bic->bfqq_data.stably_merged =
> +							true;
>  				}
>  				return new_bfqq;
>  			} else
> @@ -3048,6 +3052,7 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
>  {
>  	struct bfq_io_cq *bic = bfqq->bic;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	/*
>  	 * If !bfqq->bic, the queue is already shared or its requests
> @@ -3057,18 +3062,21 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
>  	if (!bic)
>  		return;
>  
> -	bic->saved_last_serv_time_ns = bfqq->last_serv_time_ns;
> -	bic->saved_inject_limit = bfqq->inject_limit;
> -	bic->saved_decrease_time_jif = bfqq->decrease_time_jif;
> -
> -	bic->saved_weight = bfqq->entity.orig_weight;
> -	bic->saved_ttime = bfqq->ttime;
> -	bic->saved_has_short_ttime = bfq_bfqq_has_short_ttime(bfqq);
> -	bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
> -	bic->saved_io_start_time = bfqq->io_start_time;
> -	bic->saved_tot_idle_time = bfqq->tot_idle_time;
> -	bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
> -	bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
> +	bfqq_data->saved_last_serv_time_ns = bfqq->last_serv_time_ns;
> +	bfqq_data->saved_inject_limit = bfqq->inject_limit;
> +	bfqq_data->saved_decrease_time_jif = bfqq->decrease_time_jif;
> +
> +	bfqq_data->saved_weight = bfqq->entity.orig_weight;
> +	bfqq_data->saved_ttime = bfqq->ttime;
> +	bfqq_data->saved_has_short_ttime =
> +		bfq_bfqq_has_short_ttime(bfqq);
> +	bfqq_data->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
> +	bfqq_data->saved_io_start_time = bfqq->io_start_time;
> +	bfqq_data->saved_tot_idle_time = bfqq->tot_idle_time;
> +	bfqq_data->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
> +	bfqq_data->was_in_burst_list =
> +		!hlist_unhashed(&bfqq->burst_list_node);
> +
>  	if (unlikely(bfq_bfqq_just_created(bfqq) &&
>  		     !bfq_bfqq_in_large_burst(bfqq) &&
>  		     bfqq->bfqd->low_latency)) {
> @@ -3081,17 +3089,21 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
>  		 * to bfqq, so that to avoid that bfqq unjustly fails
>  		 * to enjoy weight raising if split soon.
>  		 */
> -		bic->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff;
> -		bic->saved_wr_start_at_switch_to_srt = bfq_smallest_from_now();
> -		bic->saved_wr_cur_max_time = bfq_wr_duration(bfqq->bfqd);
> -		bic->saved_last_wr_start_finish = jiffies;
> +		bfqq_data->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff;
> +		bfqq_data->saved_wr_start_at_switch_to_srt =
> +			bfq_smallest_from_now();
> +		bfqq_data->saved_wr_cur_max_time =
> +			bfq_wr_duration(bfqq->bfqd);
> +		bfqq_data->saved_last_wr_start_finish = jiffies;
>  	} else {
> -		bic->saved_wr_coeff = bfqq->wr_coeff;
> -		bic->saved_wr_start_at_switch_to_srt =
> +		bfqq_data->saved_wr_coeff = bfqq->wr_coeff;
> +		bfqq_data->saved_wr_start_at_switch_to_srt =
>  			bfqq->wr_start_at_switch_to_srt;
> -		bic->saved_service_from_wr = bfqq->service_from_wr;
> -		bic->saved_last_wr_start_finish = bfqq->last_wr_start_finish;
> -		bic->saved_wr_cur_max_time = bfqq->wr_cur_max_time;
> +		bfqq_data->saved_service_from_wr =
> +			bfqq->service_from_wr;
> +		bfqq_data->saved_last_wr_start_finish =
> +			bfqq->last_wr_start_finish;
> +		bfqq_data->saved_wr_cur_max_time = bfqq->wr_cur_max_time;
>  	}
>  }
>  
> @@ -5413,6 +5425,7 @@ static void bfq_exit_icq(struct io_cq *icq)
>  	unsigned long flags;
>  	unsigned int act_idx;
>  	unsigned int num_actuators;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	/*
>  	 * bfqd is NULL if scheduler already exited, and in that case
> @@ -5432,8 +5445,8 @@ static void bfq_exit_icq(struct io_cq *icq)
>  		num_actuators = BFQ_MAX_ACTUATORS;
>  	}
>  
> -	if (bic->stable_merge_bfqq)
> -		bfq_put_stable_ref(bic->stable_merge_bfqq);
> +	if (bfqq_data->stable_merge_bfqq)
> +		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);
>  
>  	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
>  		bfq_exit_icq_bfqq(bic, true, act_idx);
> @@ -5624,13 +5637,14 @@ bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  {
>  	struct bfq_queue *new_bfqq =
>  		bfq_setup_merge(bfqq, last_bfqq_created);
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	if (!new_bfqq)
>  		return bfqq;
>  
>  	if (new_bfqq->bic)
> -		new_bfqq->bic->stably_merged = true;
> -	bic->stably_merged = true;
> +		new_bfqq->bic->bfqq_data.stably_merged = true;
> +	bfqq_data->stably_merged = true;
>  
>  	/*
>  	 * Reusing merge functions. This implies that
> @@ -5699,6 +5713,7 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
>  		&bfqd->last_bfqq_created;
>  
>  	struct bfq_queue *last_bfqq_created = *source_bfqq;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	/*
>  	 * If last_bfqq_created has not been set yet, then init it. If
> @@ -5760,7 +5775,7 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
>  			/*
>  			 * Record the bfqq to merge to.
>  			 */
> -			bic->stable_merge_bfqq = last_bfqq_created;
> +			bfqq_data->stable_merge_bfqq = last_bfqq_created;
>  		}
>  	}
>  
> @@ -6681,6 +6696,7 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>  {
>  	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
>  	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
>  		return bfqq;
> @@ -6694,12 +6710,12 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>  
>  	bic_set_bfqq(bic, bfqq, is_sync, act_idx);
>  	if (split && is_sync) {
> -		if ((bic->was_in_burst_list && bfqd->large_burst) ||
> -		    bic->saved_in_large_burst)
> +		if ((bfqq_data->was_in_burst_list && bfqd->large_burst) ||
> +		    bfqq_data->saved_in_large_burst)
>  			bfq_mark_bfqq_in_large_burst(bfqq);
>  		else {
>  			bfq_clear_bfqq_in_large_burst(bfqq);
> -			if (bic->was_in_burst_list)
> +			if (bfqq_data->was_in_burst_list)
>  				/*
>  				 * If bfqq was in the current
>  				 * burst list before being
> @@ -6788,6 +6804,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
>  	struct bfq_queue *bfqq;
>  	bool new_queue = false;
>  	bool bfqq_already_existing = false, split = false;
> +	struct bfq_iocq_bfqq_data *bfqq_data;
>  
>  	if (unlikely(!rq->elv.icq))
>  		return NULL;
> @@ -6811,15 +6828,17 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
>  	bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, false, is_sync,
>  					 &new_queue);
>  
> +	bfqq_data = &bic->bfqq_data;
> +
>  	if (likely(!new_queue)) {
>  		/* If the queue was seeky for too long, break it apart. */
>  		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq) &&
> -			!bic->stably_merged) {
> +			!bfqq_data->stably_merged) {
>  			struct bfq_queue *old_bfqq = bfqq;
>  
>  			/* Update bic before losing reference to bfqq */
>  			if (bfq_bfqq_in_large_burst(bfqq))
> -				bic->saved_in_large_burst = true;
> +				bfqq_data->saved_in_large_burst = true;
>  
>  			bfqq = bfq_split_bfqq(bic, bfqq);
>  			split = true;
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index bfcbd8ea9000..f2e8ab91951c 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -411,27 +411,9 @@ struct bfq_queue {
>  };
>  
>  /**
> - * struct bfq_io_cq - per (request_queue, io_context) structure.
> - */
> -struct bfq_io_cq {
> -	/* associated io_cq structure */
> -	struct io_cq icq; /* must be the first member */
> -	/*
> -	 * Matrix of associated process queues: first row for async
> -	 * queues, second row sync queues. Each row contains one
> -	 * column for each actuator. An I/O request generated by the
> -	 * process is inserted into the queue pointed by bfqq[i][j] if
> -	 * the request is to be served by the j-th actuator of the
> -	 * drive, where i==0 or i==1, depending on whether the request
> -	 * is async or sync. So there is a distinct queue for each
> -	 * actuator.
> -	 */
> -	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
> -	/* per (request_queue, blkcg) ioprio */
> -	int ioprio;
> -#ifdef CONFIG_BFQ_GROUP_IOSCHED
> -	uint64_t blkcg_serial_nr; /* the current blkcg serial */
> -#endif
> +* struct bfq_iocq_bfqq_data - bfqq data unique and persistent for associated bfq_io_cq
> +*/
> +struct bfq_iocq_bfqq_data {
>  	/*
>  	 * Snapshot of the has_short_time flag before merging; taken
>  	 * to remember its value while the queue is merged, so as to
> @@ -486,6 +468,34 @@ struct bfq_io_cq {
>  	struct bfq_queue *stable_merge_bfqq;
>  
>  	bool stably_merged;	/* non splittable if true */
> +};
> +
> +/**
> + * struct bfq_io_cq - per (request_queue, io_context) structure.
> + */
> +struct bfq_io_cq {
> +	/* associated io_cq structure */
> +	struct io_cq icq; /* must be the first member */
> +	/*
> +	 * Matrix of associated process queues: first row for async
> +	 * queues, second row sync queues. Each row contains one
> +	 * column for each actuator. An I/O request generated by the
> +	 * process is inserted into the queue pointed by bfqq[i][j] if
> +	 * the request is to be served by the j-th actuator of the
> +	 * drive, where i==0 or i==1, depending on whether the request
> +	 * is async or sync. So there is a distinct queue for each
> +	 * actuator.
> +	 */
> +	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
> +	/* per (request_queue, blkcg) ioprio */
> +	int ioprio;
> +#ifdef CONFIG_BFQ_GROUP_IOSCHED
> +	uint64_t blkcg_serial_nr; /* the current blkcg serial */
> +#endif
> +
> +	/* persistent data for associated synchronous process queue */
> +	struct bfq_iocq_bfqq_data bfqq_data;
> +
>  	unsigned int requests;	/* Number of requests this process has in flight */
>  };
>  

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 4/8] block, bfq: turn bfqq_data into an array in bfq_io_cq
  2022-11-03 16:26 ` [PATCH V6 4/8] block, bfq: turn bfqq_data into an array in bfq_io_cq Paolo Valente
@ 2022-11-21  0:39   ` Damien Le Moal
  2022-12-06  8:00     ` Paolo Valente
  0 siblings, 1 reply; 31+ messages in thread
From: Damien Le Moal @ 2022-11-21  0:39 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Gabriele Felici, Gianmarco Lusvardi, Giulio Barabino,
	Emiliano Maccaferri

On 11/4/22 01:26, Paolo Valente wrote:
> When a bfq_queue Q is merged with another queue, several pieces of
> information are saved about Q. These pieces are stored in the
> bfqq_data field in the bfq_io_cq data structure of the process
> associated with Q.
> 
> Yet, with a multi-actuator drive, a process may get associated with
> multiple bfq_queues: one queue for each of the N actuators. Each of
> these queues may undergo a merge. So, the bfq_io_cq data structure
> must be able to accommodate the above information for N queues.
> 
> This commit solves this problem by turning the bfqq_data scalar field
> into an array of N elements (and by changing code so as to handle
> this array).
> 
> This solution is written under the assumption that bfq_queues
> associated with different actuators cannot be cross-merged. This
> assumption holds naturally with basic queue merging: the latter is
> triggered by spatial locality, and sectors for different actuators are
> not close to each other. As for stable cross-merging, the assumption

The last sector served by actuator N is very close to the first sector
served by actuator N+1 :)

So I am not sure this argument is valid. Better explanation required here
I think.

> here is that it is disabled.
> 
> Signed-off-by: Gabriele Felici <felicigb@gmail.com>
> Signed-off-by: Gianmarco Lusvardi <glusvardi@posteo.net>
> Signed-off-by: Giulio Barabino <giuliobarabino99@gmail.com>
> Signed-off-by: Emiliano Maccaferri <inbox@emilianomaccaferri.com>
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> ---
>  block/bfq-iosched.c | 72 ++++++++++++++++++++++++---------------------
>  block/bfq-iosched.h | 12 +++++---
>  2 files changed, 47 insertions(+), 37 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 01528182c0c5..f44bac054aaf 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -404,7 +404,7 @@ void bic_set_bfqq(struct bfq_io_cq *bic,
>  	 * we cancel the stable merge if
>  	 * bic->stable_merge_bfqq == bfqq.
>  	 */
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[actuator_idx];
>  	bic->bfqq[is_sync][actuator_idx] = bfqq;
>  
>  	if (bfqq && bfqq_data->stable_merge_bfqq == bfqq) {
> @@ -1175,9 +1175,10 @@ static void
>  bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
>  		      struct bfq_io_cq *bic, bool bfq_already_existing)
>  {
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  	unsigned int old_wr_coeff = 1;
>  	bool busy = bfq_already_existing && bfq_bfqq_busy(bfqq);
> +	unsigned int a_idx = bfqq->actuator_idx;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
>  
>  	if (bfqq_data->saved_has_short_ttime)
>  		bfq_mark_bfqq_has_short_ttime(bfqq);
> @@ -1827,6 +1828,16 @@ static bool bfq_bfqq_higher_class_or_weight(struct bfq_queue *bfqq,
>  	return bfqq_weight > in_serv_weight;
>  }
>  
> +/* get the index of the actuator that will serve bio */
> +static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
> +{
> +	/*
> +	 * Multi-actuator support not complete yet, so always return 0
> +	 * for the moment.
> +	 */
> +	return 0;
> +}

??? This was added in patch 1. Why again here ?

> +
>  static bool bfq_better_to_idle(struct bfq_queue *bfqq);
>  
>  static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
> @@ -1881,7 +1892,9 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
>  	wr_or_deserves_wr = bfqd->low_latency &&
>  		(bfqq->wr_coeff > 1 ||
>  		 (bfq_bfqq_sync(bfqq) &&
> -		  (bfqq->bic || RQ_BIC(rq)->bfqq_data.stably_merged) &&
> +		  (bfqq->bic ||
> +		   RQ_BIC(rq)->bfqq_data[bfq_actuator_index(bfqd, rq->bio)]
> +		   .stably_merged) &&
>  		   (*interactive || soft_rt)));
>  
>  	/*
> @@ -2469,16 +2482,6 @@ static void bfq_remove_request(struct request_queue *q,
>  
>  }
>  
> -/* get the index of the actuator that will serve bio */
> -static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
> -{
> -	/*
> -	 * Multi-actuator support not complete yet, so always return 0
> -	 * for the moment.
> -	 */
> -	return 0;
> -}
> -

Ah, you moved it... Maybe add it in the right place in patch 1 then to
avoid that ? That may be due to some git/diff artifacts though.

>  static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>  		unsigned int nr_segs)
>  {
> @@ -2905,7 +2908,8 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  		     void *io_struct, bool request, struct bfq_io_cq *bic)
>  {
>  	struct bfq_queue *in_service_bfqq, *new_bfqq;
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
> +	unsigned int a_idx = bfqq->actuator_idx;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
>  
>  	/* if a merge has already been setup, then proceed with that first */
>  	if (bfqq->new_bfqq)
> @@ -2952,8 +2956,9 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  				if (new_bfqq) {
>  					bfqq_data->stably_merged = true;
>  					if (new_bfqq->bic)
> -						new_bfqq->bic->bfqq_data.stably_merged =
> -							true;
> +						new_bfqq->bic->bfqq_data
> +							[new_bfqq->actuator_idx]
> +							.stably_merged = true;

Ouch. This is really hard to read, as this expression is not meant to be
split like this.
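
A couple of local variables would keep each statement on one line,
e.g. (sketch):

	if (new_bfqq) {
		struct bfq_io_cq *new_bic = new_bfqq->bic;
		unsigned int idx = new_bfqq->actuator_idx;

		bfqq_data->stably_merged = true;
		if (new_bic)
			new_bic->bfqq_data[idx].stably_merged = true;
	}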

>  				}
>  				return new_bfqq;
>  			} else
> @@ -3052,7 +3057,9 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
>  {
>  	struct bfq_io_cq *bic = bfqq->bic;
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
> +	/* State must be saved for the right queue index. */

Drop this comment. It serves no purpose in my opinion.

> +	unsigned int a_idx = bfqq->actuator_idx;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
>  
>  	/*
>  	 * If !bfqq->bic, the queue is already shared or its requests
> @@ -3063,7 +3070,7 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
>  		return;
>  
>  	bfqq_data->saved_last_serv_time_ns = bfqq->last_serv_time_ns;
> -	bfqq_data->saved_inject_limit = bfqq->inject_limit;
> +	bfqq_data->saved_inject_limit =	bfqq->inject_limit;
>  	bfqq_data->saved_decrease_time_jif = bfqq->decrease_time_jif;
>  
>  	bfqq_data->saved_weight = bfqq->entity.orig_weight;
> @@ -5425,7 +5432,7 @@ static void bfq_exit_icq(struct io_cq *icq)
>  	unsigned long flags;
>  	unsigned int act_idx;
>  	unsigned int num_actuators;
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
> +	struct bfq_iocq_bfqq_data *bfqq_data = bic->bfqq_data;
>  
>  	/*
>  	 * bfqd is NULL if scheduler already exited, and in that case
> @@ -5445,10 +5452,10 @@ static void bfq_exit_icq(struct io_cq *icq)
>  		num_actuators = BFQ_MAX_ACTUATORS;
>  	}
>  
> -	if (bfqq_data->stable_merge_bfqq)
> -		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);
> -
>  	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
> +		if (bfqq_data[act_idx].stable_merge_bfqq)
> +			bfq_put_stable_ref(bfqq_data[act_idx].stable_merge_bfqq);
> +
>  		bfq_exit_icq_bfqq(bic, true, act_idx);
>  		bfq_exit_icq_bfqq(bic, false, act_idx);
>  	}
> @@ -5635,16 +5642,16 @@ bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  			  struct bfq_io_cq *bic,
>  			  struct bfq_queue *last_bfqq_created)
>  {
> +	unsigned int a_idx = last_bfqq_created->actuator_idx;
>  	struct bfq_queue *new_bfqq =
>  		bfq_setup_merge(bfqq, last_bfqq_created);
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	if (!new_bfqq)
>  		return bfqq;
>  
>  	if (new_bfqq->bic)
> -		new_bfqq->bic->bfqq_data.stably_merged = true;
> -	bfqq_data->stably_merged = true;
> +		new_bfqq->bic->bfqq_data[a_idx].stably_merged = true;
> +	bic->bfqq_data[a_idx].stably_merged = true;
>  
>  	/*
>  	 * Reusing merge functions. This implies that
> @@ -5713,7 +5720,6 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
>  		&bfqd->last_bfqq_created;
>  
>  	struct bfq_queue *last_bfqq_created = *source_bfqq;
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
>  
>  	/*
>  	 * If last_bfqq_created has not been set yet, then init it. If
> @@ -5775,7 +5781,8 @@ static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd,
>  			/*
>  			 * Record the bfqq to merge to.
>  			 */
> -			bfqq_data->stable_merge_bfqq = last_bfqq_created;
> +			bic->bfqq_data[last_bfqq_created->actuator_idx].stable_merge_bfqq =
> +				last_bfqq_created;
>  		}
>  	}
>  
> @@ -6696,7 +6703,7 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>  {
>  	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
>  	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
> -	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data;
> +	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[act_idx];
>  
>  	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
>  		return bfqq;
> @@ -6804,7 +6811,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
>  	struct bfq_queue *bfqq;
>  	bool new_queue = false;
>  	bool bfqq_already_existing = false, split = false;
> -	struct bfq_iocq_bfqq_data *bfqq_data;
> +	unsigned int a_idx = bfq_actuator_index(bfqd, bio);
>  
>  	if (unlikely(!rq->elv.icq))
>  		return NULL;
> @@ -6828,17 +6835,16 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
>  	bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, false, is_sync,
>  					 &new_queue);
>  
> -	bfqq_data = &bic->bfqq_data;
> -
>  	if (likely(!new_queue)) {
>  		/* If the queue was seeky for too long, break it apart. */
>  		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq) &&
> -			!bfqq_data->stably_merged) {
> +			!bic->bfqq_data[a_idx].stably_merged) {
>  			struct bfq_queue *old_bfqq = bfqq;
>  
>  			/* Update bic before losing reference to bfqq */
>  			if (bfq_bfqq_in_large_burst(bfqq))
> -				bfqq_data->saved_in_large_burst = true;
> +				bic->bfqq_data[a_idx].saved_in_large_burst =
> +					true;
>  
>  			bfqq = bfq_split_bfqq(bic, bfqq);
>  			split = true;
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index f2e8ab91951c..e27897d66a0f 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -416,7 +416,7 @@ struct bfq_queue {
>  struct bfq_iocq_bfqq_data {
>  	/*
>  	 * Snapshot of the has_short_time flag before merging; taken
> -	 * to remember its value while the queue is merged, so as to
> +	 * to remember its values while the queue is merged, so as to
>  	 * be able to restore it in case of split.
>  	 */
>  	bool saved_has_short_ttime;
> @@ -430,7 +430,7 @@ struct bfq_iocq_bfqq_data {
>  	u64 saved_tot_idle_time;
>  
>  	/*
> -	 * Same purpose as the previous fields for the value of the
> +	 * Same purpose as the previous fields for the values of the
>  	 * field keeping the queue's belonging to a large burst
>  	 */
>  	bool saved_in_large_burst;
> @@ -493,8 +493,12 @@ struct bfq_io_cq {
>  	uint64_t blkcg_serial_nr; /* the current blkcg serial */
>  #endif
>  
> -	/* persistent data for associated synchronous process queue */
> -	struct bfq_iocq_bfqq_data bfqq_data;
> +	/*
> +	 * Persistent data for associated synchronous process queues
> +	 * (one queue per actuator, see field bfqq above). In
> +	 * particular, each of these queues may undergo a merge.
> +	 */
> +	struct bfq_iocq_bfqq_data bfqq_data[BFQ_MAX_ACTUATORS];

I wonder if packing this together with struct bfq_queue would be cleaner.
That would avoid the 2 arrays you have in this struct. Something like this:

struct bfq_queue_data {
	struct bfq_queue 	*bfqq[2];
	struct bfq_iocq_bfqq_data iocq_data;
}

struct bfq_io_cq {
	...
	struct bfq_queue_data bfqqd[BFQ_MAX_ACTUATORS];
	...
}

Thinking aloud here. That may actually make the code more complicated.
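
For comparison, a lookup under that layout might read (hypothetical,
following the struct sketch above):

	static inline struct bfq_queue *
	bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync, unsigned int act_idx)
	{
		return bic->bfqqd[act_idx].bfqq[!!is_sync];
	}

which keeps all per-actuator state behind a single index.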

>  
>  	unsigned int requests;	/* Number of requests this process has in flight */
>  };

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 5/8] block, bfq: split also async bfq_queues on a per-actuator basis
  2022-11-03 16:26 ` [PATCH V6 5/8] block, bfq: split also async bfq_queues on a per-actuator basis Paolo Valente
@ 2022-11-21  0:52   ` Damien Le Moal
  0 siblings, 0 replies; 31+ messages in thread
From: Damien Le Moal @ 2022-11-21  0:52 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen, Davide Zini

On 11/4/22 01:26, Paolo Valente wrote:
> From: Davide Zini <davidezini2@gmail.com>
> 
> Similarly to sync bfq_queues, also async bfq_queues need to be split
> on a per-actuator basis.
> 
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> Signed-off-by: Davide Zini <davidezini2@gmail.com>

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

> ---
>  block/bfq-iosched.c | 41 +++++++++++++++++++++++------------------
>  block/bfq-iosched.h |  8 ++++----
>  2 files changed, 27 insertions(+), 22 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index f44bac054aaf..c94b80e3f685 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -2673,14 +2673,16 @@ static void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
>  void bfq_end_wr_async_queues(struct bfq_data *bfqd,
>  			     struct bfq_group *bfqg)
>  {
> -	int i, j;
> -
> -	for (i = 0; i < 2; i++)
> -		for (j = 0; j < IOPRIO_NR_LEVELS; j++)
> -			if (bfqg->async_bfqq[i][j])
> -				bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]);
> -	if (bfqg->async_idle_bfqq)
> -		bfq_bfqq_end_wr(bfqg->async_idle_bfqq);
> +	int i, j, k;
> +
> +	for (k = 0; k < bfqd->num_actuators; k++) {
> +		for (i = 0; i < 2; i++)
> +			for (j = 0; j < IOPRIO_NR_LEVELS; j++)
> +				if (bfqg->async_bfqq[i][j][k])
> +					bfq_bfqq_end_wr(bfqg->async_bfqq[i][j][k]);
> +		if (bfqg->async_idle_bfqq[k])
> +			bfq_bfqq_end_wr(bfqg->async_idle_bfqq[k]);
> +	}
>  }
>  
>  static void bfq_end_wr(struct bfq_data *bfqd)
> @@ -5620,18 +5622,18 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  
>  static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
>  					       struct bfq_group *bfqg,
> -					       int ioprio_class, int ioprio)
> +					       int ioprio_class, int ioprio, int act_idx)
>  {
>  	switch (ioprio_class) {
>  	case IOPRIO_CLASS_RT:
> -		return &bfqg->async_bfqq[0][ioprio];
> +		return &bfqg->async_bfqq[0][ioprio][act_idx];
>  	case IOPRIO_CLASS_NONE:
>  		ioprio = IOPRIO_BE_NORM;
>  		fallthrough;
>  	case IOPRIO_CLASS_BE:
> -		return &bfqg->async_bfqq[1][ioprio];
> +		return &bfqg->async_bfqq[1][ioprio][act_idx];
>  	case IOPRIO_CLASS_IDLE:
> -		return &bfqg->async_idle_bfqq;
> +		return &bfqg->async_idle_bfqq[act_idx];
>  	default:
>  		return NULL;
>  	}
> @@ -5805,7 +5807,8 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
>  
>  	if (!is_sync) {
>  		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
> -						  ioprio);
> +						  ioprio,
> +						  bfq_actuator_index(bfqd, bio));
>  		bfqq = *async_bfqq;
>  		if (bfqq)
>  			goto out;
> @@ -7022,13 +7025,15 @@ static void __bfq_put_async_bfqq(struct bfq_data *bfqd,
>   */
>  void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
>  {
> -	int i, j;
> +	int i, j, k;
>  
> -	for (i = 0; i < 2; i++)
> -		for (j = 0; j < IOPRIO_NR_LEVELS; j++)
> -			__bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
> +	for (k = 0; k < bfqd->num_actuators; k++) {
> +		for (i = 0; i < 2; i++)
> +			for (j = 0; j < IOPRIO_NR_LEVELS; j++)
> +				__bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j][k]);
>  
> -	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
> +		__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq[k]);
> +	}
>  }
>  
>  /*
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index e27897d66a0f..f1c2e77cbf9a 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -976,8 +976,8 @@ struct bfq_group {
>  
>  	void *bfqd;
>  
> -	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS];
> -	struct bfq_queue *async_idle_bfqq;
> +	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS][BFQ_MAX_ACTUATORS];
> +	struct bfq_queue *async_idle_bfqq[BFQ_MAX_ACTUATORS];
>  
>  	struct bfq_entity *my_entity;
>  
> @@ -993,8 +993,8 @@ struct bfq_group {
>  	struct bfq_entity entity;
>  	struct bfq_sched_data sched_data;
>  
> -	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS];
> -	struct bfq_queue *async_idle_bfqq;
> +	struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS][BFQ_MAX_ACTUATORS];
> +	struct bfq_queue *async_idle_bfqq[BFQ_MAX_ACTUATORS];
>  
>  	struct rb_root rq_pos_tree;
>  };

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-11-03 16:26 ` [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue Paolo Valente
@ 2022-11-21  1:01   ` Damien Le Moal
  2022-12-06  8:06     ` Paolo Valente
  0 siblings, 1 reply; 31+ messages in thread
From: Damien Le Moal @ 2022-11-21  1:01 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen,
	Federico Gavioli

On 11/4/22 01:26, Paolo Valente wrote:
> From: Federico Gavioli <f.gavioli97@gmail.com>
> 
> This patch implements the code to gather the content of the
> independent_access_ranges structure from the request_queue and copy
> it into the queue's bfq_data. This copy is done at queue initialization.
> 
> We copy the access ranges into the bfq_data to avoid taking the queue
> lock each time we access the ranges.
> 
> This implementation, however, puts a limit to the maximum independent
> ranges supported by the scheduler. Such a limit is equal to the constant
> BFQ_MAX_ACTUATORS. This limit was placed to avoid the allocation of
> dynamic memory.
> 
> Co-developed-by: Rory Chen <rory.c.chen@seagate.com>
> Signed-off-by: Rory Chen <rory.c.chen@seagate.com>
> Signed-off-by: Federico Gavioli <f.gavioli97@gmail.com>
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> ---
>  block/bfq-iosched.c | 54 ++++++++++++++++++++++++++++++++++++++-------
>  block/bfq-iosched.h |  5 +++++
>  2 files changed, 51 insertions(+), 8 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index c94b80e3f685..106c8820cc5c 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -1831,10 +1831,26 @@ static bool bfq_bfqq_higher_class_or_weight(struct bfq_queue *bfqq,
>  /* get the index of the actuator that will serve bio */
>  static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
>  {
> -	/*
> -	 * Multi-actuator support not complete yet, so always return 0
> -	 * for the moment.
> -	 */
> +	struct blk_independent_access_range *iar;
> +	unsigned int i;
> +	sector_t end;
> +
> +	/* no search needed if one or zero ranges present */
> +	if (bfqd->num_actuators < 2)
> +		return 0;
> +
> +	/* bio_end_sector(bio) gives the sector after the last one */
> +	end = bio_end_sector(bio) - 1;
> +
> +	for (i = 0; i < bfqd->num_actuators; i++) {
> +		iar = &(bfqd->ia_ranges[i]);
> +		if (end >= iar->sector && end < iar->sector + iar->nr_sectors)
> +			return i;
> +	}
> +
> +	WARN_ONCE(true,
> +		  "bfq_actuator_index: bio sector out of ranges: end=%llu\n",
> +		  end);
>  	return 0;
>  }
>  
> @@ -2479,7 +2495,6 @@ static void bfq_remove_request(struct request_queue *q,
>  
>  	if (rq->cmd_flags & REQ_META)
>  		bfqq->meta_pending--;
> -

Spurious whitespace change.

>  }
>  
>  static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
> @@ -7144,6 +7159,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>  {
>  	struct bfq_data *bfqd;
>  	struct elevator_queue *eq;
> +	unsigned int i;
> +	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
>  
>  	eq = elevator_alloc(q, e);
>  	if (!eq)
> @@ -7187,10 +7204,31 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>  	bfqd->queue = q;
>  
>  	/*
> -	 * Multi-actuator support not complete yet, default to single
> -	 * actuator for the moment.
> +	 * If the disk supports multiple actuators, we copy the independent
> +	 * access ranges from the request queue structure.
>  	 */
> -	bfqd->num_actuators = 1;
> +	spin_lock_irq(&q->queue_lock);
> +	if (ia_ranges) {
> +		/*
> +		 * Check if the disk ia_ranges size exceeds the current bfq
> +		 * actuator limit.
> +		 */
> +		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
> +			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
> +				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
> +			pr_crit("Falling back to single actuator mode.\n");
> +			bfqd->num_actuators = 0;
> +		} else {
> +			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
> +
> +			for (i = 0; i < bfqd->num_actuators; i++)
> +				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
> +		}
> +	} else {
> +		bfqd->num_actuators = 0;

That is very weird. The default should be 1 actuator.
ia_ranges->nr_ia_ranges is 0 when the disk does not provide any range
information, meaning it is a regular disk with a single actuator.
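
A sketch of the suggested default (the range-copy logic otherwise
unchanged):

	/* Regular single-actuator disk unless valid ranges say otherwise. */
	bfqd->num_actuators = 1;

	if (ia_ranges && ia_ranges->nr_ia_ranges > 1) {
		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
			pr_crit("nr_ia_ranges=%d above max=%d, using one actuator.\n",
				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
		} else {
			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
			for (i = 0; i < bfqd->num_actuators; i++)
				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
		}
	}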

> +	}
> +
> +	spin_unlock_irq(&q->queue_lock);
>  
>  	INIT_LIST_HEAD(&bfqd->dispatch);
>  
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index f1c2e77cbf9a..90130a893c8f 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -811,6 +811,11 @@ struct bfq_data {
>  	 */
>  	unsigned int num_actuators;
>  
> +	/*
> +	 * Disk independent access ranges for each actuator
> +	 * in this device.
> +	 */
> +	struct blk_independent_access_range ia_ranges[BFQ_MAX_ACTUATORS];

I fail to see how keeping this information is useful, especially given
that this struct contains a kobj. Why not just copy the sector &
nr_sectors fields into struct bfq_queue? That would also work for the
single actuator case as then sector will be 0 and nr_sectors will be the
disk total capacity.
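E.g. (sketch, field names made up):

	/* in the scheduler data, one entry per actuator */
	sector_t actuator_sector[BFQ_MAX_ACTUATORS];
	sector_t actuator_nr_sectors[BFQ_MAX_ACTUATORS];

bfq_actuator_index() would then compare bio_end_sector() against these
two arrays, without dragging a kobject around.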

I think this patch should be first in the series. That will avoid having
the empty bfq_actuator_index() function.

>  };
>  
>  enum bfqq_state_flags {

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 7/8] block, bfq: inject I/O to underutilized actuators
  2022-11-03 16:26 ` [PATCH V6 7/8] block, bfq: inject I/O to underutilized actuators Paolo Valente
@ 2022-11-21  1:17   ` Damien Le Moal
  0 siblings, 0 replies; 31+ messages in thread
From: Damien Le Moal @ 2022-11-21  1:17 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, arie.vanderhoeven, rory.c.chen, Davide Zini

On 11/4/22 01:26, Paolo Valente wrote:
> From: Davide Zini <davidezini2@gmail.com>
> 
> The main service scheme of BFQ for sync I/O is serving one sync
> bfq_queue at a time, for a while. In particular, BFQ enforces this
> scheme when it deems doing so necessary to boost throughput or
> to preserve service guarantees. Unfortunately, when BFQ enforces
> this policy, only one actuator at a time gets served for a while,
> because each bfq_queue contains I/O only for one actuator. The
> other actuators may remain underutilized.
> 
> Actually, BFQ may serve (inject) extra I/O, taken from other
> bfq_queues, in parallel with that of the in-service queue. This
> injection mechanism may provide the ground for dealing also with
> the above actuator-underutilization problem. Yet BFQ does not take
> the actuator load into account when choosing which queue to pick
> extra I/O from. In addition, BFQ may happen to inject extra I/O
> only when the in-service queue is temporarily empty.
> 
> In view of these facts, this commit extends the
> injection mechanism in such a way that the latter:
> (1) takes into account also the actuator load;
> (2) checks such a load on each dispatch, and injects I/O for an
>     underutilized actuator, if there is one and there is I/O for it.
> 
> To perform the check in (2), this commit introduces a load
> threshold, currently set to 4.  A linear scan of each actuator is
> performed, until an actuator is found for which the following two
> conditions hold: the load of the actuator is below the threshold,
> and there is at least one non-in-service queue that contains I/O
> for that actuator. If such a pair (actuator, queue) is found, then
> the head request of that queue is returned for dispatch, instead
> of the head request of the in-service queue.
> 
> We have set the threshold, empirically, to the minimum possible
> value for which an actuator is fully utilized, or close to being
> fully utilized. By doing so, injected I/O 'steals' as few
> drive-queue slots as possible from the in-service queue. This
> reduces as much as possible the probability that the service of
> I/O from the in-service bfq_queue gets delayed because of slot
> exhaustion, i.e., because all the slots of the drive queue are
> filled with I/O injected from other queues (NCQ provides for 32
> slots).
> 
> This new mechanism also counters actuator underutilization in the
> case of asymmetric configurations of bfq_queues. Namely if there
> are few bfq_queues containing I/O for some actuators and many
> bfq_queues containing I/O for other actuators. Or if the
> bfq_queues containing I/O for some actuators have lower weights
> than the other bfq_queues.
> 
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> Signed-off-by: Davide Zini <davidezini2@gmail.com>

A few nits below. Otherwise, looks ok.

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

> ---
>  block/bfq-cgroup.c  |   2 +-
>  block/bfq-iosched.c | 139 +++++++++++++++++++++++++++++++++-----------
>  block/bfq-iosched.h |  39 ++++++++++++-
>  block/bfq-wf2q.c    |   2 +-
>  4 files changed, 143 insertions(+), 39 deletions(-)
> 
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index d243c429d9c0..38ccfe55ad46 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -694,7 +694,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  		bfq_activate_bfqq(bfqd, bfqq);
>  	}
>  
> -	if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
> +	if (!bfqd->in_service_queue && !bfqd->tot_rq_in_driver)
>  		bfq_schedule_dispatch(bfqd);
>  	/* release extra ref taken above, bfqq may happen to be freed now */
>  	bfq_put_queue(bfqq);
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 106c8820cc5c..db91f1a651d3 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -2252,6 +2252,7 @@ static void bfq_add_request(struct request *rq)
>  
>  	bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
>  	bfqq->queued[rq_is_sync(rq)]++;
> +

Whitespace-only change; please drop it from this patch.

>  	/*
>  	 * Updating of 'bfqd->queued' is protected by 'bfqd->lock', however, it
>  	 * may be read without holding the lock in bfq_has_work().
> @@ -2297,9 +2298,9 @@ static void bfq_add_request(struct request *rq)
>  		 *   elapsed.
>  		 */
>  		if (bfqq == bfqd->in_service_queue &&
> -		    (bfqd->rq_in_driver == 0 ||
> +		    (bfqd->tot_rq_in_driver == 0 ||
>  		     (bfqq->last_serv_time_ns > 0 &&
> -		      bfqd->rqs_injected && bfqd->rq_in_driver > 0)) &&
> +		      bfqd->rqs_injected && bfqd->tot_rq_in_driver > 0)) &&
>  		    time_is_before_eq_jiffies(bfqq->decrease_time_jif +
>  					      msecs_to_jiffies(10))) {
>  			bfqd->last_empty_occupied_ns = ktime_get_ns();
> @@ -2323,7 +2324,7 @@ static void bfq_add_request(struct request *rq)
>  			 * will be set in case injection is performed
>  			 * on bfqq before rq is completed).
>  			 */
> -			if (bfqd->rq_in_driver == 0)
> +			if (bfqd->tot_rq_in_driver == 0)
>  				bfqd->rqs_injected = false;
>  		}
>  	}
> @@ -2421,15 +2422,18 @@ static sector_t get_sdist(sector_t last_pos, struct request *rq)
>  static void bfq_activate_request(struct request_queue *q, struct request *rq)
>  {
>  	struct bfq_data *bfqd = q->elevator->elevator_data;
> +	unsigned int act_idx = bfq_actuator_index(bfqd, rq->bio);
>  
> -	bfqd->rq_in_driver++;
> +	bfqd->tot_rq_in_driver++;
> +	bfqd->rq_in_driver[act_idx]++;
>  }
>  
>  static void bfq_deactivate_request(struct request_queue *q, struct request *rq)
>  {
>  	struct bfq_data *bfqd = q->elevator->elevator_data;
>  
> -	bfqd->rq_in_driver--;
> +	bfqd->tot_rq_in_driver--;
> +	bfqd->rq_in_driver[bfq_actuator_index(bfqd, rq->bio)]--;
>  }
>  #endif
>  
> @@ -2703,11 +2707,14 @@ void bfq_end_wr_async_queues(struct bfq_data *bfqd,
>  static void bfq_end_wr(struct bfq_data *bfqd)
>  {
>  	struct bfq_queue *bfqq;
> +	int i;
>  
>  	spin_lock_irq(&bfqd->lock);
>  
> -	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
> -		bfq_bfqq_end_wr(bfqq);
> +	for (i = 0; i < bfqd->num_actuators; i++) {
> +		list_for_each_entry(bfqq, &bfqd->active_list[i], bfqq_list)
> +			bfq_bfqq_end_wr(bfqq);
> +	}
>  	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
>  		bfq_bfqq_end_wr(bfqq);
>  	bfq_end_wr_async(bfqd);
> @@ -3651,13 +3658,13 @@ static void bfq_update_peak_rate(struct bfq_data *bfqd, struct request *rq)
>  	 * - start a new observation interval with this dispatch
>  	 */
>  	if (now_ns - bfqd->last_dispatch > 100*NSEC_PER_MSEC &&
> -	    bfqd->rq_in_driver == 0)
> +	    bfqd->tot_rq_in_driver == 0)
>  		goto update_rate_and_reset;
>  
>  	/* Update sampling information */
>  	bfqd->peak_rate_samples++;
>  
> -	if ((bfqd->rq_in_driver > 0 ||
> +	if ((bfqd->tot_rq_in_driver > 0 ||
>  		now_ns - bfqd->last_completion < BFQ_MIN_TT)
>  	    && !BFQ_RQ_SEEKY(bfqd, bfqd->last_position, rq))
>  		bfqd->sequential_samples++;
> @@ -3924,7 +3931,7 @@ static bool idling_needed_for_service_guarantees(struct bfq_data *bfqd,
>  	return (bfqq->wr_coeff > 1 &&
>  		(bfqd->wr_busy_queues <
>  		 tot_busy_queues ||
> -		 bfqd->rq_in_driver >=
> +		 bfqd->tot_rq_in_driver >=
>  		 bfqq->dispatched + 4)) ||
>  		bfq_asymmetric_scenario(bfqd, bfqq) ||

Nit: with all the line splits, this is really hard to read... Use the full
80 chars available please.
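E.g., the whole condition fits easily in 80 columns:

	return (bfqq->wr_coeff > 1 &&
		(bfqd->wr_busy_queues < tot_busy_queues ||
		 bfqd->tot_rq_in_driver >= bfqq->dispatched + 4)) ||
	       bfq_asymmetric_scenario(bfqd, bfqq) ||
	       tot_busy_queues == 1;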

>  		tot_busy_queues == 1;
> @@ -4696,6 +4703,7 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
>  {
>  	struct bfq_queue *bfqq, *in_serv_bfqq = bfqd->in_service_queue;
>  	unsigned int limit = in_serv_bfqq->inject_limit;
> +	int i;

Missing blank line after this declaration.

>  	/*
>  	 * If
>  	 * - bfqq is not weight-raised and therefore does not carry
> @@ -4727,7 +4735,7 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
>  		)
>  		limit = 1;
>  
> -	if (bfqd->rq_in_driver >= limit)
> +	if (bfqd->tot_rq_in_driver >= limit)
>  		return NULL;
>  
>  	/*
> @@ -4742,11 +4750,12 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
>  	 *   (and re-added only if it gets new requests, but then it
>  	 *   is assigned again enough budget for its new backlog).
>  	 */
> -	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
> -		if (!RB_EMPTY_ROOT(&bfqq->sort_list) &&
> -		    (in_serv_always_inject || bfqq->wr_coeff > 1) &&
> -		    bfq_serv_to_charge(bfqq->next_rq, bfqq) <=
> -		    bfq_bfqq_budget_left(bfqq)) {
> +	for (i = 0; i < bfqd->num_actuators; i++) {
> +		list_for_each_entry(bfqq, &bfqd->active_list[i], bfqq_list)
> +			if (!RB_EMPTY_ROOT(&bfqq->sort_list) &&
> +				(in_serv_always_inject || bfqq->wr_coeff > 1) &&
> +				bfq_serv_to_charge(bfqq->next_rq, bfqq) <=
> +				bfq_bfqq_budget_left(bfqq)) {
>  			/*
>  			 * Allow for only one large in-flight request
>  			 * on non-rotational devices, for the
> @@ -4771,22 +4780,69 @@ bfq_choose_bfqq_for_injection(struct bfq_data *bfqd)
>  			else
>  				limit = in_serv_bfqq->inject_limit;
>  
> -			if (bfqd->rq_in_driver < limit) {
> +			if (bfqd->tot_rq_in_driver < limit) {
>  				bfqd->rqs_injected = true;
>  				return bfqq;
>  			}
>  		}
> +	}
> +
> +	return NULL;
> +}
> +
> +static struct bfq_queue *
> +bfq_find_active_bfqq_for_actuator(struct bfq_data *bfqd,
> +						    int idx)

Why the line split?

> +{
> +	struct bfq_queue *bfqq = NULL;
> +
> +	if (bfqd->in_service_queue &&
> +	    bfqd->in_service_queue->actuator_idx == idx)
> +		return bfqd->in_service_queue;
> +
> +	list_for_each_entry(bfqq, &bfqd->active_list[idx], bfqq_list) {
> +		if (!RB_EMPTY_ROOT(&bfqq->sort_list) &&
> +			bfq_serv_to_charge(bfqq->next_rq, bfqq) <=
> +				bfq_bfqq_budget_left(bfqq)) {
> +			return bfqq;
> +		}
> +	}
>  
>  	return NULL;
>  }
>  
> +/*
> + * Perform a linear scan of each actuator, until an actuator is found
> + * for which the following two conditions hold: the load of the
> + * actuator is below the threshold (see comments on actuator_load_threshold
> + * for details), and there is a queue that contains I/O for that
> + * actuator. On success, return that queue.
> + */
> +static struct bfq_queue *
> +bfq_find_bfqq_for_underused_actuator(struct bfq_data *bfqd)
> +{
> +	int i;
> +
> +	for (i = 0 ; i < bfqd->num_actuators; i++)
> +		if (bfqd->rq_in_driver[i] < bfqd->actuator_load_threshold) {
> +			struct bfq_queue *bfqq =
> +				bfq_find_active_bfqq_for_actuator(bfqd, i);
> +
> +			if (bfqq)
> +				return bfqq;
> +		}

Given that the statement inside the for loop is multi-line, adding curly
brackets would be nice.
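I.e.:

	for (i = 0; i < bfqd->num_actuators; i++) {
		if (bfqd->rq_in_driver[i] < bfqd->actuator_load_threshold) {
			struct bfq_queue *bfqq =
				bfq_find_active_bfqq_for_actuator(bfqd, i);

			if (bfqq)
				return bfqq;
		}
	}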

> +
> +	return NULL;
> +}
> +
> +
>  /*
>   * Select a queue for service.  If we have a current queue in service,
>   * check whether to continue servicing it, or retrieve and set a new one.
>   */
>  static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
>  {
> -	struct bfq_queue *bfqq;
> +	struct bfq_queue *bfqq, *inject_bfqq;
>  	struct request *next_rq;
>  	enum bfqq_expiration reason = BFQQE_BUDGET_TIMEOUT;
>  
> @@ -4808,6 +4864,15 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
>  		goto expire;
>  
>  check_queue:
> +	/*
> +	 *  If some actuator is underutilized, but the in-service
> +	 *  queue does not contain I/O for that actuator, then try to
> +	 *  inject I/O for that actuator.
> +	 */
> +	inject_bfqq = bfq_find_bfqq_for_underused_actuator(bfqd);
> +	if (inject_bfqq && inject_bfqq != bfqq)
> +		return inject_bfqq;
> +
>  	/*
>  	 * This loop is rarely executed more than once. Even when it
>  	 * happens, it is much more convenient to re-execute this loop
> @@ -5163,11 +5228,11 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
>  
>  		/*
>  		 * We exploit the bfq_finish_requeue_request hook to
> -		 * decrement rq_in_driver, but
> +		 * decrement tot_rq_in_driver, but
>  		 * bfq_finish_requeue_request will not be invoked on
>  		 * this request. So, to avoid unbalance, just start
> -		 * this request, without incrementing rq_in_driver. As
> -		 * a negative consequence, rq_in_driver is deceptively
> +		 * this request, without incrementing tot_rq_in_driver. As
> +		 * a negative consequence, tot_rq_in_driver is deceptively
>  		 * lower than it should be while this request is in
>  		 * service. This may cause bfq_schedule_dispatch to be
>  		 * invoked uselessly.
> @@ -5176,7 +5241,7 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
>  		 * bfq_finish_requeue_request hook, if defined, is
>  		 * probably invoked also on this request. So, by
>  		 * exploiting this hook, we could 1) increment
> -		 * rq_in_driver here, and 2) decrement it in
> +		 * tot_rq_in_driver here, and 2) decrement it in
>  		 * bfq_finish_requeue_request. Such a solution would
>  		 * let the value of the counter be always accurate,
>  		 * but it would entail using an extra interface
> @@ -5205,7 +5270,7 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
>  	 * Of course, serving one request at a time may cause loss of
>  	 * throughput.
>  	 */
> -	if (bfqd->strict_guarantees && bfqd->rq_in_driver > 0)
> +	if (bfqd->strict_guarantees && bfqd->tot_rq_in_driver > 0)
>  		goto exit;
>  
>  	bfqq = bfq_select_queue(bfqd);
> @@ -5216,7 +5281,8 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
>  
>  	if (rq) {
>  inc_in_driver_start_rq:
> -		bfqd->rq_in_driver++;
> +		bfqd->rq_in_driver[bfqq->actuator_idx]++;
> +		bfqd->tot_rq_in_driver++;
>  start_rq:
>  		rq->rq_flags |= RQF_STARTED;
>  	}
> @@ -6289,7 +6355,7 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
>  	struct bfq_queue *bfqq = bfqd->in_service_queue;
>  
>  	bfqd->max_rq_in_driver = max_t(int, bfqd->max_rq_in_driver,
> -				       bfqd->rq_in_driver);
> +				       bfqd->tot_rq_in_driver);
>  
>  	if (bfqd->hw_tag == 1)
>  		return;
> @@ -6300,7 +6366,7 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
>  	 * sum is not exact, as it's not taking into account deactivated
>  	 * requests.
>  	 */
> -	if (bfqd->rq_in_driver + bfqd->queued <= BFQ_HW_QUEUE_THRESHOLD)
> +	if (bfqd->tot_rq_in_driver + bfqd->queued <= BFQ_HW_QUEUE_THRESHOLD)
>  		return;
>  
>  	/*
> @@ -6311,7 +6377,7 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
>  	if (bfqq && bfq_bfqq_has_short_ttime(bfqq) &&
>  	    bfqq->dispatched + bfqq->queued[0] + bfqq->queued[1] <
>  	    BFQ_HW_QUEUE_THRESHOLD &&
> -	    bfqd->rq_in_driver < BFQ_HW_QUEUE_THRESHOLD)
> +	    bfqd->tot_rq_in_driver < BFQ_HW_QUEUE_THRESHOLD)
>  		return;
>  
>  	if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
> @@ -6332,7 +6398,8 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
>  
>  	bfq_update_hw_tag(bfqd);
>  
> -	bfqd->rq_in_driver--;
> +	bfqd->rq_in_driver[bfqq->actuator_idx]--;
> +	bfqd->tot_rq_in_driver--;
>  	bfqq->dispatched--;
>  
>  	if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
> @@ -6451,7 +6518,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
>  					BFQQE_NO_MORE_REQUESTS);
>  	}
>  
> -	if (!bfqd->rq_in_driver)
> +	if (!bfqd->tot_rq_in_driver)
>  		bfq_schedule_dispatch(bfqd);
>  }
>  
> @@ -6582,13 +6649,13 @@ static void bfq_update_inject_limit(struct bfq_data *bfqd,
>  	 * conditions to do it, or we can lower the last base value
>  	 * computed.
>  	 *
> -	 * NOTE: (bfqd->rq_in_driver == 1) means that there is no I/O
> +	 * NOTE: (bfqd->tot_rq_in_driver == 1) means that there is no I/O
>  	 * request in flight, because this function is in the code
>  	 * path that handles the completion of a request of bfqq, and,
>  	 * in particular, this function is executed before
> -	 * bfqd->rq_in_driver is decremented in such a code path.
> +	 * bfqd->tot_rq_in_driver is decremented in such a code path.
>  	 */
> -	if ((bfqq->last_serv_time_ns == 0 && bfqd->rq_in_driver == 1) ||
> +	if ((bfqq->last_serv_time_ns == 0 && bfqd->tot_rq_in_driver == 1) ||
>  	    tot_time_ns < bfqq->last_serv_time_ns) {
>  		if (bfqq->last_serv_time_ns == 0) {
>  			/*
> @@ -6598,7 +6665,7 @@ static void bfq_update_inject_limit(struct bfq_data *bfqd,
>  			bfqq->inject_limit = max_t(unsigned int, 1, old_limit);
>  		}
>  		bfqq->last_serv_time_ns = tot_time_ns;
> -	} else if (!bfqd->rqs_injected && bfqd->rq_in_driver == 1)
> +	} else if (!bfqd->rqs_injected && bfqd->tot_rq_in_driver == 1)
>  		/*
>  		 * No I/O injected and no request still in service in
>  		 * the drive: these are the exact conditions for
> @@ -7239,7 +7306,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>  	bfqd->queue_weights_tree = RB_ROOT_CACHED;
>  	bfqd->num_groups_with_pending_reqs = 0;
>  
> -	INIT_LIST_HEAD(&bfqd->active_list);
> +	INIT_LIST_HEAD(&bfqd->active_list[0]);
> +	INIT_LIST_HEAD(&bfqd->active_list[1]);
>  	INIT_LIST_HEAD(&bfqd->idle_list);
>  	INIT_HLIST_HEAD(&bfqd->burst_list);
>  
> @@ -7284,6 +7352,9 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>  		ref_wr_duration[blk_queue_nonrot(bfqd->queue)];
>  	bfqd->peak_rate = ref_rate[blk_queue_nonrot(bfqd->queue)] * 2 / 3;
>  
> +	/* see comments on the definition of next field inside bfq_data */
> +	bfqd->actuator_load_threshold = 4;
> +
>  	spin_lock_init(&bfqd->lock);
>  
>  	/*
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index 90130a893c8f..adb3ba6a9d90 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -586,7 +586,12 @@ struct bfq_data {
>  	/* number of queued requests */
>  	int queued;
>  	/* number of requests dispatched and waiting for completion */
> -	int rq_in_driver;
> +	int tot_rq_in_driver;
> +	/*
> +	 * number of requests dispatched and waiting for completion
> +	 * for each actuator
> +	 */
> +	int rq_in_driver[BFQ_MAX_ACTUATORS];
>  
>  	/* true if the device is non rotational and performs queueing */
>  	bool nonrot_with_queueing;
> @@ -680,8 +685,13 @@ struct bfq_data {
>  	/* maximum budget allotted to a bfq_queue before rescheduling */
>  	int bfq_max_budget;
>  
> -	/* list of all the bfq_queues active on the device */
> -	struct list_head active_list;
> +	/*
> +	 * List of all the bfq_queues active for a specific actuator
> +	 * on the device. Keeping active queues separate on a
> +	 * per-actuator basis helps implementing per-actuator
> +	 * injection more efficiently.
> +	 */
> +	struct list_head active_list[BFQ_MAX_ACTUATORS];
>  	/* list of all the bfq_queues idle on the device */
>  	struct list_head idle_list;
>  
> @@ -816,6 +826,29 @@ struct bfq_data {
>  	 * in this device.
>  	 */
>  	struct blk_independent_access_range ia_ranges[BFQ_MAX_ACTUATORS];
> +
> +	/*
> +	 * If the number of I/O requests queued in the device for a
> +	 * given actuator is below next threshold, then the actuator
> +	 * is deemed as underutilized. If this condition is found to
> +	 * hold for some actuator upon a dispatch, but (i) the
> +	 * in-service queue does not contain I/O for that actuator,
> +	 * while (ii) some other queue does contain I/O for that
> +	 * actuator, then the head I/O request of the latter queue is
> +	 * returned (injected), instead of the head request of the
> +	 * currently in-service queue.
> +	 *
> +	 * We set the threshold, empirically, to the minimum possible
> +	 * value for which an actuator is fully utilized, or close to
> +	 * being fully utilized. By doing so, injected I/O 'steals' as
> +	 * few drive-queue slots as possible from the in-service
> +	 * queue. This reduces as much as possible the probability
> +	 * that the service of I/O from the in-service bfq_queue gets
> +	 * delayed because of slot exhaustion, i.e., because all the
> +	 * slots of the drive queue are filled with I/O injected from
> +	 * other queues (NCQ provides for 32 slots).
> +	 */
> +	unsigned int actuator_load_threshold;
>  };
>  
>  enum bfqq_state_flags {
> diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
> index 8fc3da4c23bb..ec0273e2cd07 100644
> --- a/block/bfq-wf2q.c
> +++ b/block/bfq-wf2q.c
> @@ -477,7 +477,7 @@ static void bfq_active_insert(struct bfq_service_tree *st,
>  	bfqd = (struct bfq_data *)bfqg->bfqd;
>  #endif
>  	if (bfqq)
> -		list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
> +		list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list[bfqq->actuator_idx]);
>  #ifdef CONFIG_BFQ_GROUP_IOSCHED
>  	if (bfqg != bfqd->root_group)
>  		bfqg->active_entities++;

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 1/8] block, bfq: split sync bfq_queues on a per-actuator basis
  2022-11-21  0:18   ` Damien Le Moal
@ 2022-11-26 16:28     ` Paolo Valente
  2022-11-28 14:32       ` Jens Axboe
  2022-12-06  8:04     ` Paolo Valente
  1 sibling, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-11-26 16:28 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Gabriele Felici, Carmine Zaccagnino



> On 21 Nov 2022, at 01:18, Damien Le Moal <damien.lemoal@opensource.wdc.com> wrote:
> 
> On 11/4/22 01:26, Paolo Valente wrote:
>> Single-LUN multi-actuator SCSI drives, as well as all multi-actuator
>> SATA drives appear as a single device to the I/O subsystem [1].  Yet
>> they address commands to different actuators internally, as a function
>> of Logical Block Addressing (LBAs). A given sector is reachable by
>> only one of the actuators. For example, Seagate’s Serial Advanced
>> Technology Attachment (SATA) version contains two actuators and maps
>> the lower half of the SATA LBA space to the lower actuator and the
>> upper half to the upper actuator.
>> 
>> Evidently, to fully utilize actuators, no actuator must be left idle
>> or underutilized while there is pending I/O for it. The block layer
>> must somehow control the load of each actuator individually. This
>> commit lays the ground for allowing BFQ to provide such a per-actuator
>> control.
>> 
>> BFQ associates an I/O-request sync bfq_queue with each process doing
>> synchronous I/O, or with a group of processes, in case of queue
>> merging. Then BFQ serves one bfq_queue at a time. While in service, a
>> bfq_queue is emptied in request-position order. Yet the same process,
>> or group of processes, may generate I/O for different actuators. In
>> this case, different streams of I/O (each for a different actuator)
>> get all inserted into the same sync bfq_queue. So there is basically
>> no individual control on when each stream is served, i.e., on when the
>> I/O requests of the stream are picked from the bfq_queue and
>> dispatched to the drive.
>> 
>> This commit enables BFQ to control the service of each actuator
>> individually for synchronous I/O, by simply splitting each sync
>> bfq_queue into N queues, one for each actuator. In other words, a sync
>> bfq_queue is now associated to a pair (process, actuator). As a
>> consequence of this split, the per-queue proportional-share policy
>> implemented by BFQ will guarantee that the sync I/O generated for each
>> actuator, by each process, receives its fair share of service.
>> 
>> This is just a preparatory patch. If the I/O of the same process
>> happens to be sent to different queues, then each of these queues may
>> undergo queue merging. To handle this event, the bfq_io_cq data
>> structure must be properly extended. In addition, stable merging must
>> be disabled to avoid loss of control on individual actuators. Finally,
>> also async queues must be split. These issues are described in detail
>> and addressed in next commits. As for this commit, although multiple
>> per-process bfq_queues are provided, the I/O of each process or group
>> of processes is still sent to only one queue, regardless of the
>> actuator the I/O is for. The forwarding to distinct bfq_queues will be
>> enabled after addressing the above issues.
>> 
>> [1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-optimizations-for-multi-actuator-sata-hard-drives/
>> 
>> Signed-off-by: Gabriele Felici <felicigb@gmail.com>
>> Signed-off-by: Carmine Zaccagnino <carmine@carminezacc.com>
>> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
>> ---
>> block/bfq-cgroup.c  |  95 ++++++++++++++++------------
>> block/bfq-iosched.c | 151 +++++++++++++++++++++++++++++---------------
>> block/bfq-iosched.h |  51 ++++++++++++---
>> 3 files changed, 194 insertions(+), 103 deletions(-)
>> 
>> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
>> index 144bca006463..d243c429d9c0 100644
>> --- a/block/bfq-cgroup.c
>> +++ b/block/bfq-cgroup.c
>> @@ -700,6 +700,48 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>> 	bfq_put_queue(bfqq);
>> }
>> 
>> +static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
>> +			       struct bfq_queue *sync_bfqq,
>> +			       struct bfq_io_cq *bic,
>> +			       struct bfq_group *bfqg,
>> +			       unsigned int act_idx)
>> +{
>> +	if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
>> +		/* We are the only user of this bfqq, just move it */
>> +		if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
>> +			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
> 
> Nit: add a return here to avoid the "else" and save one tab indent level
> for the code below. That will be more pleasant to the eyes :)
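> Roughly (just a sketch, to show the shape):
> 
> 	static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
> 				       struct bfq_queue *sync_bfqq,
> 				       struct bfq_io_cq *bic,
> 				       struct bfq_group *bfqg,
> 				       unsigned int act_idx)
> 	{
> 		struct bfq_queue *bfqq;
> 
> 		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
> 			/* We are the only user of this bfqq, just move it */
> 			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
> 				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
> 			return;
> 		}
> 
> 		/*
> 		 * The queue was merged to a different queue. Check
> 		 * that the merge chain still belongs to the same
> 		 * cgroup.
> 		 */
> 		for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
> 			if (bfqq->entity.sched_data != &bfqg->sched_data)
> 				break;
> 		if (!bfqq)
> 			return;
> 
> 		/*
> 		 * Some queue changed cgroup: same detach logic as in
> 		 * the original "else" branch, one indent level left.
> 		 */
> 		bfq_put_cooperator(sync_bfqq);
> 		bfq_release_process_ref(bfqd, sync_bfqq);
> 		bic_set_bfqq(bic, NULL, 1, act_idx);
> 	}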
> 
>> +	} else {
>> +		struct bfq_queue *bfqq;
>> +
>> +		/*
>> +		 * The queue was merged to a different queue. Check
>> +		 * that the merge chain still belongs to the same
>> +		 * cgroup.
>> +		 */
>> +		for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
>> +			if (bfqq->entity.sched_data !=
>> +			    &bfqg->sched_data)
>> +				break;
>> +		if (bfqq) {
>> +			/*
>> +			 * Some queue changed cgroup so the merge is
>> +			 * not valid anymore. We cannot easily just
>> +			 * cancel the merge (by clearing new_bfqq) as
>> +			 * there may be other processes using this
>> +			 * queue and holding refs to all queues below
>> +			 * sync_bfqq->new_bfqq. Similarly if the merge
>> +			 * already happened, we need to detach from
>> +			 * bfqq now so that we cannot merge bio to a
>> +			 * request from the old cgroup.
>> +			 */
>> +			bfq_put_cooperator(sync_bfqq);
>> +			bfq_release_process_ref(bfqd, sync_bfqq);
>> +			bic_set_bfqq(bic, NULL, 1, act_idx);
>> +		}
>> +	}
>> +}
>> +
>> +
> 
> Double blank line.
> 
>> /**
>>  * __bfq_bic_change_cgroup - move @bic to @bfqg.
>>  * @bfqd: the queue descriptor.
>> @@ -714,53 +756,24 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
>> 				     struct bfq_io_cq *bic,
>> 				     struct bfq_group *bfqg)
>> {
>> -	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
>> -	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
>> 	struct bfq_entity *entity;
>> +	unsigned int act_idx;
>> 
>> -	if (async_bfqq) {
>> -		entity = &async_bfqq->entity;
>> -
>> -		if (entity->sched_data != &bfqg->sched_data) {
>> -			bic_set_bfqq(bic, NULL, 0);
>> -			bfq_release_process_ref(bfqd, async_bfqq);
>> -		}
>> -	}
>> +	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
>> +		struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0, act_idx);
>> +		struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1, act_idx);
> 
> The second argument of bic_to_bfqq() is a bool. So it would make more
> sense to use false/true instead of 0/1. That said, creating 2 helpers
> bic_to_async_bfqq() and bic_to_sync_bfqq() without that second argument
> would make the code more readable and less error prone I think.
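> Something like (sketch):
> 
> 	static struct bfq_queue *bic_to_sync_bfqq(struct bfq_io_cq *bic,
> 						  unsigned int actuator_idx)
> 	{
> 		return bic->bfqq[1][actuator_idx];
> 	}
> 
> 	static struct bfq_queue *bic_to_async_bfqq(struct bfq_io_cq *bic,
> 						   unsigned int actuator_idx)
> 	{
> 		return bic->bfqq[0][actuator_idx];
> 	}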
> 
>> 
>> -	if (sync_bfqq) {
>> -		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
>> -			/* We are the only user of this bfqq, just move it */
>> -			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
>> -				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
>> -		} else {
>> -			struct bfq_queue *bfqq;
>> +		if (async_bfqq) {
>> +			entity = &async_bfqq->entity;
>> 
>> -			/*
>> -			 * The queue was merged to a different queue. Check
>> -			 * that the merge chain still belongs to the same
>> -			 * cgroup.
>> -			 */
>> -			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
>> -				if (bfqq->entity.sched_data !=
>> -				    &bfqg->sched_data)
>> -					break;
>> -			if (bfqq) {
>> -				/*
>> -				 * Some queue changed cgroup so the merge is
>> -				 * not valid anymore. We cannot easily just
>> -				 * cancel the merge (by clearing new_bfqq) as
>> -				 * there may be other processes using this
>> -				 * queue and holding refs to all queues below
>> -				 * sync_bfqq->new_bfqq. Similarly if the merge
>> -				 * already happened, we need to detach from
>> -				 * bfqq now so that we cannot merge bio to a
>> -				 * request from the old cgroup.
>> -				 */
>> -				bfq_put_cooperator(sync_bfqq);
>> -				bfq_release_process_ref(bfqd, sync_bfqq);
>> -				bic_set_bfqq(bic, NULL, 1);
>> +			if (entity->sched_data != &bfqg->sched_data) {
> 
> You are referencing the entity variable only once here. So is it really
> needed?
> 
>> +				bic_set_bfqq(bic, NULL, 0, act_idx);
>> +				bfq_release_process_ref(bfqd, async_bfqq);
>> 			}
>> 		}
>> +
>> +		if (sync_bfqq)
>> +			bfq_sync_bfqq_move(bfqd, sync_bfqq, bic, bfqg, act_idx);
>> 	}
>> 
>> 	return bfqg;
>> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
>> index 7ea427817f7f..5c69394bbb65 100644
>> --- a/block/bfq-iosched.c
>> +++ b/block/bfq-iosched.c
>> @@ -377,14 +377,19 @@ static const unsigned long bfq_late_stable_merging = 600;
>> #define RQ_BIC(rq)		((struct bfq_io_cq *)((rq)->elv.priv[0]))
>> #define RQ_BFQQ(rq)		((rq)->elv.priv[1])
>> 
>> -struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync)
>> +struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic,
>> +			      bool is_sync,
>> +			      unsigned int actuator_idx)
> 
> Why the line split for the last argument?
> 
>> {
>> -	return bic->bfqq[is_sync];
>> +	return bic->bfqq[is_sync][actuator_idx];
> 
> See comment above. This is really broken I think. You should not use a
> bool as an array index. Write it as:
> 
> 	if (is_sync)
> 		return bic->bfqq[1][actuator_idx];
> 	return bic->bfqq[0][actuator_idx];
> 
> Or as suggested, create 2 helpers.
> 
>> }
>> 
>> static void bfq_put_stable_ref(struct bfq_queue *bfqq);
>> 
>> -void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
>> +void bic_set_bfqq(struct bfq_io_cq *bic,
>> +		  struct bfq_queue *bfqq,
>> +		  bool is_sync,
>> +		  unsigned int actuator_idx)
>> {
>> 	/*
>> 	 * If bfqq != NULL, then a non-stable queue merge between
>> @@ -399,7 +404,7 @@ void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
>> 	 * we cancel the stable merge if
>> 	 * bic->stable_merge_bfqq == bfqq.
>> 	 */
>> -	bic->bfqq[is_sync] = bfqq;
>> +	bic->bfqq[is_sync][actuator_idx] = bfqq;
> 
> Same comment as above. Using a bool as an array index feels very wrong to
> me. This seems very fragile/dependent on what the compiler does with the
> bool type.
> 
>> 
>> 	if (bfqq && bic->stable_merge_bfqq == bfqq) {
>> 		/*
>> @@ -672,9 +677,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
>> {
>> 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
>> 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
>> -	struct bfq_queue *bfqq = bic ? bic_to_bfqq(bic, op_is_sync(opf)) : NULL;
> 
> Given that you are repeating this same pattern many times, the test for
> bic != NULL should go into the bic_to_bfqq() helper.
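> E.g. (sketch):
> 
> 	struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
> 				      unsigned int actuator_idx)
> 	{
> 		if (!bic)
> 			return NULL;
> 		if (is_sync)
> 			return bic->bfqq[1][actuator_idx];
> 		return bic->bfqq[0][actuator_idx];
> 	}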
> 
>> 	int depth;
>> 	unsigned limit = data->q->nr_requests;
>> +	unsigned int act_idx;
>> 
>> 	/* Sync reads have full depth available */
>> 	if (op_is_sync(opf) && !op_is_write(opf)) {
>> @@ -684,14 +689,21 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
>> 		limit = (limit * depth) >> bfqd->full_depth_shift;
>> 	}
>> 
>> -	/*
>> -	 * Does queue (or any parent entity) exceed number of requests that
>> -	 * should be available to it? Heavily limit depth so that it cannot
>> -	 * consume more available requests and thus starve other entities.
>> -	 */
>> -	if (bfqq && bfqq_request_over_limit(bfqq, limit))
>> -		depth = 1;
>> +	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
>> +		struct bfq_queue *bfqq =
>> +			bic ? bic_to_bfqq(bic, op_is_sync(opf), act_idx) : NULL;
>> 
>> +		/*
>> +		 * Does queue (or any parent entity) exceed number of
>> +		 * requests that should be available to it? Heavily
>> +		 * limit depth so that it cannot consume more
>> +		 * available requests and thus starve other entities.
>> +		 */
>> +		if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
>> +			depth = 1;
>> +			break;
>> +		}
>> +	}
>> 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
>> 		__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
>> 	if (depth)
>> @@ -2142,7 +2154,7 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>> 	 * We reset waker detection logic also if too much time has passed
>>  	 * since the first detection. If wakeups are rare, pointless idling
>> 	 * doesn't hurt throughput that much. The condition below makes sure
>> -	 * we do not uselessly idle blocking waker in more than 1/64 cases. 
>> +	 * we do not uselessly idle blocking waker in more than 1/64 cases.
>> 	 */
>> 	if (bfqd->last_completed_rq_bfqq !=
>> 	    bfqq->tentative_waker_bfqq ||
>> @@ -2454,6 +2466,16 @@ static void bfq_remove_request(struct request_queue *q,
>> 
>> }
>> 
>> +/* get the index of the actuator that will serve bio */
> 
> Generally, function comments use the multi-line comment style...
> 
> /*
> * Get the index of the actuator that will serve @bio.
> */
> 
>> +static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
>> +{
>> +	/*
>> +	 * Multi-actuator support not complete yet, so always return 0
>> +	 * for the moment.
>> +	 */
> 
> Nit: You did comment about this in the commit message. So I do not think
> having this comment here helps.
> 
> Note that you could have a prep patch before this patch that gathers the
> actuator information for the device, and only does that. So this function
> here could actually have meat instead of being empty. But that may make
> the entire series more complicated, so not a big deal.
> 

Yep.  The idea here is to exercise the multi-actuator functionality
only after setting up all the needed pieces.  To keep that
functionality off until it can work correctly, we make
bfq_actuator_index() return 0. Otherwise I would have to invent a
different way to keep multi-actuator support off, and I don't see a
clean alternative.

Apart from this point, I'll apply al your suggestions and send a V7.

Thanks,
Paolo

>> +	return 0;
>> +}
>> +
>> static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>> 		unsigned int nr_segs)
>> {
>> @@ -2478,7 +2500,8 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>> 		 */
>> 		bfq_bic_update_cgroup(bic, bio);
>> 
>> -		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
>> +		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf),
>> +					     bfq_actuator_index(bfqd, bio));
>> 	} else {
>> 		bfqd->bio_bfqq = NULL;
>> 	}
>> @@ -3174,7 +3197,7 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
>> 	/*
>> 	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
>> 	 */
>> -	bic_set_bfqq(bic, new_bfqq, 1);
>> +	bic_set_bfqq(bic, new_bfqq, 1, bfqq->actuator_idx);
>> 	bfq_mark_bfqq_coop(new_bfqq);
>> 	/*
>> 	 * new_bfqq now belongs to at least two bics (it is a shared queue):
>> @@ -4808,11 +4831,12 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
>> 	 */
>> 	if (bfq_bfqq_wait_request(bfqq) ||
>> 	    (bfqq->dispatched != 0 && bfq_better_to_idle(bfqq))) {
>> +		unsigned int act_idx = bfqq->actuator_idx;
>> 		struct bfq_queue *async_bfqq =
>> -			bfqq->bic && bfqq->bic->bfqq[0] &&
>> -			bfq_bfqq_busy(bfqq->bic->bfqq[0]) &&
>> -			bfqq->bic->bfqq[0]->next_rq ?
>> -			bfqq->bic->bfqq[0] : NULL;
>> +			bfqq->bic && bfqq->bic->bfqq[0][act_idx] &&
>> +			bfq_bfqq_busy(bfqq->bic->bfqq[0][act_idx]) &&
>> +			bfqq->bic->bfqq[0][act_idx]->next_rq ?
>> +			bfqq->bic->bfqq[0][act_idx] : NULL;
> 
> Missing a blank line before this hunk, after the declaration. And I
> personally think that this is totally unreadable. Using a plain if/else
> would improve that.
> 
>> 		struct bfq_queue *blocked_bfqq =
>> 			!hlist_empty(&bfqq->woken_list) ?
>> 			container_of(bfqq->woken_list.first,
>> @@ -4904,7 +4928,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
>> 		    icq_to_bic(async_bfqq->next_rq->elv.icq) == bfqq->bic &&
>> 		    bfq_serv_to_charge(async_bfqq->next_rq, async_bfqq) <=
>> 		    bfq_bfqq_budget_left(async_bfqq))
>> -			bfqq = bfqq->bic->bfqq[0];
>> +			bfqq = bfqq->bic->bfqq[0][act_idx];
>> 		else if (bfqq->waker_bfqq &&
>> 			   bfq_bfqq_busy(bfqq->waker_bfqq) &&
>> 			   bfqq->waker_bfqq->next_rq &&
>> @@ -5365,49 +5389,59 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
>> 	bfq_release_process_ref(bfqd, bfqq);
>> }
>> 
>> -static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
>> +static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic,
>> +			      bool is_sync,
>> +			      unsigned int actuator_idx)
> 
> Why the line split here?
> 
>> {
>> -	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
>> +	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, actuator_idx);
>> 	struct bfq_data *bfqd;
>> 
>> 	if (bfqq)
>> 		bfqd = bfqq->bfqd; /* NULL if scheduler already exited */
>> 
>> 	if (bfqq && bfqd) {
>> -		unsigned long flags;
>> -
>> -		spin_lock_irqsave(&bfqd->lock, flags);
>> 		bfqq->bic = NULL;
>> 		bfq_exit_bfqq(bfqd, bfqq);
>> -		bic_set_bfqq(bic, NULL, is_sync);
>> -		spin_unlock_irqrestore(&bfqd->lock, flags);
>> +		bic_set_bfqq(bic, NULL, is_sync, actuator_idx);
>> 	}
>> }
>> 
>> static void bfq_exit_icq(struct io_cq *icq)
>> {
>> 	struct bfq_io_cq *bic = icq_to_bic(icq);
>> +	struct bfq_data *bfqd = bic_to_bfqd(bic);
>> +	unsigned long flags;
>> +	unsigned int act_idx;
>> +	unsigned int num_actuators;
>> 
>> -	if (bic->stable_merge_bfqq) {
>> -		struct bfq_data *bfqd = bic->stable_merge_bfqq->bfqd;
>> -
>> +	/*
>> +	 * bfqd is NULL if scheduler already exited, and in that case
>> +	 * this is the last time these queues are accessed.
>> +	 */
>> +	if (bfqd) {
>> +		spin_lock_irqsave(&bfqd->lock, flags);
>> +		num_actuators = bfqd->num_actuators;
>> +	} else {
>> 		/*
>> -		 * bfqd is NULL if scheduler already exited, and in
>> -		 * that case this is the last time bfqq is accessed.
>> +		 * bfqd->num_actuators not available any longer, cycle
>> +		 * over all possible per-actuator bfqqs in next
>> +		 * loop. We rely on bic being zeroed on creation, and
>> +		 * therefore on its unused per-actuator fields being
>> +		 * NULL.
>> 		 */
>> -		if (bfqd) {
>> -			unsigned long flags;
>> +		num_actuators = BFQ_MAX_ACTUATORS;
>> +	}
> 
> Why not simply initialize num_actuators to BFQ_MAX_ACTUATORS when it is
> declared and drop that "else" above?
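> I.e.:
> 
> 	unsigned int num_actuators = BFQ_MAX_ACTUATORS;
> 
> and then:
> 
> 	if (bfqd) {
> 		spin_lock_irqsave(&bfqd->lock, flags);
> 		num_actuators = bfqd->num_actuators;
> 	}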
> 
>> 
>> -			spin_lock_irqsave(&bfqd->lock, flags);
>> -			bfq_put_stable_ref(bic->stable_merge_bfqq);
>> -			spin_unlock_irqrestore(&bfqd->lock, flags);
>> -		} else {
>> -			bfq_put_stable_ref(bic->stable_merge_bfqq);
>> -		}
>> +	if (bic->stable_merge_bfqq)
>> +		bfq_put_stable_ref(bic->stable_merge_bfqq);
>> +
>> +	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
>> +		bfq_exit_icq_bfqq(bic, true, act_idx);
>> +		bfq_exit_icq_bfqq(bic, false, act_idx);
>> 	}
>> 
>> -	bfq_exit_icq_bfqq(bic, true);
>> -	bfq_exit_icq_bfqq(bic, false);
>> +	if (bfqd)
>> +		spin_unlock_irqrestore(&bfqd->lock, flags);
>> }
>> 
>> /*
>> @@ -5484,23 +5518,25 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
>> 
>> 	bic->ioprio = ioprio;
>> 
>> -	bfqq = bic_to_bfqq(bic, false);
>> +	bfqq = bic_to_bfqq(bic, false, bfq_actuator_index(bfqd, bio));
>> 	if (bfqq) {
>> 		bfq_release_process_ref(bfqd, bfqq);
>> 		bfqq = bfq_get_queue(bfqd, bio, false, bic, true);
>> -		bic_set_bfqq(bic, bfqq, false);
>> +		bic_set_bfqq(bic, bfqq, false, bfq_actuator_index(bfqd, bio));
>> 	}
>> 
>> -	bfqq = bic_to_bfqq(bic, true);
>> +	bfqq = bic_to_bfqq(bic, true, bfq_actuator_index(bfqd, bio));
>> 	if (bfqq)
>> 		bfq_set_next_ioprio_data(bfqq, bic);
>> }
>> 
>> static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>> -			  struct bfq_io_cq *bic, pid_t pid, int is_sync)
>> +			  struct bfq_io_cq *bic, pid_t pid, int is_sync,
>> +			  unsigned int act_idx)
>> {
>> 	u64 now_ns = ktime_get_ns();
>> 
>> +	bfqq->actuator_idx = act_idx;
>> 	RB_CLEAR_NODE(&bfqq->entity.rb_node);
>> 	INIT_LIST_HEAD(&bfqq->fifo);
>> 	INIT_HLIST_NODE(&bfqq->burst_list_node);
>> @@ -5739,6 +5775,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
>> 	struct bfq_group *bfqg;
>> 
>> 	bfqg = bfq_bio_bfqg(bfqd, bio);
>> +
> 
> Whitespace-only change; please drop it from this patch.
> 
>> 	if (!is_sync) {
>> 		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
>> 						  ioprio);
>> @@ -5753,7 +5790,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
>> 
>> 	if (bfqq) {
>> 		bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
>> -			      is_sync);
>> +			      is_sync, bfq_actuator_index(bfqd, bio));
>> 		bfq_init_entity(&bfqq->entity, bfqg);
>> 		bfq_log_bfqq(bfqd, bfqq, "allocated");
>> 	} else {
>> @@ -6068,7 +6105,8 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
>> 		 * then complete the merge and redirect it to
>> 		 * new_bfqq.
>> 		 */
>> -		if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
>> +		if (bic_to_bfqq(RQ_BIC(rq), 1,
>> +				bfq_actuator_index(bfqd, rq->bio)) == bfqq)
>> 			bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
>> 					bfqq, new_bfqq);
>> 
>> @@ -6622,7 +6660,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
>> 		return bfqq;
>> 	}
>> 
>> -	bic_set_bfqq(bic, NULL, 1);
>> +	bic_set_bfqq(bic, NULL, 1, bfqq->actuator_idx);
>> 
>> 	bfq_put_cooperator(bfqq);
>> 
>> @@ -6636,7 +6674,8 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>> 						   bool split, bool is_sync,
>> 						   bool *new_queue)
>> {
>> -	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
>> +	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
>> +	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
>> 
>> 	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
>> 		return bfqq;
>> @@ -6648,7 +6687,7 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>> 		bfq_put_queue(bfqq);
>> 	bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, split);
>> 
>> -	bic_set_bfqq(bic, bfqq, is_sync);
>> +	bic_set_bfqq(bic, bfqq, is_sync, act_idx);
>> 	if (split && is_sync) {
>> 		if ((bic->was_in_burst_list && bfqd->large_burst) ||
>> 		    bic->saved_in_large_burst)
>> @@ -7090,8 +7129,10 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>> 	 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
>> 	 * Grab a permanent reference to it, so that the normal code flow
>> 	 * will not attempt to free it.
>> +	 * Set zero as actuator index: we will pretend that
>> +	 * all I/O requests are for the same actuator.
>> 	 */
>> -	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
>> +	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0, 0);
>> 	bfqd->oom_bfqq.ref++;
>> 	bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
>> 	bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
>> @@ -7110,6 +7151,12 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>> 
>> 	bfqd->queue = q;
>> 
>> +	/*
>> +	 * Multi-actuator support not complete yet, default to single
>> +	 * actuator for the moment.
>> +	 */
>> +	bfqd->num_actuators = 1;
> 
> This should be the default anyway, so is that comment really necessary?
> 
>> +
>> 	INIT_LIST_HEAD(&bfqd->dispatch);
>> 
>> 	hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC,
>> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
>> index 71f721670ab6..bfcbd8ea9000 100644
>> --- a/block/bfq-iosched.h
>> +++ b/block/bfq-iosched.h
>> @@ -33,6 +33,14 @@
>>  */
>> #define BFQ_SOFTRT_WEIGHT_FACTOR	100
>> 
>> +/*
>> + * Maximum number of actuators supported. This constant is used simply
>> + * to define the size of the static array that will contain
>> + * per-actuator data. The current value is hopefully a good upper
>> + * bound to the possible number of actuators of any actual drive.
>> + */
>> +#define BFQ_MAX_ACTUATORS 32
> 
> That seems really excessive... What about 4 or 8?
> 
>> +
>> struct bfq_entity;
>> 
>> /**
>> @@ -225,12 +233,14 @@ struct bfq_ttime {
>>  * struct bfq_queue - leaf schedulable entity.
>>  *
>>  * A bfq_queue is a leaf request queue; it can be associated with an
>> - * io_context or more, if it  is  async or shared  between  cooperating
>> - * processes. @cgroup holds a reference to the cgroup, to be sure that it
>> - * does not disappear while a bfqq still references it (mostly to avoid
>> - * races between request issuing and task migration followed by cgroup
>> - * destruction).
>> - * All the fields are protected by the queue lock of the containing bfqd.
>> + * io_context or more, if it is async or shared between cooperating
>> + * processes. Besides, it contains I/O requests for only one actuator
>> + * (an io_context is associated with a different bfq_queue for each
>> + * actuator it generates I/O for). @cgroup holds a reference to the
>> + * cgroup, to be sure that it does not disappear while a bfqq still
>> + * references it (mostly to avoid races between request issuing and
>> + * task migration followed by cgroup destruction).  All the fields are
>> + * protected by the queue lock of the containing bfqd.
>>  */
>> struct bfq_queue {
>> 	/* reference counter */
>> @@ -395,6 +405,9 @@ struct bfq_queue {
>> 	 * the woken queues when this queue exits.
>> 	 */
>> 	struct hlist_head woken_list;
>> +
>> +	/* index of the actuator this queue is associated with */
>> +	unsigned int actuator_idx;
>> };
>> 
>> /**
>> @@ -403,8 +416,17 @@ struct bfq_queue {
>> struct bfq_io_cq {
>> 	/* associated io_cq structure */
>> 	struct io_cq icq; /* must be the first member */
>> -	/* array of two process queues, the sync and the async */
>> -	struct bfq_queue *bfqq[2];
>> +	/*
>> +	 * Matrix of associated process queues: first row for async
>> +	 * queues, second row sync queues. Each row contains one
>> +	 * column for each actuator. An I/O request generated by the
>> +	 * process is inserted into the queue pointed by bfqq[i][j] if
>> +	 * the request is to be served by the j-th actuator of the
>> +	 * drive, where i==0 or i==1, depending on whether the request
>> +	 * is async or sync. So there is a distinct queue for each
>> +	 * actuator.
>> +	 */
>> +	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
>> 	/* per (request_queue, blkcg) ioprio */
>> 	int ioprio;
>> #ifdef CONFIG_BFQ_GROUP_IOSCHED
>> @@ -768,6 +790,13 @@ struct bfq_data {
>> 	 */
>> 	unsigned int word_depths[2][2];
>> 	unsigned int full_depth_shift;
>> +
>> +	/*
>> +	 * Number of independent actuators. This is equal to 1 in
>> +	 * case of single-actuator drives.
>> +	 */
>> +	unsigned int num_actuators;
>> +
>> };
>> 
>> enum bfqq_state_flags {
>> @@ -964,8 +993,10 @@ struct bfq_group {
>> 
>> extern const int bfq_timeout;
>> 
>> -struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync);
>> -void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
>> +struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
>> +				unsigned int actuator_idx);
>> +void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync,
>> +				unsigned int actuator_idx);
>> struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
>> void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
>> void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
> 
> -- 
> Damien Le Moal
> Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 1/8] block, bfq: split sync bfq_queues on a per-actuator basis
  2022-11-26 16:28     ` Paolo Valente
@ 2022-11-28 14:32       ` Jens Axboe
  0 siblings, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2022-11-28 14:32 UTC (permalink / raw)
  To: Paolo Valente, Damien Le Moal
  Cc: linux-block, linux-kernel, Arie van der Hoeven, Rory Chen,
	Gabriele Felici, Carmine Zaccagnino

On 11/26/22 9:28 AM, Paolo Valente wrote:

[snip walls of quoted, useless text]

Guys, can you please both start properly quoting emails? Leave the bit
in place you are responding to, cut the rest. It's almost impossible to
find the signal in all of that noise.

This isn't an isolated incident, which is why I'm bringing it up. The
context is there in previous emails should anyone need it. Not cutting
any of the text when replying is just lazy and puts the onus on the
reader on digging through the haystack to find the needle.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 4/8] block, bfq: turn bfqq_data into an array in bfq_io_cq
  2022-11-21  0:39   ` Damien Le Moal
@ 2022-12-06  8:00     ` Paolo Valente
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Valente @ 2022-12-06  8:00 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Gabriele Felici, Gianmarco Lusvardi, Giulio Barabino,
	Emiliano Maccaferri



> On 21 Nov 2022, at 01:39, Damien Le Moal <damien.lemoal@opensource.wdc.com> wrote:
...
>> 
>> 			bfqq = bfq_split_bfqq(bic, bfqq);
>> 			split = true;
>> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
>> index f2e8ab91951c..e27897d66a0f 100644
>> --- a/block/bfq-iosched.h
>> +++ b/block/bfq-iosched.h
>> @@ -416,7 +416,7 @@ struct bfq_queue {
>> struct bfq_iocq_bfqq_data {
>> 	/*
>> 	 * Snapshot of the has_short_time flag before merging; taken
>> -	 * to remember its value while the queue is merged, so as to
>> +	 * to remember its values while the queue is merged, so as to
>> 	 * be able to restore it in case of split.
>> 	 */
>> 	bool saved_has_short_ttime;
>> @@ -430,7 +430,7 @@ struct bfq_iocq_bfqq_data {
>> 	u64 saved_tot_idle_time;
>> 
>> 	/*
>> -	 * Same purpose as the previous fields for the value of the
>> +	 * Same purpose as the previous fields for the values of the
>> 	 * field keeping the queue's belonging to a large burst
>> 	 */
>> 	bool saved_in_large_burst;
>> @@ -493,8 +493,12 @@ struct bfq_io_cq {
>> 	uint64_t blkcg_serial_nr; /* the current blkcg serial */
>> #endif
>> 
>> -	/* persistent data for associated synchronous process queue */
>> -	struct bfq_iocq_bfqq_data bfqq_data;
>> +	/*
>> +	 * Persistent data for associated synchronous process queues
>> +	 * (one queue per actuator, see field bfqq above). In
>> +	 * particular, each of these queues may undergo a merge.
>> +	 */
>> +	struct bfq_iocq_bfqq_data bfqq_data[BFQ_MAX_ACTUATORS];
> 
> I wonder if packing this together with struct bfq_queue would be cleaner.
> That would avoid the 2 arrays you have in this struct. Something like this:
> 
> struct bfq_queue_data {
> 	struct bfq_queue 	*bfqq[2];
> 	struct bfq_iocq_bfqq_data iocq_data;
> }
> 
> struct bfq_io_cq {
> 	...
> 	struct bfq_queue_data bfqqd[BFQ_MAX_ACTUATORS];
> 	...
> }
> 
> Thinking aloud here. That may actually make the code more complicated.

I see your point, but yes, this change would entail one more
indirection when accessing queues from io contexts.

Apart from this, I have applied all of your other suggestions here.

Thanks,
Paolo

> 
>> 
>> 	unsigned int requests;	/* Number of requests this process has in flight */
>> };
> 
> -- 
> Damien Le Moal
> Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 1/8] block, bfq: split sync bfq_queues on a per-actuator basis
  2022-11-21  0:18   ` Damien Le Moal
  2022-11-26 16:28     ` Paolo Valente
@ 2022-12-06  8:04     ` Paolo Valente
  1 sibling, 0 replies; 31+ messages in thread
From: Paolo Valente @ 2022-12-06  8:04 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Gabriele Felici, Carmine Zaccagnino


> On 21 Nov 2022, at 01:18, Damien Le Moal <damien.lemoal@opensource.wdc.com> wrote:
...

> That said, creating 2 helpers
> bic_to_async_bfqq() and bic_to_sync_bfqq() without that second argument
> would make the code more readable and less error prone I think.
> 

I'm not sure yet about this, because, in some other places, the function is invoked like this:

bic_to_bfqq(bic, is_sync, act_idx);

where is_sync is a bool.  With your proposal, we would lose the
wrapper and have to turn the above line into an if/else like:
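	if (is_sync)
		bfqq = bic_to_sync_bfqq(bic, act_idx);
	else
		bfqq = bic_to_async_bfqq(bic, act_idx);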

So, if that is ok with you, I'd leave this further change for a later
patch, as it is somewhat unrelated to the main goal of this one.


...

>> 
>> 	if (bfqq && bic->stable_merge_bfqq == bfqq) {
>> 		/*
>> @@ -672,9 +677,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
>> {
>> 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
>> 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
>> -	struct bfq_queue *bfqq = bic ? bic_to_bfqq(bic, op_is_sync(opf)) : NULL;
> 
> Given that you are repeating this same pattern many times, the test for
> bic != NULL should go into the bic_to_bfqq() helper.

Actually, this would improve the code here, but it would add a
pointless NULL check to all the other invocations :(

As with your other replies, I have applied all the other suggestions
in this reply of yours.

Thanks,
Paolo



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-11-21  1:01   ` Damien Le Moal
@ 2022-12-06  8:06     ` Paolo Valente
  2022-12-06  8:29       ` Damien Le Moal
  0 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-12-06  8:06 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, linux-block, linux-kernel, arie.vanderhoeven,
	rory.c.chen, Federico Gavioli



> On 21 Nov 2022, at 02:01, Damien Le Moal <damien.lemoal@opensource.wdc.com> wrote:
> 

...

> 
>> }
>> 
>> static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>> @@ -7144,6 +7159,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>> {
>> 	struct bfq_data *bfqd;
>> 	struct elevator_queue *eq;
>> +	unsigned int i;
>> +	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
>> 
>> 	eq = elevator_alloc(q, e);
>> 	if (!eq)
>> @@ -7187,10 +7204,31 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>> 	bfqd->queue = q;
>> 
>> 	/*
>> -	 * Multi-actuator support not complete yet, default to single
>> -	 * actuator for the moment.
>> +	 * If the disk supports multiple actuators, we copy the independent
>> +	 * access ranges from the request queue structure.
>> 	 */
>> -	bfqd->num_actuators = 1;
>> +	spin_lock_irq(&q->queue_lock);
>> +	if (ia_ranges) {
>> +		/*
>> +		 * Check if the disk ia_ranges size exceeds the current bfq
>> +		 * actuator limit.
>> +		 */
>> +		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
>> +			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
>> +				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
>> +			pr_crit("Falling back to single actuator mode.\n");
>> +			bfqd->num_actuators = 0;
>> +		} else {
>> +			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
>> +
>> +			for (i = 0; i < bfqd->num_actuators; i++)
>> +				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
>> +		}
>> +	} else {
>> +		bfqd->num_actuators = 0;
> 
> That is very weird. The default should be 1 actuator.
> ia_ranges->nr_ia_ranges is 0 when the disk does not provide any range
> information, meaning it is a regular disk with a single actuator.

Actually, IIUC this assignment to 0 is done exactly when you say it
should be done, i.e., when the disk does not provide any range
information (ia_ranges is NULL). Am I missing something?

Once again, all other suggestions applied. I'm about to submit a V7.

Thanks,
Paolo


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-12-06  8:06     ` Paolo Valente
@ 2022-12-06  8:29       ` Damien Le Moal
  2022-12-06  8:41         ` Paolo Valente
  0 siblings, 1 reply; 31+ messages in thread
From: Damien Le Moal @ 2022-12-06  8:29 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, linux-kernel, arie.vanderhoeven,
	rory.c.chen, Federico Gavioli

On 12/6/22 17:06, Paolo Valente wrote:
> 
> 
>> Il giorno 21 nov 2022, alle ore 02:01, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
>>
> 
> ...
> 
>>
>>> }
>>>
>>> static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>>> @@ -7144,6 +7159,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>> {
>>> 	struct bfq_data *bfqd;
>>> 	struct elevator_queue *eq;
>>> +	unsigned int i;
>>> +	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
>>>
>>> 	eq = elevator_alloc(q, e);
>>> 	if (!eq)
>>> @@ -7187,10 +7204,31 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>> 	bfqd->queue = q;
>>>
>>> 	/*
>>> -	 * Multi-actuator support not complete yet, default to single
>>> -	 * actuator for the moment.
>>> +	 * If the disk supports multiple actuators, we copy the independent
>>> +	 * access ranges from the request queue structure.
>>> 	 */
>>> -	bfqd->num_actuators = 1;
>>> +	spin_lock_irq(&q->queue_lock);
>>> +	if (ia_ranges) {
>>> +		/*
>>> +		 * Check if the disk ia_ranges size exceeds the current bfq
>>> +		 * actuator limit.
>>> +		 */
>>> +		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
>>> +			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
>>> +				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
>>> +			pr_crit("Falling back to single actuator mode.\n");
>>> +			bfqd->num_actuators = 0;
>>> +		} else {
>>> +			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
>>> +
>>> +			for (i = 0; i < bfqd->num_actuators; i++)
>>> +				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
>>> +		}
>>> +	} else {
>>> +		bfqd->num_actuators = 0;
>>
>> That is very weird. The default should be 1 actuator.
>> ia_ranges->nr_ia_ranges is 0 when the disk does not provide any range
>> information, meaning it is a regular disk with a single actuator.
> 
> Actually, IIUC this assignment to 0 seems to be done exactly when you
> say that it should be done, i.e., when the disk does not provide any
> range information (ia_ranges is NULL). Am I missing something else?

No ranges reported means no extra actuators, so a single actuator and a
single LBA range for the entire device. In that case, bfq should process
all IOs using bfqd->ia_ranges[0]. The get range function will always
return that range. That makes the code clean and avoids different paths for
nr_ranges == 1 and nr_ranges > 1. No ?
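
Something along these lines (just a sketch, helper name illustrative),
which works unchanged whether there is one range or several:

static unsigned int bfq_actuator_index(struct bfq_data *bfqd, sector_t sec)
{
	unsigned int i;

	/*
	 * With num_actuators == 1 and ia_ranges[0] spanning the whole
	 * device, this returns 0 on the first iteration.
	 */
	for (i = 0; i < bfqd->num_actuators; i++)
		if (sec >= bfqd->ia_ranges[i].sector &&
		    sec < bfqd->ia_ranges[i].sector +
			  bfqd->ia_ranges[i].nr_sectors)
			return i;

	return 0; /* not reached if the ranges cover the whole device */
}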

> 
> Once again, all other suggestions applied. I'm about to submit a V7.
> 
> Thanks,
> Paolo
> 

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-12-06  8:29       ` Damien Le Moal
@ 2022-12-06  8:41         ` Paolo Valente
  2022-12-06  9:02           ` Damien Le Moal
  0 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-12-06  8:41 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Federico Gavioli



> Il giorno 6 dic 2022, alle ore 09:29, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
> 
> On 12/6/22 17:06, Paolo Valente wrote:
>> 
>> 
>>> Il giorno 21 nov 2022, alle ore 02:01, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
>>> 
>> 
>> ...
>> 
>>> 
>>>> }
>>>> 
>>>> static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>>>> @@ -7144,6 +7159,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>>> {
>>>> 	struct bfq_data *bfqd;
>>>> 	struct elevator_queue *eq;
>>>> +	unsigned int i;
>>>> +	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
>>>> 
>>>> 	eq = elevator_alloc(q, e);
>>>> 	if (!eq)
>>>> @@ -7187,10 +7204,31 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>>> 	bfqd->queue = q;
>>>> 
>>>> 	/*
>>>> -	 * Multi-actuator support not complete yet, default to single
>>>> -	 * actuator for the moment.
>>>> +	 * If the disk supports multiple actuators, we copy the independent
>>>> +	 * access ranges from the request queue structure.
>>>> 	 */
>>>> -	bfqd->num_actuators = 1;
>>>> +	spin_lock_irq(&q->queue_lock);
>>>> +	if (ia_ranges) {
>>>> +		/*
>>>> +		 * Check if the disk ia_ranges size exceeds the current bfq
>>>> +		 * actuator limit.
>>>> +		 */
>>>> +		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
>>>> +			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
>>>> +				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
>>>> +			pr_crit("Falling back to single actuator mode.\n");
>>>> +			bfqd->num_actuators = 0;
>>>> +		} else {
>>>> +			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
>>>> +
>>>> +			for (i = 0; i < bfqd->num_actuators; i++)
>>>> +				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
>>>> +		}
>>>> +	} else {
>>>> +		bfqd->num_actuators = 0;
>>> 
>>> That is very weird. The default should be 1 actuator.
>>> ia_ranges->nr_ia_ranges is 0 when the disk does not provide any range
>>> information, meaning it is a regular disk with a single actuator.
>> 
>> Actually, IIUC this assignment to 0 seems to be done exactly when you
>> say that it should be done, i.e., when the disk does not provide any
>> range information (ia_ranges is NULL). Am I missing something else?
> 
> No ranges reported means no extra actuators, so a single actuator and a
> single LBA range for the entire device.

I'm still confused, sorry.  Where will I read sector ranges from, if
no sector range information is available (ia_ranges is NULL)?

> In that case, bfq should process
> all IOs using bfqd->ia_ranges[0]. The get range function will always
> return that range. That makes the code clean and avoids different paths for
> nr_ranges == 1 and nr_ranges > 1. No ?

Apart from the above point, for which maybe there is some other
source of information for getting ranges, I see the following issue.

What you propose is to save sector information and trigger the
range-checking for loop also in the above single-actuator case.  Yet
executing (one iteration of) that loop will always result in
getting 0 as the index.  So, what's the point of saving data and
executing code on each IO, to get a static result that we already
know we will get?

Thanks,
Paolo

> 
>> 
>> Once again, all other suggestions applied. I'm about to submit a V7.
>> 
>> Thanks,
>> Paolo
>> 
> 
> -- 
> Damien Le Moal
> Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-12-06  8:41         ` Paolo Valente
@ 2022-12-06  9:02           ` Damien Le Moal
  2022-12-06 15:43             ` Paolo Valente
  0 siblings, 1 reply; 31+ messages in thread
From: Damien Le Moal @ 2022-12-06  9:02 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Federico Gavioli

On 12/6/22 17:41, Paolo Valente wrote:
> 
> 
>> Il giorno 6 dic 2022, alle ore 09:29, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
>>
>> On 12/6/22 17:06, Paolo Valente wrote:
>>>
>>>
>>>> Il giorno 21 nov 2022, alle ore 02:01, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
>>>>
>>>
>>> ...
>>>
>>>>
>>>>> }
>>>>>
>>>>> static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>>>>> @@ -7144,6 +7159,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>>>> {
>>>>> 	struct bfq_data *bfqd;
>>>>> 	struct elevator_queue *eq;
>>>>> +	unsigned int i;
>>>>> +	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
>>>>>
>>>>> 	eq = elevator_alloc(q, e);
>>>>> 	if (!eq)
>>>>> @@ -7187,10 +7204,31 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>>>> 	bfqd->queue = q;
>>>>>
>>>>> 	/*
>>>>> -	 * Multi-actuator support not complete yet, default to single
>>>>> -	 * actuator for the moment.
>>>>> +	 * If the disk supports multiple actuators, we copy the independent
>>>>> +	 * access ranges from the request queue structure.
>>>>> 	 */
>>>>> -	bfqd->num_actuators = 1;
>>>>> +	spin_lock_irq(&q->queue_lock);
>>>>> +	if (ia_ranges) {
>>>>> +		/*
>>>>> +		 * Check if the disk ia_ranges size exceeds the current bfq
>>>>> +		 * actuator limit.
>>>>> +		 */
>>>>> +		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
>>>>> +			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
>>>>> +				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
>>>>> +			pr_crit("Falling back to single actuator mode.\n");
>>>>> +			bfqd->num_actuators = 0;
>>>>> +		} else {
>>>>> +			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
>>>>> +
>>>>> +			for (i = 0; i < bfqd->num_actuators; i++)
>>>>> +				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
>>>>> +		}
>>>>> +	} else {
>>>>> +		bfqd->num_actuators = 0;
>>>>
>>>> That is very weird. The default should be 1 actuator.
>>>> ia_ranges->nr_ia_ranges is 0 when the disk does not provide any range
>>>> information, meaning it is a regular disk with a single actuator.
>>>
>>> Actually, IIUC this assignment to 0 seems to be done exactly when you
>>> say that it should be done, i.e., when the disk does not provide any
>>> range information (ia_ranges is NULL). Am I missing something else?
>>
>> No ranges reported means no extra actuators, so a single actuator and a
>> single LBA range for the entire device.
> 
> I'm still confused, sorry.  Where will I read sector ranges from, if
> no sector range information is available (ia_ranges is NULL)?

start = 0 and nr_sectors = bdev_nr_sectors(bdev).
No ia_ranges to read.
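
That is, bfq can synthesize the one implied range itself at init time,
something like (sketch):

	/* no ia_ranges reported: one actuator spanning the whole disk */
	bfqd->num_actuators = 1;
	bfqd->ia_ranges[0].sector = 0;
	bfqd->ia_ranges[0].nr_sectors = get_capacity(q->disk);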

> 
>> In that case, bfq should process
>> all IOs using bfqd->ia_ranges[0]. The get range function will always
>> return that range. That makes the code clean and avoids different paths for
>> nr_ranges == 1 and nr_ranges > 1. No ?
> 
> Apart from the above point, for which maybe there is some other
> source of information for getting ranges, I see the following issue.
> 
> What you propose is to save sector information and trigger the
> range-checking for loop also in the above single-actuator case.  Yet
> executing (one iteration of) that loop will always result in
> getting 0 as the index.  So, what's the point of saving data and
> executing code on each IO, to get a static result that we already
> know we will get?

Surely, you can add an "if (bfqd->num_actuators == 1)" optimization in
strategic places to optimize for regular devices with a single actuator,
which bfqd->num_actuators == 1 *exactly* describes. Having
"bfqd->num_actuators = 0" makes no sense to me.

But if you feel strongly about this, feel free to ignore this.

> 
> Thanks,
> Paolo
> 
>>
>>>
>>> Once again, all other suggestions applied. I'm about to submit a V7.
>>>
>>> Thanks,
>>> Paolo
>>>
>>
>> -- 
>> Damien Le Moal
>> Western Digital Research
> 

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-12-06  9:02           ` Damien Le Moal
@ 2022-12-06 15:43             ` Paolo Valente
  2022-12-06 23:34               ` Damien Le Moal
  0 siblings, 1 reply; 31+ messages in thread
From: Paolo Valente @ 2022-12-06 15:43 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Federico Gavioli



> Il giorno 6 dic 2022, alle ore 10:02, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
> 
> On 12/6/22 17:41, Paolo Valente wrote:
>> 
>> 
>>> Il giorno 6 dic 2022, alle ore 09:29, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
>>> 
>>> On 12/6/22 17:06, Paolo Valente wrote:
>>>> 
>>>> 
>>>>> Il giorno 21 nov 2022, alle ore 02:01, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
>>>>> 
>>>> 
>>>> ...
>>>> 
>>>>> 
>>>>>> }
>>>>>> 
>>>>>> static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
>>>>>> @@ -7144,6 +7159,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>>>>> {
>>>>>> 	struct bfq_data *bfqd;
>>>>>> 	struct elevator_queue *eq;
>>>>>> +	unsigned int i;
>>>>>> +	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
>>>>>> 
>>>>>> 	eq = elevator_alloc(q, e);
>>>>>> 	if (!eq)
>>>>>> @@ -7187,10 +7204,31 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>>>>> 	bfqd->queue = q;
>>>>>> 
>>>>>> 	/*
>>>>>> -	 * Multi-actuator support not complete yet, default to single
>>>>>> -	 * actuator for the moment.
>>>>>> +	 * If the disk supports multiple actuators, we copy the independent
>>>>>> +	 * access ranges from the request queue structure.
>>>>>> 	 */
>>>>>> -	bfqd->num_actuators = 1;
>>>>>> +	spin_lock_irq(&q->queue_lock);
>>>>>> +	if (ia_ranges) {
>>>>>> +		/*
>>>>>> +		 * Check if the disk ia_ranges size exceeds the current bfq
>>>>>> +		 * actuator limit.
>>>>>> +		 */
>>>>>> +		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
>>>>>> +			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
>>>>>> +				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
>>>>>> +			pr_crit("Falling back to single actuator mode.\n");
>>>>>> +			bfqd->num_actuators = 0;
>>>>>> +		} else {
>>>>>> +			bfqd->num_actuators = ia_ranges->nr_ia_ranges;
>>>>>> +
>>>>>> +			for (i = 0; i < bfqd->num_actuators; i++)
>>>>>> +				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
>>>>>> +		}
>>>>>> +	} else {
>>>>>> +		bfqd->num_actuators = 0;
>>>>> 
>>>>> That is very weird. The default should be 1 actuator.
>>>>> ia_ranges->nr_ia_ranges is 0 when the disk does not provide any range
>>>>> information, meaning it is a regular disk with a single actuator.
>>>> 
>>>> Actually, IIUC this assignment to 0 seems to be done exactly when you
>>>> say that it should be done, i.e., when the disk does not provide any
>>>> range information (ia_ranges is NULL). Am I missing something else?
>>> 
>>> No ranges reported means no extra actuators, so a single actuator and a
>>> single LBA range for the entire device.
>> 
>> I'm still confused, sorry.  Where will I read sector ranges from, if
>> no sector range information is available (ia_ranges is NULL)?
> 
> start = 0 and nr_sectors = bdev_nr_sectors(bdev).
> No ia_ranges to read.
> 

ok, thanks

>> 
>>> In that case, bfq should process
>>> all IOs using bfqd->ia_ranges[0]. The get range function will always
>>> return that range. That makes the code clean and avoids different paths for
>>> nr_ranges == 1 and nr_ranges > 1. No ?
>> 
>> Apart from the above point, for which maybe there is some other
>> source of information for getting ranges, I see the following issue.
>> 
>> What you propose is to save sector information and trigger the
>> range-checking for loop also in the above single-actuator case.  Yet
>> executing (one iteration of) that loop will always result in
>> getting 0 as the index.  So, what's the point of saving data and
>> executing code on each IO, to get a static result that we already
>> know we will get?
> 
> Surely, you can add an "if (bfqd->num_actuators == 1)" optimization in
> strategic places to optimize for regular devices with a single actuator,
> which bfqd->num_actuators == 1 *exactly* describes. Having
> "bfqd->num_actuators = 0" makes no sense to me.
> 

Ok, I finally see your point, sorry.  I'll check the code, but I think
there is no problem in moving from 0 to 1 actuators for the case
ia_ranges == NULL.  I meant to separate the case "single actuator with
ia_ranges available" (num_actuators = 1) from the case "no ia_ranges
available" (num_actuators = 0).  But evidently things don't work as I
thought, and using the same value (1) is fine.

Just, let me avoid setting the fields bfqd->sector and
bfqd->nr_sectors for a case where we don't use them.
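
Concretely, I'm thinking of something like this for the next version
(just a sketch, not the final code):

	/* default: a single actuator; copy ranges only when reported */
	bfqd->num_actuators = 1;

	spin_lock_irq(&q->queue_lock);
	if (ia_ranges) {
		if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) {
			pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n",
				ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS);
			pr_crit("Falling back to single actuator mode.\n");
		} else {
			bfqd->num_actuators = ia_ranges->nr_ia_ranges;

			for (i = 0; i < bfqd->num_actuators; i++)
				bfqd->ia_ranges[i] = ia_ranges->ia_range[i];
		}
	}
	spin_unlock_irq(&q->queue_lock);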

Thanks,
Paolo

> But if you feel strongly about this, feel free to ignore this.
> 
>> 
>> Thanks,
>> Paolo
>> 
>>> 
>>>> 
>>>> Once again, all other suggestions applied. I'm about to submit a V7.
>>>> 
>>>> Thanks,
>>>> Paolo
>>>> 
>>> 
>>> -- 
>>> Damien Le Moal
>>> Western Digital Research
>> 
> 
> -- 
> Damien Le Moal
> Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-12-06 15:43             ` Paolo Valente
@ 2022-12-06 23:34               ` Damien Le Moal
  2022-12-08 10:43                 ` Paolo Valente
  0 siblings, 1 reply; 31+ messages in thread
From: Damien Le Moal @ 2022-12-06 23:34 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Federico Gavioli

On 12/7/22 00:43, Paolo Valente wrote:
>>>> In that case, bfq should process
>>>> all IOs using bfqd->ia_ranges[0]. The get range function will always
>>>> return that range. That makes the code clean and avoids different paths for
>>>> nr_ranges == 1 and nr_ranges > 1. No ?
>>>
>>> Apart from the above point, for which maybe there is some other
>>> source of information for getting ranges, I see the following issue.
>>>
>>> What you propose is to save sector information and trigger the
>>> range-checking for loop also in the above single-actuator case.  Yet
>>> executing (one iteration of) that loop will always result in
>>> getting 0 as the index.  So, what's the point of saving data and
>>> executing code on each IO, to get a static result that we already
>>> know we will get?
>>
>> Surely, you can add an "if (bfqd->num_actuators == 1)" optimization in
>> strategic places to optimize for regular devices with a single actuator,
>> which bfqd->num_actuators == 1 *exactly* describes. Having
>> "bfqd->num_actuators = 0" makes no sense to me.
>>
> 
> Ok, I see your point at last, sorry.  I'll check the code, but I think
> that there is no problem in moving from 0 to 1 actuators for the case
> ia_ranges == NULL.  I meant to separate the case "single actuator with
> ia_ranges available" (num_actuators = 1), from the case "no ia_ranges
> available" (num_actuators = 0).  But evidently things don't work as I
> thought, and using the same value (1) is ok.

Any HDD always has at least 1 actuator. Per the SCSI & ATA specs, ia_ranges
will be present if and only if the device has *more than one actuator*.
So the case "no ia_ranges available" means "num_actuators = 1", and the
implied access range is the entire device capacity.

> Just, let me avoid setting the fields bfqd->sector and
> bfqd->nr_sectors for a case where we don't use them.

Sure. But if you do not use them thanks to "if (num_actuators == 1)"
optimizations, it would still not hurt to set these fields. That actually
could be helpful for debugging.
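
Something like this would be enough (sketch; assuming bfqd->sector and
bfqd->nr_sectors are per-actuator arrays, as I gather from your
description):

	/* unused on the single-actuator fast path, but cheap to set and
	 * handy when inspecting bfqd while debugging */
	bfqd->sector[0] = 0;
	bfqd->nr_sectors[0] = get_capacity(q->disk);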

Overall, I think that the code should not differ much for the
num_actuators == 1 case. Any actuator search loop would simply hit the
first and only entry on the first iteration, so it should not add any
nasty overhead. Such a single code path would likely simplify the code
considerably.


-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue
  2022-12-06 23:34               ` Damien Le Moal
@ 2022-12-08 10:43                 ` Paolo Valente
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Valente @ 2022-12-08 10:43 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, linux-block, linux-kernel, Arie van der Hoeven,
	Rory Chen, Federico Gavioli



> Il giorno 7 dic 2022, alle ore 00:34, Damien Le Moal <damien.lemoal@opensource.wdc.com> ha scritto:
> 
> 

[...]

>> Just, let me avoid setting the fields bfqd->sector and
>> bfqd->nr_sectors for a case where we don't use them.
> 
> Sure. But if you do not use them thanks to "if (num_actuators == 1)"
> optimizations, it would still not hurt to set these fields. That actually
> could be helpful for debugging.
> 

Got it. I'm about to send a V9 that applies this last suggestion of yours.

Thanks,
Paolo


^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2022-12-08 10:49 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-03 16:26 [PATCH V6 0/8] block, bfq: extend bfq to support multi-actuator drives Paolo Valente
2022-11-03 16:26 ` [PATCH V6 1/8] block, bfq: split sync bfq_queues on a per-actuator basis Paolo Valente
2022-11-21  0:18   ` Damien Le Moal
2022-11-26 16:28     ` Paolo Valente
2022-11-28 14:32       ` Jens Axboe
2022-12-06  8:04     ` Paolo Valente
2022-11-03 16:26 ` [PATCH V6 2/8] block, bfq: forbid stable merging of queues associated with different actuators Paolo Valente
2022-11-21  0:20   ` Damien Le Moal
2022-11-03 16:26 ` [PATCH V6 3/8] block, bfq: move io_cq-persistent bfqq data into a dedicated struct Paolo Valente
2022-11-21  0:24   ` Damien Le Moal
2022-11-03 16:26 ` [PATCH V6 4/8] block, bfq: turn bfqq_data into an array in bfq_io_cq Paolo Valente
2022-11-21  0:39   ` Damien Le Moal
2022-12-06  8:00     ` Paolo Valente
2022-11-03 16:26 ` [PATCH V6 5/8] block, bfq: split also async bfq_queues on a per-actuator basis Paolo Valente
2022-11-21  0:52   ` Damien Le Moal
2022-11-03 16:26 ` [PATCH V6 6/8] block, bfq: retrieve independent access ranges from request queue Paolo Valente
2022-11-21  1:01   ` Damien Le Moal
2022-12-06  8:06     ` Paolo Valente
2022-12-06  8:29       ` Damien Le Moal
2022-12-06  8:41         ` Paolo Valente
2022-12-06  9:02           ` Damien Le Moal
2022-12-06 15:43             ` Paolo Valente
2022-12-06 23:34               ` Damien Le Moal
2022-12-08 10:43                 ` Paolo Valente
2022-11-03 16:26 ` [PATCH V6 7/8] block, bfq: inject I/O to underutilized actuators Paolo Valente
2022-11-21  1:17   ` Damien Le Moal
2022-11-03 16:26 ` [PATCH V6 8/8] block, bfq: balance I/O injection among " Paolo Valente
2022-11-10 15:25   ` Arie van der Hoeven
2022-11-20  7:29     ` Paolo Valente
2022-11-20 22:06       ` Jens Axboe
2022-11-20 23:41         ` Damien Le Moal

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).