linux-block.vger.kernel.org archive mirror
* [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
@ 2018-05-04 17:17 Paolo Valente
  2018-05-04 19:46 ` Mike Galbraith
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Paolo Valente @ 2018-05-04 17:17 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr, Paolo Valente

When invoked for an I/O request rq, the prepare_request hook of bfq
increments reference counters in the destination bfq_queue for rq. In
this respect, after this hook has been invoked, rq may still be
transformed into a request with no icq attached, i.e., for bfq, a
request not associated with any bfq_queue. No further hook is invoked
to signal this transformation to bfq (in general, to the destination
elevator for rq). This leads bfq into an inconsistent state, because
bfq has no chance to correctly lower these counters back. This
inconsistency may in its turn cause incorrect scheduling and hangs. It
certainly causes memory leaks, by making it impossible for bfq to free
the involved bfq_queue.

On the bright side, no transformation can still happen for rq after rq
has been inserted into bfq, or merged with another, already inserted,
request. Exploiting this fact, this commit addresses the above issue
by delaying the preparation of an I/O request to when the request is
inserted or merged.

This change also gives a performance bonus: a lock-contention point
gets removed. To prepare a request, bfq needs to hold its scheduler
lock. After postponing request preparation to insertion or merging, no
lock needs to be grabbed any longer in the prepare_request hook, while
the lock already taken to perform insertion or merging is used to
prepare the request as well.
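As a purely illustrative aid, here is a toy user-space sketch of the
inconsistency described above; it is not kernel code and uses no real
bfq or blk-mq API, all names and types are made up:

/*
 * "prepare" raises a per-queue counter; if the request later loses its
 * queue association, nothing lowers the counter back, so the queue can
 * never be freed.
 */
#include <stdio.h>

struct queue   { int ref; };
struct request { struct queue *q; };

static void prepare(struct request *rq, struct queue *q)
{
	rq->q = q;
	q->ref++;			/* counter raised at prepare time */
}

static void transform(struct request *rq)
{
	rq->q = NULL;			/* association dropped, no hook fires */
}

static void complete(struct request *rq)
{
	if (rq->q)
		rq->q->ref--;		/* lowered only if still associated */
}

int main(void)
{
	struct queue q = { 0 };
	struct request rq;

	prepare(&rq, &q);
	transform(&rq);			/* may still happen after prepare_request */
	complete(&rq);

	printf("leaked refs: %d\n", q.ref);	/* prints 1: the queue leaks */
	return 0;
}

With the fix below, the equivalent of the q->ref++ above is performed
only at insertion or merge time, after which no such transformation can
occur any longer.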

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 86 +++++++++++++++++++++++++++++++++++------------------
 1 file changed, 57 insertions(+), 29 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 771ae9730ac6..ea02162df6c7 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1858,6 +1858,8 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
 	return ELEVATOR_NO_MERGE;
 }
 
+static struct bfq_queue *bfq_init_rq(struct request *rq);
+
 static void bfq_request_merged(struct request_queue *q, struct request *req,
 			       enum elv_merge type)
 {
@@ -1866,7 +1868,7 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
 	    blk_rq_pos(req) <
 	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
 				    struct request, rb_node))) {
-		struct bfq_queue *bfqq = RQ_BFQQ(req);
+		struct bfq_queue *bfqq = bfq_init_rq(req);
 		struct bfq_data *bfqd = bfqq->bfqd;
 		struct request *prev, *next_rq;
 
@@ -1894,7 +1896,8 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
 static void bfq_requests_merged(struct request_queue *q, struct request *rq,
 				struct request *next)
 {
-	struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next);
+	struct bfq_queue *bfqq = bfq_init_rq(rq),
+		*next_bfqq = bfq_init_rq(next);
 
 	if (!RB_EMPTY_NODE(&rq->rb_node))
 		goto end;
@@ -4540,14 +4543,12 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
 					   unsigned int cmd_flags) {}
 #endif
 
-static void bfq_prepare_request(struct request *rq, struct bio *bio);
-
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 			       bool at_head)
 {
 	struct request_queue *q = hctx->queue;
 	struct bfq_data *bfqd = q->elevator->elevator_data;
-	struct bfq_queue *bfqq = RQ_BFQQ(rq);
+	struct bfq_queue *bfqq;
 	bool idle_timer_disabled = false;
 	unsigned int cmd_flags;
 
@@ -4562,24 +4563,13 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	blk_mq_sched_request_inserted(rq);
 
 	spin_lock_irq(&bfqd->lock);
+	bfqq = bfq_init_rq(rq);
 	if (at_head || blk_rq_is_passthrough(rq)) {
 		if (at_head)
 			list_add(&rq->queuelist, &bfqd->dispatch);
 		else
 			list_add_tail(&rq->queuelist, &bfqd->dispatch);
-	} else {
-		if (WARN_ON_ONCE(!bfqq)) {
-			/*
-			 * This should never happen. Most likely rq is
-			 * a requeued regular request, being
-			 * re-inserted without being first
-			 * re-prepared. Do a prepare, to avoid
-			 * failure.
-			 */
-			bfq_prepare_request(rq, rq->bio);
-			bfqq = RQ_BFQQ(rq);
-		}
-
+	} else { /* bfqq is assumed to be non null here */
 		idle_timer_disabled = __bfq_insert_request(bfqd, rq);
 		/*
 		 * Update bfqq, because, if a queue merge has occurred
@@ -4922,11 +4912,48 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
 }
 
 /*
- * Allocate bfq data structures associated with this request.
+ * Only reset private fields. The actual request preparation will be
+ * performed by bfq_init_rq, when rq is either inserted or merged. See
+ * comments on bfq_init_rq for the reason behind this delayed
+ * preparation.
  */
 static void bfq_prepare_request(struct request *rq, struct bio *bio)
+{
+	/*
+	 * Regardless of whether we have an icq attached, we have to
+	 * clear the scheduler pointers, as they might point to
+	 * previously allocated bic/bfqq structs.
+	 */
+	rq->elv.priv[0] = rq->elv.priv[1] = NULL;
+}
+
+/*
+ * If needed, init rq, allocate bfq data structures associated with
+ * rq, and increment reference counters in the destination bfq_queue
+ * for rq. Return the destination bfq_queue for rq, or NULL if rq is
+ * not associated with any bfq_queue.
+ *
+ * This function is invoked by the functions that perform rq insertion
+ * or merging. One may have expected the above preparation operations
+ * to be performed in bfq_prepare_request, and not delayed to when rq
+ * is inserted or merged. The rationale behind this delayed
+ * preparation is that, after the prepare_request hook is invoked for
+ * rq, rq may still be transformed into a request with no icq, i.e., a
+ * request not associated with any queue. No bfq hook is invoked to
+ * signal this transformation. As a consequence, should these
+ * preparation operations be performed when the prepare_request hook
+ * is invoked, and should rq be transformed one moment later, bfq
+ * would end up in an inconsistent state, because it would have
+ * incremented some queue counters for an rq destined to
+ * transformation, without any chance to correctly lower these
+ * counters back. In contrast, no transformation can still happen for
+ * rq after rq has been inserted or merged. So, it is safe to execute
+ * these preparation operations when rq is finally inserted or merged.
+ */
+static struct bfq_queue *bfq_init_rq(struct request *rq)
 {
 	struct request_queue *q = rq->q;
+	struct bio *bio = rq->bio;
 	struct bfq_data *bfqd = q->elevator->elevator_data;
 	struct bfq_io_cq *bic;
 	const int is_sync = rq_is_sync(rq);
@@ -4934,20 +4961,21 @@ static void bfq_prepare_request(struct request *rq, struct bio *bio)
 	bool new_queue = false;
 	bool bfqq_already_existing = false, split = false;
 
+	if (unlikely(!rq->elv.icq))
+		return NULL;
+
 	/*
-	 * Even if we don't have an icq attached, we should still clear
-	 * the scheduler pointers, as they might point to previously
-	 * allocated bic/bfqq structs.
+	 * Assuming that elv.priv[1] is set only if everything is set
+	 * for this rq. This holds true, because this function is
+	 * invoked only for insertion or merging, and, after such
+	 * events, a request cannot be manipulated any longer before
+	 * being removed from bfq.
 	 */
-	if (!rq->elv.icq) {
-		rq->elv.priv[0] = rq->elv.priv[1] = NULL;
-		return;
-	}
+	if (rq->elv.priv[1])
+		return rq->elv.priv[1];
 
 	bic = icq_to_bic(rq->elv.icq);
 
-	spin_lock_irq(&bfqd->lock);
-
 	bfq_check_ioprio_change(bic, bio);
 
 	bfq_bic_update_cgroup(bic, bio);
@@ -5006,7 +5034,7 @@ static void bfq_prepare_request(struct request *rq, struct bio *bio)
 	if (unlikely(bfq_bfqq_just_created(bfqq)))
 		bfq_handle_burst(bfqd, bfqq);
 
-	spin_unlock_irq(&bfqd->lock);
+	return bfqq;
 }
 
 static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq)
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-04 17:17 [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge Paolo Valente
@ 2018-05-04 19:46 ` Mike Galbraith
  2018-05-05  8:19   ` Mike Galbraith
  2018-05-06  7:33 ` Oleksandr Natalenko
  2018-05-10 16:14 ` Bart Van Assche
  2 siblings, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2018-05-04 19:46 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr

Tentatively, I suspect you've just fixed the nasty stalls I reported a
while back.  Not a hint of stall as yet (should have shown itself by
now), spinning rust buckets are being all they can be, box feels good.

Later mq-deadline (I hope to eventually forget the module dependency
eternities we've spent together;), welcome back bfq (maybe.. I hope).

	-Mike

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-04 19:46 ` Mike Galbraith
@ 2018-05-05  8:19   ` Mike Galbraith
  2018-05-05 10:39     ` Paolo Valente
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2018-05-05  8:19 UTC (permalink / raw)
  To: Paolo Valente, Jens Axboe
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr

On Fri, 2018-05-04 at 21:46 +0200, Mike Galbraith wrote:
> Tentatively, I suspect you've just fixed the nasty stalls I reported a
> while back.

Oh well, so much for optimism.  It took a lot, but just hung.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-05  8:19   ` Mike Galbraith
@ 2018-05-05 10:39     ` Paolo Valente
  2018-05-05 14:56       ` Mike Galbraith
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Valente @ 2018-05-05 10:39 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr



> On 5 May 2018, at 10:19, Mike Galbraith <efault@gmx.de> wrote:
>
> On Fri, 2018-05-04 at 21:46 +0200, Mike Galbraith wrote:
>> Tentatively, I suspect you've just fixed the nasty stalls I reported a
>> while back.
>
> Oh well, so much for optimism.  It took a lot, but just hung.

Yep, it would have taken a lot of luck, since your hang is related to
operations different from those touched by this fix.  Maybe the time
before failure stretched because your system also suffered from the
illness cured by this fix.

BTW, if you haven't run out of patience with this long-standing issue yet,
I was thinking of two or three changes to try to trigger your failure
reliably.  If that succeeds, I could restart racking my brains from
there.

Thanks,
Paolo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-05 10:39     ` Paolo Valente
@ 2018-05-05 14:56       ` Mike Galbraith
  2018-05-06  7:42         ` Paolo Valente
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2018-05-05 14:56 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr

On Sat, 2018-05-05 at 12:39 +0200, Paolo Valente wrote:
> 
> BTW, if you didn't run out of patience with this permanent issue yet,
> I was thinking of two o three changes to retry to trigger your failure
> reliably.

Sure, fire away, I'll happily give the annoying little bugger
opportunities to show its tender belly.

	-Mike

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-04 17:17 [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge Paolo Valente
  2018-05-04 19:46 ` Mike Galbraith
@ 2018-05-06  7:33 ` Oleksandr Natalenko
  2018-05-10 16:14 ` Bart Van Assche
  2 siblings, 0 replies; 18+ messages in thread
From: Oleksandr Natalenko @ 2018-05-06  7:33 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, linux-kernel, ulf.hansson, broonie,
	linus.walleij, bfq-iosched

Hi.

On 04.05.2018 19:17, Paolo Valente wrote:
> When invoked for an I/O request rq, the prepare_request hook of bfq
> increments reference counters in the destination bfq_queue for rq. In
> this respect, after this hook has been invoked, rq may still be
> transformed into a request with no icq attached, i.e., for bfq, a
> request not associated with any bfq_queue. No further hook is invoked
> to signal this transformation to bfq (in general, to the destination
> elevator for rq). This leads bfq into an inconsistent state, because
> bfq has no chance to correctly lower these counters back. This
> inconsistency may in its turn cause incorrect scheduling and hangs. It
> certainly causes memory leaks, by making it impossible for bfq to free
> the involved bfq_queue.
> 
> On the bright side, no transformation can still happen for rq after rq
> has been inserted into bfq, or merged with another, already inserted,
> request. Exploiting this fact, this commit addresses the above issue
> by delaying the preparation of an I/O request to when the request is
> inserted or merged.
> 
> This change also gives a performance bonus: a lock-contention point
> gets removed. To prepare a request, bfq needs to hold its scheduler
> lock. After postponing request preparation to insertion or merging, no
> lock needs to be grabbed any longer in the prepare_request hook, while
> the lock already taken to perform insertion or merging is used to
> prepare the request as well.
> 
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
> ---
>  block/bfq-iosched.c | 86 
> +++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 57 insertions(+), 29 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 771ae9730ac6..ea02162df6c7 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -1858,6 +1858,8 @@ static int bfq_request_merge(struct
> request_queue *q, struct request **req,
>  	return ELEVATOR_NO_MERGE;
>  }
> 
> +static struct bfq_queue *bfq_init_rq(struct request *rq);
> +
>  static void bfq_request_merged(struct request_queue *q, struct request 
> *req,
>  			       enum elv_merge type)
>  {
> @@ -1866,7 +1868,7 @@ static void bfq_request_merged(struct
> request_queue *q, struct request *req,
>  	    blk_rq_pos(req) <
>  	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
>  				    struct request, rb_node))) {
> -		struct bfq_queue *bfqq = RQ_BFQQ(req);
> +		struct bfq_queue *bfqq = bfq_init_rq(req);
>  		struct bfq_data *bfqd = bfqq->bfqd;
>  		struct request *prev, *next_rq;
> 
> @@ -1894,7 +1896,8 @@ static void bfq_request_merged(struct
> request_queue *q, struct request *req,
>  static void bfq_requests_merged(struct request_queue *q, struct 
> request *rq,
>  				struct request *next)
>  {
> -	struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next);
> +	struct bfq_queue *bfqq = bfq_init_rq(rq),
> +		*next_bfqq = bfq_init_rq(next);
> 
>  	if (!RB_EMPTY_NODE(&rq->rb_node))
>  		goto end;
> @@ -4540,14 +4543,12 @@ static inline void
> bfq_update_insert_stats(struct request_queue *q,
>  					   unsigned int cmd_flags) {}
>  #endif
> 
> -static void bfq_prepare_request(struct request *rq, struct bio *bio);
> -
>  static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct 
> request *rq,
>  			       bool at_head)
>  {
>  	struct request_queue *q = hctx->queue;
>  	struct bfq_data *bfqd = q->elevator->elevator_data;
> -	struct bfq_queue *bfqq = RQ_BFQQ(rq);
> +	struct bfq_queue *bfqq;
>  	bool idle_timer_disabled = false;
>  	unsigned int cmd_flags;
> 
> @@ -4562,24 +4563,13 @@ static void bfq_insert_request(struct
> blk_mq_hw_ctx *hctx, struct request *rq,
>  	blk_mq_sched_request_inserted(rq);
> 
>  	spin_lock_irq(&bfqd->lock);
> +	bfqq = bfq_init_rq(rq);
>  	if (at_head || blk_rq_is_passthrough(rq)) {
>  		if (at_head)
>  			list_add(&rq->queuelist, &bfqd->dispatch);
>  		else
>  			list_add_tail(&rq->queuelist, &bfqd->dispatch);
> -	} else {
> -		if (WARN_ON_ONCE(!bfqq)) {
> -			/*
> -			 * This should never happen. Most likely rq is
> -			 * a requeued regular request, being
> -			 * re-inserted without being first
> -			 * re-prepared. Do a prepare, to avoid
> -			 * failure.
> -			 */
> -			bfq_prepare_request(rq, rq->bio);
> -			bfqq = RQ_BFQQ(rq);
> -		}
> -
> +	} else { /* bfqq is assumed to be non null here */
>  		idle_timer_disabled = __bfq_insert_request(bfqd, rq);
>  		/*
>  		 * Update bfqq, because, if a queue merge has occurred
> @@ -4922,11 +4912,48 @@ static struct bfq_queue
> *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
>  }
> 
>  /*
> - * Allocate bfq data structures associated with this request.
> + * Only reset private fields. The actual request preparation will be
> + * performed by bfq_init_rq, when rq is either inserted or merged. See
> + * comments on bfq_init_rq for the reason behind this delayed
> + * preparation.
>   */
>  static void bfq_prepare_request(struct request *rq, struct bio *bio)
> +{
> +	/*
> +	 * Regardless of whether we have an icq attached, we have to
> +	 * clear the scheduler pointers, as they might point to
> +	 * previously allocated bic/bfqq structs.
> +	 */
> +	rq->elv.priv[0] = rq->elv.priv[1] = NULL;
> +}
> +
> +/*
> + * If needed, init rq, allocate bfq data structures associated with
> + * rq, and increment reference counters in the destination bfq_queue
> + * for rq. Return the destination bfq_queue for rq, or NULL if rq is
> + * not associated with any bfq_queue.
> + *
> + * This function is invoked by the functions that perform rq insertion
> + * or merging. One may have expected the above preparation operations
> + * to be performed in bfq_prepare_request, and not delayed to when rq
> + * is inserted or merged. The rationale behind this delayed
> + * preparation is that, after the prepare_request hook is invoked for
> + * rq, rq may still be transformed into a request with no icq, i.e., a
> + * request not associated with any queue. No bfq hook is invoked to
> + * signal this transformation. As a consequence, should these
> + * preparation operations be performed when the prepare_request hook
> + * is invoked, and should rq be transformed one moment later, bfq
> + * would end up in an inconsistent state, because it would have
> + * incremented some queue counters for an rq destined to
> + * transformation, without any chance to correctly lower these
> + * counters back. In contrast, no transformation can still happen for
> + * rq after rq has been inserted or merged. So, it is safe to execute
> + * these preparation operations when rq is finally inserted or merged.
> + */
> +static struct bfq_queue *bfq_init_rq(struct request *rq)
>  {
>  	struct request_queue *q = rq->q;
> +	struct bio *bio = rq->bio;
>  	struct bfq_data *bfqd = q->elevator->elevator_data;
>  	struct bfq_io_cq *bic;
>  	const int is_sync = rq_is_sync(rq);
> @@ -4934,20 +4961,21 @@ static void bfq_prepare_request(struct request
> *rq, struct bio *bio)
>  	bool new_queue = false;
>  	bool bfqq_already_existing = false, split = false;
> 
> +	if (unlikely(!rq->elv.icq))
> +		return NULL;
> +
>  	/*
> -	 * Even if we don't have an icq attached, we should still clear
> -	 * the scheduler pointers, as they might point to previously
> -	 * allocated bic/bfqq structs.
> +	 * Assuming that elv.priv[1] is set only if everything is set
> +	 * for this rq. This holds true, because this function is
> +	 * invoked only for insertion or merging, and, after such
> +	 * events, a request cannot be manipulated any longer before
> +	 * being removed from bfq.
>  	 */
> -	if (!rq->elv.icq) {
> -		rq->elv.priv[0] = rq->elv.priv[1] = NULL;
> -		return;
> -	}
> +	if (rq->elv.priv[1])
> +		return rq->elv.priv[1];
> 
>  	bic = icq_to_bic(rq->elv.icq);
> 
> -	spin_lock_irq(&bfqd->lock);
> -
>  	bfq_check_ioprio_change(bic, bio);
> 
>  	bfq_bic_update_cgroup(bic, bio);
> @@ -5006,7 +5034,7 @@ static void bfq_prepare_request(struct request
> *rq, struct bio *bio)
>  	if (unlikely(bfq_bfqq_just_created(bfqq)))
>  		bfq_handle_burst(bfqd, bfqq);
> 
> -	spin_unlock_irq(&bfqd->lock);
> +	return bfqq;
>  }
> 
>  static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq)

No harm is observed on both the test VM with the smartctl hammer and my laptop.
So,

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>

Thanks.

-- 
   Oleksandr Natalenko (post-factum)

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-05 14:56       ` Mike Galbraith
@ 2018-05-06  7:42         ` Paolo Valente
  2018-05-07  2:43           ` Mike Galbraith
  2018-05-07  5:56           ` Mike Galbraith
  0 siblings, 2 replies; 18+ messages in thread
From: Paolo Valente @ 2018-05-06  7:42 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr



> On 5 May 2018, at 16:56, Mike Galbraith <efault@gmx.de> wrote:
>
> On Sat, 2018-05-05 at 12:39 +0200, Paolo Valente wrote:
>>
>> BTW, if you haven't run out of patience with this long-standing issue yet,
>> I was thinking of two or three changes to try to trigger your failure
>> reliably.
>
> Sure, fire away, I'll happily give the annoying little bugger
> opportunities to show its tender belly.

I've attached a compressed patch (to avoid possible corruption from my
mailer).  I'm not very confident, but no pain, no gain, right?

If possible, apply this patch on top of the fix I proposed in this
thread, just to eliminate possible further noise. Finally, the
patch content follows.

Hoping for a stroke of luck,
Paolo

diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
index 118f319af7c0..6662efe29b69 100644
--- a/block/bfq-mq-iosched.c
+++ b/block/bfq-mq-iosched.c
@@ -525,8 +525,13 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
        if (unlikely(bfqd->sb_shift != bt->sb.shift))
                bfq_update_depths(bfqd, bt);

+#if 0
        data->shallow_depth =
                bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+#else
+       data->shallow_depth = 1;
+#endif
+

        bfq_log(bfqd, "wr_busy %d sync %d depth %u",
                        bfqd->wr_busy_queues, op_is_sync(op),


>=20
> 	-Mike
>=20

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-06  7:42         ` Paolo Valente
@ 2018-05-07  2:43           ` Mike Galbraith
  2018-05-07  3:23             ` Mike Galbraith
  2018-05-07  5:56           ` Mike Galbraith
  1 sibling, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2018-05-07  2:43 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr

On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
> 
> I've attached a compressed patch (to avoid possible corruption from my
> > mailer).  I'm not very confident, but no pain, no gain, right?
> 
> If possible, apply this patch on top of the fix I proposed in this
> thread, just to eliminate possible further noise. Finally, the
> patch content follows.
> 
> Hoping for a stroke of luck,

FWIW, box didn't survive the first full build of the morning.

> Paolo
> 
> diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
> index 118f319af7c0..6662efe29b69 100644
> --- a/block/bfq-mq-iosched.c
> +++ b/block/bfq-mq-iosched.c

That doesn't exist in master, so I applied it like so.

---
 block/bfq-iosched.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -554,8 +554,12 @@ static void bfq_limit_depth(unsigned int
        if (unlikely(bfqd->sb_shift != bt->sb.shift))
                bfq_update_depths(bfqd, bt);
 
+#if 0
        data->shallow_depth =
                bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+#else
+       data->shallow_depth = 1;
+#endif
 
        bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
                        __func__, bfqd->wr_busy_queues, op_is_sync(op),

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-07  2:43           ` Mike Galbraith
@ 2018-05-07  3:23             ` Mike Galbraith
  2018-05-07  9:32               ` Paolo Valente
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2018-05-07  3:23 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr

On Mon, 2018-05-07 at 04:43 +0200, Mike Galbraith wrote:
> On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
> > 
> > I've attached a compressed patch (to avoid possible corruption from my
> > mailer).  I'm not very confident, but no pain, no gain, right?
> > 
> > If possible, apply this patch on top of the fix I proposed in this
> > thread, just to eliminate possible further noise. Finally, the
> > patch content follows.
> > 
> > Hoping for a stroke of luck,
> 
> FWIW, box didn't survive the first full build of the morning.

Nor the second.

	-Mike

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-06  7:42         ` Paolo Valente
  2018-05-07  2:43           ` Mike Galbraith
@ 2018-05-07  5:56           ` Mike Galbraith
  2018-05-07  9:27             ` Paolo Valente
  1 sibling, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2018-05-07  5:56 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr

On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
>
> diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
> index 118f319af7c0..6662efe29b69 100644
> --- a/block/bfq-mq-iosched.c
> +++ b/block/bfq-mq-iosched.c
> @@ -525,8 +525,13 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
>         if (unlikely(bfqd->sb_shift != bt->sb.shift))
>                 bfq_update_depths(bfqd, bt);
>
> +#if 0
>         data->shallow_depth =
>                 bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
                                                            ^^^^^^^^^^^^^

Q: why doesn't the top of this function look like so?

---
 block/bfq-iosched.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -539,7 +539,7 @@ static void bfq_limit_depth(unsigned int
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
 	struct sbitmap_queue *bt;

-	if (op_is_sync(op) && !op_is_write(op))
+	if (!op_is_write(op))
 		return;

 	if (data->flags & BLK_MQ_REQ_RESERVED) {

It looks a bit odd that these elements exist...

+       /*
+        * no more than 75% of tags for sync writes (25% extra tags
+        * w.r.t. async I/O, to prevent async I/O from starving sync
+        * writes)
+        */
+       bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);

+       /* no more than ~37% of tags for sync writes (~20% extra tags) */
+       bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);

...yet we index via and log a guaranteed zero.

	-Mike

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-07  5:56           ` Mike Galbraith
@ 2018-05-07  9:27             ` Paolo Valente
  2018-05-07 10:01               ` Mike Galbraith
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Valente @ 2018-05-07  9:27 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr



> On 7 May 2018, at 07:56, Mike Galbraith <efault@gmx.de> wrote:
>
> On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
>>
>> diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
>> index 118f319af7c0..6662efe29b69 100644
>> --- a/block/bfq-mq-iosched.c
>> +++ b/block/bfq-mq-iosched.c
>> @@ -525,8 +525,13 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
>>        if (unlikely(bfqd->sb_shift != bt->sb.shift))
>>                bfq_update_depths(bfqd, bt);
>>
>> +#if 0
>>        data->shallow_depth =
>>                bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
>                                                            ^^^^^^^^^^^^^
>
> Q: why doesn't the top of this function look like so?
>
> ---
> block/bfq-iosched.c |    2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -539,7 +539,7 @@ static void bfq_limit_depth(unsigned int
> 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
> 	struct sbitmap_queue *bt;
>
> -	if (op_is_sync(op) && !op_is_write(op))
> +	if (!op_is_write(op))
> 		return;
>
> 	if (data->flags & BLK_MQ_REQ_RESERVED) {
>
> It looks a bit odd that these elements exist...
>
> +       /*
> +        * no more than 75% of tags for sync writes (25% extra tags
> +        * w.r.t. async I/O, to prevent async I/O from starving sync
> +        * writes)
> +        */
> +       bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
>
> +       /* no more than ~37% of tags for sync writes (~20% extra tags) */
> +       bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);
>
> ...yet we index via and log a guaranteed zero.
>

I'm not sure I got your point, so, to help you help me quickly, I'll
repeat what I expect the code you highlighted to do:

- sync reads must have no limitation, and the lines
if (op_is_sync(op) && !op_is_write(op))
	return;
make sure they don't

- sync writes must be limited, and the code you pasted above computes
those limits

- for sync writes, for which op_is_sync(op) is true (but the condition
"op_is_sync(op) && !op_is_write(op)" is false), the line:
	bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
becomes
	bfqd->word_depths[!!bfqd->wr_busy_queues][1];
and yields the right limit for sync writes, depending on
bfqd->wr_busy_queues (a toy sketch of this indexing follows below).
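To make the indexing concrete, here is a toy user-space sketch; it is
not the kernel code: the op bit encoding, the sb_shift value and the
helper bodies are made up, only the two word_depths formulas mirror the
snippet quoted above:

#include <stdio.h>
#include <stdbool.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* stand-ins for the blk-mq predicates (bit 0: sync, bit 1: write) */
static bool op_is_sync(unsigned int op)  { return op & 1; }
static bool op_is_write(unsigned int op) { return op & 2; }

int main(void)
{
	unsigned int sb_shift = 6;		/* pretend 64 tags */
	unsigned int word_depths[2][2] = { { 0 } };
	unsigned int op = 1 | 2;		/* a sync write */
	unsigned int wr_busy_queues = 0;

	/* same formulas as in the comment block quoted above */
	word_depths[0][1] = MAX(((1U << sb_shift) * 3) >> 2, 1U);	/* 48, ~75% */
	word_depths[1][1] = MAX(((1U << sb_shift) * 6) >> 4, 1U);	/* 24, ~37% */

	if (op_is_sync(op) && !op_is_write(op)) {
		printf("sync read: no depth limit\n");	/* not reached for a write */
		return 0;
	}

	/* for a sync write, op_is_sync(op) is 1, so column [1] is selected */
	printf("index = %d, depth = %u\n",
	       (int)op_is_sync(op), word_depths[!!wr_busy_queues][op_is_sync(op)]);
	return 0;
}

So the early return only filters out sync reads; for sync writes the
second column is used, not a guaranteed zero.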

Where is the bug?

Thanks,
Paolo


> 	-Mike
>=20
>=20

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-07  3:23             ` Mike Galbraith
@ 2018-05-07  9:32               ` Paolo Valente
  0 siblings, 0 replies; 18+ messages in thread
From: Paolo Valente @ 2018-05-07  9:32 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr



> On 7 May 2018, at 05:23, Mike Galbraith <efault@gmx.de> wrote:
>
> On Mon, 2018-05-07 at 04:43 +0200, Mike Galbraith wrote:
>> On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
>>>
>>> I've attached a compressed patch (to avoid possible corruption from my
>>> mailer).  I'm not very confident, but no pain, no gain, right?
>>>
>>> If possible, apply this patch on top of the fix I proposed in this
>>> thread, just to eliminate possible further noise. Finally, the
>>> patch content follows.
>>>
>>> Hoping for a stroke of luck,
>>
>> FWIW, box didn't survive the first full build of the morning.
>
> Nor the second.
>

Great, finally the first good news!

Help from blk-mq experts would be essential here.  To increase the
chances of getting it, I'm going to open a new thread on this.  In that
thread I'll ask you to provide an oops or something similar; I hope it
is now easier for you to get one.

Thanks,
Paolo

> 	-Mike

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-07  9:27             ` Paolo Valente
@ 2018-05-07 10:01               ` Mike Galbraith
  2018-05-07 18:03                 ` Paolo Valente
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Galbraith @ 2018-05-07 10:01 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr

On Mon, 2018-05-07 at 11:27 +0200, Paolo Valente wrote:
> 
> 
> Where is the bug?

Hm, seems potent pain-killers and C don't mix all that well.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-07 10:01               ` Mike Galbraith
@ 2018-05-07 18:03                 ` Paolo Valente
  0 siblings, 0 replies; 18+ messages in thread
From: Paolo Valente @ 2018-05-07 18:03 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Jens Axboe, linux-block, LKML, Ulf Hansson, Mark Brown,
	linus.walleij, bfq-iosched, oleksandr



> On 7 May 2018, at 12:01, Mike Galbraith <efault@gmx.de> wrote:
>
> On Mon, 2018-05-07 at 11:27 +0200, Paolo Valente wrote:
>>
>>
>> Where is the bug?
>
> Hm, seems potent pain-killers and C don't mix all that well.
>

I'll try to keep it in mind, and I hope you get well soon.

Thanks,
Paolo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-04 17:17 [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge Paolo Valente
  2018-05-04 19:46 ` Mike Galbraith
  2018-05-06  7:33 ` Oleksandr Natalenko
@ 2018-05-10 16:14 ` Bart Van Assche
  2018-05-14 17:16   ` Paolo Valente
  2 siblings, 1 reply; 18+ messages in thread
From: Bart Van Assche @ 2018-05-10 16:14 UTC (permalink / raw)
  To: paolo.valente, axboe
  Cc: ulf.hansson, linux-kernel, linux-block, broonie, linus.walleij,
	bfq-iosched, oleksandr

On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
> When invoked for an I/O request rq, [ ... ]

Tested-by: Bart Van Assche <bart.vanassche@wdc.com>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-10 16:14 ` Bart Van Assche
@ 2018-05-14 17:16   ` Paolo Valente
  2018-05-14 17:31     ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Valente @ 2018-05-14 17:16 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: axboe, ulf.hansson, linux-kernel, linux-block, broonie,
	linus.walleij, bfq-iosched, oleksandr



> On 10 May 2018, at 18:14, Bart Van Assche <bart.vanassche@wdc.com> wrote:
>
> On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
>> When invoked for an I/O request rq, [ ... ]
>
> Tested-by: Bart Van Assche <bart.vanassche@wdc.com>
>

Any decision for this fix, Jens?

Thanks,
Paolo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-14 17:16   ` Paolo Valente
@ 2018-05-14 17:31     ` Jens Axboe
  2018-05-14 17:37       ` Paolo Valente
  0 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2018-05-14 17:31 UTC (permalink / raw)
  To: Paolo Valente, Bart Van Assche
  Cc: ulf.hansson, linux-kernel, linux-block, broonie, linus.walleij,
	bfq-iosched, oleksandr

On 5/14/18 11:16 AM, Paolo Valente wrote:
> 
> 
>> Il giorno 10 mag 2018, alle ore 18:14, Bart Van Assche <bart.vanassche@wdc.com> ha scritto:
>>
>> On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
>>> When invoked for an I/O request rq, [ ... ]
>>
>> Tested-by: Bart Van Assche <bart.vanassche@wdc.com>
>>
>>
>>
> 
> Any decision for this fix, Jens?

Guess I didn't reply, but I did commit this on Thursday.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge
  2018-05-14 17:31     ` Jens Axboe
@ 2018-05-14 17:37       ` Paolo Valente
  0 siblings, 0 replies; 18+ messages in thread
From: Paolo Valente @ 2018-05-14 17:37 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Bart Van Assche, ulf.hansson, linux-kernel, linux-block, broonie,
	linus.walleij, bfq-iosched, oleksandr



> On 14 May 2018, at 19:31, Jens Axboe <axboe@kernel.dk> wrote:
>
> On 5/14/18 11:16 AM, Paolo Valente wrote:
>>
>>
>>> On 10 May 2018, at 18:14, Bart Van Assche <bart.vanassche@wdc.com> wrote:
>>>
>>> On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
>>>> When invoked for an I/O request rq, [ ... ]
>>>
>>> Tested-by: Bart Van Assche <bart.vanassche@wdc.com>
>>>
>>
>> Any decision for this fix, Jens?
>
> Guess I didn't reply, but I did commit this on Thursday.
>

Great, thank you!

Paolo

> --
> Jens Axboe
>=20

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2018-05-14 17:37 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-04 17:17 [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge Paolo Valente
2018-05-04 19:46 ` Mike Galbraith
2018-05-05  8:19   ` Mike Galbraith
2018-05-05 10:39     ` Paolo Valente
2018-05-05 14:56       ` Mike Galbraith
2018-05-06  7:42         ` Paolo Valente
2018-05-07  2:43           ` Mike Galbraith
2018-05-07  3:23             ` Mike Galbraith
2018-05-07  9:32               ` Paolo Valente
2018-05-07  5:56           ` Mike Galbraith
2018-05-07  9:27             ` Paolo Valente
2018-05-07 10:01               ` Mike Galbraith
2018-05-07 18:03                 ` Paolo Valente
2018-05-06  7:33 ` Oleksandr Natalenko
2018-05-10 16:14 ` Bart Van Assche
2018-05-14 17:16   ` Paolo Valente
2018-05-14 17:31     ` Jens Axboe
2018-05-14 17:37       ` Paolo Valente
