* [PATCH BUGFIX/IMPROVEMENT 0/2] block, bfq: two pending patches
From: Paolo Valente @ 2018-01-13 11:05 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr, Paolo Valente

Hi Jens,
here are again the two pending patches you asked me to resend [1]. One
of them, fixing read-starvation problems, was accompanied by a cover
letter. I'm pasting the content of that cover letter below.

The patch addresses (serious) starvation problems caused by
request-tag exhaustion, as explained in more detail in the commit
message. I started from the solution in the function
kyber_limit_depth, but then I had to define more elaborate limits, to
counter starvation also in cases not covered by kyber_limit_depth.
If this solution proves to be effective, I'm willing to port it
somehow to the other schedulers.
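
For reference, the kyber_limit_depth approach amounts to capping only
async requests with one fixed depth; roughly the following shape
(paraphrased from memory, with stand-in types and an illustrative
async_depth value, not a copy of the kyber code):

  /* stand-in for struct blk_mq_alloc_data, just to make the shape clear */
  struct alloc_data { unsigned int shallow_depth; };

  #define OP_SYNC 1U                      /* illustrative flag */

  static unsigned int async_depth = 48;   /* illustrative single limit */

  /* cap only async requests; sync I/O keeps the full tag depth */
  static void limit_depth_single(unsigned int op, struct alloc_data *data)
  {
          if (!(op & OP_SYNC))
                  data->shallow_depth = async_depth;
  }

The first patch keeps this hook shape, but replaces the single value
with a small matrix of limits, so as to distinguish sync writes from
async I/O and to tighten the limits while some queue is weight-raised.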

Thanks,
Paolo

[1] https://www.spinics.net/lists/linux-block/msg21586.html

Paolo Valente (2):
  block, bfq: limit tags for writes and async I/O
  block, bfq: limit sectors served with interactive weight raising

 block/bfq-iosched.c | 158 +++++++++++++++++++++++++++++++++++++++++++++++++---
 block/bfq-iosched.h |  17 ++++++
 block/bfq-wf2q.c    |   3 +
 3 files changed, 169 insertions(+), 9 deletions(-)

--
2.15.1

* [PATCH BUGFIX/IMPROVEMENT 1/2] block, bfq: limit tags for writes and async I/O
From: Paolo Valente @ 2018-01-13 11:05 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr, Paolo Valente

Asynchronous I/O can easily starve synchronous I/O (both sync reads
and sync writes), by consuming all request tags. Similarly, storms of
synchronous writes, such as those that sync(2) may trigger, can starve
synchronous reads. In turn, these two problems may also cause
BFQ to lose control of latency for interactive and soft real-time
applications. For example, on a PLEXTOR PX-256M5S SSD, LibreOffice
Writer takes 0.6 seconds to start if the device is idle, but it takes
more than 45 seconds (!) if there are sequential writes in the
background.

This commit addresses this issue by limiting the maximum percentage of
tags that asynchronous I/O requests and synchronous write requests can
consume. In particular, this commit grants a higher threshold to
synchronous writes, to prevent the latter from being starved by
asynchronous I/O.
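
To make the numbers concrete (an illustration only, not part of the
patch): with a scheduler tag map whose sbitmap words hold 1U << 6 = 64
bits, the limits introduced below work out as in the following
stand-alone sketch, which simply replays the bfq_update_depths()
arithmetic in user space:

  #include <stdio.h>

  #define MAX(a, b) ((a) > (b) ? (a) : (b))

  int main(void)
  {
          unsigned int sb_shift = 6; /* assumed example: 64-bit sbitmap word */
          unsigned int d[2][2];

          /* no queue weight-raised: 50% for async I/O, 75% for sync writes */
          d[0][0] = MAX((1U << sb_shift) >> 1, 1U);
          d[0][1] = MAX(((1U << sb_shift) * 3) >> 2, 1U);
          /* some queue weight-raised: ~18% for async, ~37% for sync writes */
          d[1][0] = MAX(((1U << sb_shift) * 3) >> 4, 1U);
          d[1][1] = MAX(((1U << sb_shift) * 6) >> 4, 1U);

          printf("no wr: async %u, sync writes %u\n", d[0][0], d[0][1]);
          printf("wr:    async %u, sync writes %u\n", d[1][0], d[1][1]);
          return 0; /* prints 32/48 and 12/24 */
  }

So, out of 64 tags per word, async I/O never takes more than 32 (12
while some queue is weight-raised) and sync writes never more than 48
(24), leaving the remainder available for sync reads.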

According to the above test, LibreOffice Writer now starts in about
1.2 seconds on average, regardless of the background workload, and
apart from some rare outliers. To check this improvement, run, e.g.,
sudo ./comm_startup_lat.sh bfq 5 5 seq 10 "lowriter --terminate_after_init"
for the comm_startup_lat benchmark in the S suite [1].

[1] https://github.com/Algodev-github/S

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 block/bfq-iosched.h | 12 +++++++++
 2 files changed, 89 insertions(+)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 1caeecad7af1..527bd2ccda51 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -417,6 +417,82 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
 	}
 }
 
+/*
+ * See the comments on bfq_limit_depth for the purpose of
+ * the depths set in the function.
+ */
+static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+{
+	bfqd->sb_shift = bt->sb.shift;
+
+	/*
+	 * In-word depths if no bfq_queue is being weight-raised:
+	 * leaving 25% of tags only for sync reads.
+	 *
+	 * In next formulas, right-shift the value
+	 * (1U<<bfqd->sb_shift), instead of computing directly
+	 * (1U<<(bfqd->sb_shift - something)), to be robust against
+	 * any possible value of bfqd->sb_shift, without having to
+	 * limit 'something'.
+	 */
+	/* no more than 50% of tags for async I/O */
+	bfqd->word_depths[0][0] = max((1U<<bfqd->sb_shift)>>1, 1U);
+	/*
+	 * no more than 75% of tags for sync writes (25% extra tags
+	 * w.r.t. async I/O, to prevent async I/O from starving sync
+	 * writes)
+	 */
+	bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
+
+	/*
+	 * In-word depths in case some bfq_queue is being weight-
+	 * raised: leaving ~63% of tags for sync reads. This is the
+	 * highest percentage for which, in our tests, application
+	 * start-up times didn't suffer from any regression due to tag
+	 * shortage.
+	 */
+	/* no more than ~18% of tags for async I/O */
+	bfqd->word_depths[1][0] = max(((1U<<bfqd->sb_shift) * 3)>>4, 1U);
+	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+	bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);
+}
+
+/*
+ * Async I/O can easily starve sync I/O (both sync reads and sync
+ * writes), by consuming all tags. Similarly, storms of sync writes,
+ * such as those that sync(2) may trigger, can starve sync reads.
+ * Limit depths of async I/O and sync writes so as to counter both
+ * problems.
+ */
+static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+{
+	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
+	struct bfq_data *bfqd = data->q->elevator->elevator_data;
+	struct sbitmap_queue *bt;
+
+	if (op_is_sync(op) && !op_is_write(op))
+		return;
+
+	if (data->flags & BLK_MQ_REQ_RESERVED) {
+		if (unlikely(!tags->nr_reserved_tags)) {
+			WARN_ON_ONCE(1);
+			return;
+		}
+		bt = &tags->breserved_tags;
+	} else
+		bt = &tags->bitmap_tags;
+
+	if (unlikely(bfqd->sb_shift != bt->sb.shift))
+		bfq_update_depths(bfqd, bt);
+
+	data->shallow_depth =
+		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+
+	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
+			__func__, bfqd->wr_busy_queues, op_is_sync(op),
+			data->shallow_depth);
+}
+
 static struct bfq_queue *
 bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
 		     sector_t sector, struct rb_node **ret_parent,
@@ -5267,6 +5343,7 @@ static struct elv_fs_entry bfq_attrs[] = {
 
 static struct elevator_type iosched_bfq_mq = {
 	.ops.mq = {
+		.limit_depth		= bfq_limit_depth,
 		.prepare_request	= bfq_prepare_request,
 		.finish_request		= bfq_finish_request,
 		.exit_icq		= bfq_exit_icq,
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 5d47b58d5fc8..fcd941008127 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -629,6 +629,18 @@ struct bfq_data {
 	struct bfq_io_cq *bio_bic;
 	/* bfqq associated with the task issuing current bio for merging */
 	struct bfq_queue *bio_bfqq;
+
+	/*
+	 * Cached sbitmap shift, used to compute depth limits in
+	 * bfq_update_depths.
+	 */
+	unsigned int sb_shift;
+
+	/*
+	 * Depth limits used in bfq_limit_depth (see comments on the
+	 * function)
+	 */
+	unsigned int word_depths[2][2];
 };
 
 enum bfqq_state_flags {
-- 
2.15.1

* [PATCH BUGFIX/IMPROVEMENT 2/2] block, bfq: limit sectors served with interactive weight raising
From: Paolo Valente @ 2018-01-13 11:05 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr, Paolo Valente

To maximise responsiveness, BFQ raises the weight, and performs device
idling, for bfq_queues associated with processes deemed as
interactive. In particular, weight raising has a maximum duration,
equal to the time needed to start a large application. If a
weight-raised process goes on doing I/O beyond this maximum duration,
it loses weight-raising.

This mechanism is evidently vulnerable to the following false
positives: I/O-bound applications that will go on doing I/O for much
longer than the duration of weight-raising. These applications gain
basically nothing from being weight-raised at the beginning of their
I/O. Worse, while weight-raised, these applications
a) unjustly steal throughput from applications that may truly need
low latency;
b) make BFQ uselessly perform device idling; device idling results
in a loss of device throughput with most flash-based storage, and may
increase latencies when used purposelessly.

This commit adds a countermeasure to reduce both of the above
problems. To introduce this countermeasure, we provide the following
extra piece of information (full details are in the comments added by
this commit). During the start-up of the large application used as a
reference to set the duration of weight-raising, the processes
involved transfer at most ~110K sectors each. Accordingly, a process
initially deemed as interactive no longer deserves to be weight-raised
once it has transferred 110K sectors or more.

Based on this consideration, this commit ends weight-raising early
for a bfq_queue once the queue has received an amount of service at
least equal to 110K sectors (actually a little more, to keep a safety
margin). I/O-bound applications that reach a high throughput, such as
file copying, hit this threshold well before the allowed
weight-raising period finishes. This early ending of weight-raising
thus reduces the amount of time during which these applications cause
the problems described above.
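
To make the mechanism concrete, here is a minimal user-space sketch of
the accounting and of the early-end check added below (the struct and
function names are illustrative; the real code lives in
bfq_bfqq_served() and bfq_update_wr_data(), and additionally skips
queues whose weight-raising is the soft real-time one):

  /* sectors; slightly above the measured ~110K, as in the patch */
  static const unsigned long max_service_from_wr = 120000;

  struct queue_sketch {                   /* illustrative stand-in for bfq_queue */
          unsigned int wr_coeff;          /* > 1 while weight-raised */
          unsigned long service_from_wr;  /* sectors served since wr started */
  };

  /* invoked every time the queue is served */
  static void account_service(struct queue_sketch *q, int served)
  {
          if (q->wr_coeff > 1)
                  q->service_from_wr += served;
  }

  /* end weight-raising early once the threshold is crossed */
  static int must_end_wr_early(const struct queue_sketch *q)
  {
          return q->wr_coeff > 1 && q->service_from_wr > max_service_from_wr;
  }

With these limits, a greedy sequential reader served, say, 500 sectors
per dispatch crosses the 120000-sector threshold after about 240
dispatches, typically long before its time-based weight-raising period
would have expired.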

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++------
 block/bfq-iosched.h |  5 ++++
 block/bfq-wf2q.c    |  3 ++
 3 files changed, 80 insertions(+), 9 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 527bd2ccda51..93a97a7fe519 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -209,15 +209,17 @@ static struct kmem_cache *bfq_pool;
  * interactive applications automatically, using the following formula:
  * duration = (R / r) * T, where r is the peak rate of the device, and
  * R and T are two reference parameters.
- * In particular, R is the peak rate of the reference device (see below),
- * and T is a reference time: given the systems that are likely to be
- * installed on the reference device according to its speed class, T is
- * about the maximum time needed, under BFQ and while reading two files in
- * parallel, to load typical large applications on these systems.
- * In practice, the slower/faster the device at hand is, the more/less it
- * takes to load applications with respect to the reference device.
- * Accordingly, the longer/shorter BFQ grants weight raising to interactive
- * applications.
+ * In particular, R is the peak rate of the reference device (see
+ * below), and T is a reference time: given the systems that are
+ * likely to be installed on the reference device according to its
+ * speed class, T is about the maximum time needed, under BFQ and
+ * while reading two files in parallel, to load typical large
+ * applications on these systems (see the comments on
+ * max_service_from_wr below, for more details on how T is obtained).
+ * In practice, the slower/faster the device at hand is, the more/less
+ * it takes to load applications with respect to the reference device.
+ * Accordingly, the longer/shorter BFQ grants weight raising to
+ * interactive applications.
  *
  * BFQ uses four different reference pairs (R, T), depending on:
  * . whether the device is rotational or non-rotational;
@@ -254,6 +256,60 @@ static int T_slow[2];
 static int T_fast[2];
 static int device_speed_thresh[2];
 
+/*
+ * BFQ uses the above-detailed, time-based weight-raising mechanism to
+ * privilege interactive tasks. This mechanism is vulnerable to the
+ * following false positives: I/O-bound applications that will go on
+ * doing I/O for much longer than the duration of weight
+ * raising. These applications have basically no benefit from being
+ * weight-raised at the beginning of their I/O. On the opposite end,
+ * while being weight-raised, these applications
+ * a) unjustly steal throughput from applications that may actually need
+ * low latency;
+ * b) make BFQ uselessly perform device idling; device idling results
+ * in loss of device throughput with most flash-based storage, and may
+ * increase latencies when used purposelessly.
+ *
+ * BFQ tries to reduce these problems, by adopting the following
+ * countermeasure. To introduce this countermeasure, we need first to
+ * finish explaining how the duration of weight-raising for
+ * interactive tasks is computed.
+ *
+ * For a bfq_queue deemed as interactive, the duration of weight
+ * raising is dynamically adjusted, as a function of the estimated
+ * peak rate of the device, so as to be equal to the time needed to
+ * execute the 'largest' interactive task we benchmarked so far. By
+ * largest task, we mean the task for which each involved process has
+ * to do more I/O than for any of the other tasks we benchmarked. This
+ * reference interactive task is the start-up of LibreOffice Writer,
+ * and in this task each process/bfq_queue needs to have at most ~110K
+ * sectors transferred.
+ *
+ * This last piece of information enables BFQ to reduce the actual
+ * duration of weight-raising for at least one class of I/O-bound
+ * applications: those doing sequential or quasi-sequential I/O. An
+ * example is file copy. In fact, once started, the main I/O-bound
+ * processes of these applications usually consume the above 110K
+ * sectors in much less time than the processes of an application that
+ * is starting, because these I/O-bound processes will greedily devote
+ * almost all their CPU cycles only to their target,
+ * throughput-friendly I/O operations. This is even more true if BFQ
+ * happens to be underestimating the device peak rate, and thus
+ * overestimating the duration of weight raising. But, according to
+ * our measurements, once they have transferred 110K sectors, these
+ * processes have no right to be weight-raised any longer.
+ *
+ * Based on this last consideration, BFQ ends weight-raising for a
+ * bfq_queue if the latter happens to have received an amount of
+ * service at least equal to the following constant. The constant is
+ * set to slightly more than 110K, to have a minimum safety margin.
+ *
+ * This early ending of weight-raising reduces the amount of time
+ * during which interactive false positives cause the two problems
+ * described at the beginning of these comments.
+ */
+static const unsigned long max_service_from_wr = 120000;
+
 #define RQ_BIC(rq)		icq_to_bic((rq)->elv.priv[0])
 #define RQ_BFQQ(rq)		((rq)->elv.priv[1])
 
@@ -1352,6 +1408,7 @@ static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd,
 	if (old_wr_coeff == 1 && wr_or_deserves_wr) {
 		/* start a weight-raising period */
 		if (interactive) {
+			bfqq->service_from_wr = 0;
 			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
 			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
 		} else {
@@ -3665,6 +3722,12 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 				bfqq->entity.prio_changed = 1;
 			}
 		}
+		if (bfqq->wr_coeff > 1 &&
+		    bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time &&
+		    bfqq->service_from_wr > max_service_from_wr) {
+			/* see comments on max_service_from_wr */
+			bfq_bfqq_end_wr(bfqq);
+		}
 	}
 	/*
 	 * To improve latency (for this or other queues), immediately
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index fcd941008127..350c39ae2896 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -337,6 +337,11 @@ struct bfq_queue {
 	 * last transition from idle to backlogged.
 	 */
 	unsigned long service_from_backlogged;
+	/*
+	 * Cumulative service received from the @bfq_queue since its
+	 * last transition to weight-raised state.
+	 */
+	unsigned long service_from_wr;
 
 	/*
 	 * Value of wr start time when switching to soft rt
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 4456eda34e48..4498c43245e2 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -838,6 +838,9 @@ void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
 	if (!bfqq->service_from_backlogged)
 		bfqq->first_IO_time = jiffies;
 
+	if (bfqq->wr_coeff > 1)
+		bfqq->service_from_wr += served;
+
 	bfqq->service_from_backlogged += served;
 	for_each_entity(entity) {
 		st = bfq_entity_service_tree(entity);
-- 
2.15.1

* Re: [PATCH BUGFIX/IMPROVEMENT 0/2] block, bfq: two pending patches
From: Oleksandr Natalenko @ 2018-01-13 13:02 UTC (permalink / raw)
  To: Paolo Valente
  Cc: Jens Axboe, linux-block, linux-kernel, ulf.hansson, broonie,
	linus.walleij, bfq-iosched

Hi.

13.01.2018 12:05, Paolo Valente wrote:
> Hi Jens,
> here are again the two pending patches you asked me to resend [1]. One
> of them, fixing read-starvation problems, was accompanied by a cover
> letter. I'm pasting the content of that cover letter below.
> 
> The patch addresses (serious) starvation problems caused by
> request-tag exhaustion, as explained in more detail in the commit
> message. I started from the solution in the function
> kyber_limit_depth, but then I had to define more elaborate limits, to
> counter starvation also in cases not covered by kyber_limit_depth.
> If this solution proves to be effective, I'm willing to port it
> somehow to the other schedulers.
> 
> Thanks,
> Paolo
> 
> [1] https://www.spinics.net/lists/linux-block/msg21586.html
> 
> Paolo Valente (2):
>   block, bfq: limit tags for writes and async I/O
>   block, bfq: limit sectors served with interactive weight raising
> 
>  block/bfq-iosched.c | 158 +++++++++++++++++++++++++++++++++++++++++++++++++---
>  block/bfq-iosched.h |  17 ++++++
>  block/bfq-wf2q.c    |   3 +
>  3 files changed, 169 insertions(+), 9 deletions(-)
> 
> --
> 2.15.1

I've been running the system with these patches since the end of December,
so with regard to stability and visible smoke:

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>

for both of them.

Many thanks, Paolo!

* Re: [PATCH BUGFIX/IMPROVEMENT 0/2] block, bfq: two pending patches
From: Paolo Valente @ 2018-01-18  8:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr



> On 13 Jan 2018, at 12:05, Paolo Valente <paolo.valente@linaro.org> wrote:
> 
> Hi Jens,
> here are again the two pending patches you asked me to resend [1]. One
> of them, fixing read-starvation problems, was accompanied by a cover
> letter. I'm pasting the content of that cover letter below.
> 
> The patch addresses (serious) starvation problems caused by
> request-tag exhaustion, as explained in more detail in the commit
> message. I started from the solution in the function
> kyber_limit_depth, but then I had to define more elaborate limits, to
> counter starvation also in cases not covered by kyber_limit_depth.
> If this solution proves to be effective, I'm willing to port it
> somehow to the other schedulers.
> 

Hi Jens,
have you had time to check these patches? Sorry for pushing, but I
guess 4.16 is getting closer, and these patches are performance
critical, especially the first, which solves a starvation problem.

Thanks,
Paolo

> Thanks,
> Paolo
> 
> [1] https://www.spinics.net/lists/linux-block/msg21586.html
> 
> Paolo Valente (2):
>  block, bfq: limit tags for writes and async I/O
>  block, bfq: limit sectors served with interactive weight raising
> 
> block/bfq-iosched.c | 158 +++++++++++++++++++++++++++++++++++++++++++++++++---
> block/bfq-iosched.h |  17 ++++++
> block/bfq-wf2q.c    |   3 +
> 3 files changed, 169 insertions(+), 9 deletions(-)
> 
> --
> 2.15.1

* Re: [PATCH BUGFIX/IMPROVEMENT 0/2] block, bfq: two pending patches
  4 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2018-01-18 15:23 UTC (permalink / raw)
  To: Paolo Valente
  Cc: linux-block, linux-kernel, ulf.hansson, broonie, linus.walleij,
	bfq-iosched, oleksandr

On 1/13/18 4:05 AM, Paolo Valente wrote:
> Hi Jens,
> here are again the two pending patches you asked me to resend [1]. One
> of them, fixing read-starvation problems, was accompanied by a cover
> letter. I'm pasting the content of that cover letter below.
> 
> The patch addresses (serious) starvation problems caused by
> request-tag exhaustion, as explained in more detail in the commit
> message. I started from the solution in the function
> kyber_limit_depth, but then I had to define more elaborate limits, to
> counter starvation also in cases not covered by kyber_limit_depth.
> If this solution proves to be effective, I'm willing to port it
> somehow to the other schedulers.

It's something we've been doing in the old request layer for tagging for
a long time (more than a decade), so a generic (and fast) solution that
covers all cases for blk-mq-tag would indeed be great.

For now, I have applied these for 4.16, thanks.

-- 
Jens Axboe
