* (no subject)
@ 2017-10-09  7:37 ` Michael Lyle
  0 siblings, 0 replies; 6+ messages in thread
From: Michael Lyle @ 2017-10-09  7:37 UTC (permalink / raw)
  To: linux-bcache, linux-block; +Cc: colyli

[PATCH v2 1/2] bcache: writeback rate shouldn't artificially clamp
[PATCH v2 2/2] bcache: rearrange writeback main thread ratelimit

This is a reroll of the previous "don't clamp" patch.  It corrects
type issues where negative numbers were handled badly (mostly for
display in writeback_rate_debug).

Additionally, there is a new, related patch: during scanning for dirty
blocks, don't reset the ratelimiting counter.  This can prevent
undershoots/overshoots of the target rate caused by scanning.

Mike

^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH v2 1/2] bcache: writeback rate shouldn't artificially clamp
  2017-10-09  7:37 ` (unknown), Michael Lyle
@ 2017-10-09  7:37 ` Michael Lyle
  2017-10-09 10:59   ` Coly Li
  -1 siblings, 1 reply; 6+ messages in thread
From: Michael Lyle @ 2017-10-09  7:37 UTC (permalink / raw)
  To: linux-bcache, linux-block; +Cc: colyli, Michael Lyle

The previous code artificially limited writeback rate to 1000000
blocks/second (NSEC_PER_MSEC), which is a rate that can be met on fast
hardware.  The rate limiting code works fine (though with decreased
precision) up to 3 orders of magnitude faster, so use NSEC_PER_SEC.

Additionally, ensure that uint32_t is used as a type for rate throughout
the rate management so that type checking/clamp_t can work properly.

bch_next_delay should be rewritten for increased precision and better
handling of high rates and long sleep periods, but this is adequate for
now.

Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reported-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h    | 2 +-
 drivers/md/bcache/util.h      | 4 ++--
 drivers/md/bcache/writeback.c | 7 ++++---
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index df0b2ccfca8d..2f9c380c4692 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -363,7 +363,7 @@ struct cached_dev {
 	int64_t			writeback_rate_proportional;
 	int64_t			writeback_rate_integral;
 	int64_t			writeback_rate_integral_scaled;
-	int64_t			writeback_rate_change;
+	int32_t			writeback_rate_change;
 
 	unsigned		writeback_rate_update_seconds;
 	unsigned		writeback_rate_i_term_inverse;
diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
index cb8d2ccbb6c6..8f509290bb02 100644
--- a/drivers/md/bcache/util.h
+++ b/drivers/md/bcache/util.h
@@ -441,10 +441,10 @@ struct bch_ratelimit {
 	uint64_t		next;
 
 	/*
-	 * Rate at which we want to do work, in units per nanosecond
+	 * Rate at which we want to do work, in units per second
 	 * The units here correspond to the units passed to bch_next_delay()
 	 */
-	unsigned		rate;
+	uint32_t		rate;
 };
 
 static inline void bch_ratelimit_reset(struct bch_ratelimit *d)
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index d63356a60001..42d087b9fb56 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -52,7 +52,8 @@ static void __update_writeback_rate(struct cached_dev *dc)
 	int64_t error = dirty - target;
 	int64_t proportional_scaled =
 		div_s64(error, dc->writeback_rate_p_term_inverse);
-	int64_t integral_scaled, new_rate;
+	int64_t integral_scaled;
+	uint32_t new_rate;
 
 	if ((error < 0 && dc->writeback_rate_integral > 0) ||
 	    (error > 0 && time_before64(local_clock(),
@@ -74,8 +75,8 @@ static void __update_writeback_rate(struct cached_dev *dc)
 	integral_scaled = div_s64(dc->writeback_rate_integral,
 			dc->writeback_rate_i_term_inverse);
 
-	new_rate = clamp_t(int64_t, (proportional_scaled + integral_scaled),
-			dc->writeback_rate_minimum, NSEC_PER_MSEC);
+	new_rate = clamp_t(int32_t, (proportional_scaled + integral_scaled),
+			dc->writeback_rate_minimum, NSEC_PER_SEC);
 
 	dc->writeback_rate_proportional = proportional_scaled;
 	dc->writeback_rate_integral_scaled = integral_scaled;
-- 
2.11.0


* [PATCH v2 2/2] bcache: rearrange writeback main thread ratelimit
  2017-10-09  7:37 ` (unknown), Michael Lyle
@ 2017-10-09  7:37 ` Michael Lyle
  2017-10-13 23:20   ` Kent Overstreet
  -1 siblings, 1 reply; 6+ messages in thread
From: Michael Lyle @ 2017-10-09  7:37 UTC (permalink / raw)
  To: linux-bcache, linux-block; +Cc: colyli, Michael Lyle

The time spent searching for things to write back "counts" for the
actual rate achieved, so don't flush the accumulated rate with each
chunk.

This will maintain better fidelity to user-commanded rates, but it
may slightly increase the burstiness of writeback.  The writeback
lock needs improvement to help mitigate this.

Signed-off-by: Michael Lyle <mlyle@lyle.org>
---
 drivers/md/bcache/writeback.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 42d087b9fb56..719b104db0cc 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -526,6 +526,8 @@ static int bch_writeback_thread(void *arg)
 	struct cached_dev *dc = arg;
 	bool searched_full_index;
 
+	bch_ratelimit_reset(&dc->writeback_rate);
+
 	while (!kthread_should_stop()) {
 		down_write(&dc->writeback_lock);
 		if (!atomic_read(&dc->has_dirty) ||
@@ -553,7 +555,6 @@ static int bch_writeback_thread(void *arg)
 
 		up_write(&dc->writeback_lock);
 
-		bch_ratelimit_reset(&dc->writeback_rate);
 		read_dirty(dc);
 
 		if (searched_full_index) {
@@ -563,6 +564,8 @@ static int bch_writeback_thread(void *arg)
 			       !kthread_should_stop() &&
 			       !test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
 				delay = schedule_timeout_interruptible(delay);
+
+			bch_ratelimit_reset(&dc->writeback_rate);
 		}
 	}
 
-- 
2.11.0


* Re: [PATCH v2 1/2] bcache: writeback rate shouldn't artificially clamp
  2017-10-09  7:37 ` [PATCH v2 1/2] bcache: writeback rate shouldn't artificially clamp Michael Lyle
@ 2017-10-09 10:59   ` Coly Li
  0 siblings, 0 replies; 6+ messages in thread
From: Coly Li @ 2017-10-09 10:59 UTC (permalink / raw)
  To: Michael Lyle; +Cc: linux-bcache, linux-block, colyli

On 2017/10/9 3:37 PM, Michael Lyle wrote:
> The previous code artificially limited writeback rate to 1000000
> blocks/second (NSEC_PER_MSEC), which is a rate that can be met on fast
> hardware.  The rate limiting code works fine (though with decreased
> precision) up to 3 orders of magnitude faster, so use NSEC_PER_SEC.
> 
> Additionally, ensure that uint32_t is used as a type for rate throughout
> the rate management so that type checking/clamp_t can work properly.
> 
> bch_next_delay should be rewritten for increased precision and better
> handling of high rates and long sleep periods, but this is adequate for
> now.
> 
> Signed-off-by: Michael Lyle <mlyle@lyle.org>
> Reported-by: Coly Li <colyli@suse.de>

The current maximum rate is around 488.2M/sec, which could easily become a
bottleneck for an md raid0 device built from SATA SSDs or several hard
drives. The new maximum rate (476G/sec) should be enough for most common
hardware configurations :-)

Reviewed-by: Coly Li <colyli@suse.de>

Thanks.

Coly Li
> ---
>  drivers/md/bcache/bcache.h    | 2 +-
>  drivers/md/bcache/util.h      | 4 ++--
>  drivers/md/bcache/writeback.c | 7 ++++---
>  3 files changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index df0b2ccfca8d..2f9c380c4692 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -363,7 +363,7 @@ struct cached_dev {
>  	int64_t			writeback_rate_proportional;
>  	int64_t			writeback_rate_integral;
>  	int64_t			writeback_rate_integral_scaled;
> -	int64_t			writeback_rate_change;
> +	int32_t			writeback_rate_change;
>  
>  	unsigned		writeback_rate_update_seconds;
>  	unsigned		writeback_rate_i_term_inverse;
> diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
> index cb8d2ccbb6c6..8f509290bb02 100644
> --- a/drivers/md/bcache/util.h
> +++ b/drivers/md/bcache/util.h
> @@ -441,10 +441,10 @@ struct bch_ratelimit {
>  	uint64_t		next;
>  
>  	/*
> -	 * Rate at which we want to do work, in units per nanosecond
> +	 * Rate at which we want to do work, in units per second
>  	 * The units here correspond to the units passed to bch_next_delay()
>  	 */
> -	unsigned		rate;
> +	uint32_t		rate;
>  };
>  
>  static inline void bch_ratelimit_reset(struct bch_ratelimit *d)
> diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
> index d63356a60001..42d087b9fb56 100644
> --- a/drivers/md/bcache/writeback.c
> +++ b/drivers/md/bcache/writeback.c
> @@ -52,7 +52,8 @@ static void __update_writeback_rate(struct cached_dev *dc)
>  	int64_t error = dirty - target;
>  	int64_t proportional_scaled =
>  		div_s64(error, dc->writeback_rate_p_term_inverse);
> -	int64_t integral_scaled, new_rate;
> +	int64_t integral_scaled;
> +	uint32_t new_rate;
>  
>  	if ((error < 0 && dc->writeback_rate_integral > 0) ||
>  	    (error > 0 && time_before64(local_clock(),
> @@ -74,8 +75,8 @@ static void __update_writeback_rate(struct cached_dev *dc)
>  	integral_scaled = div_s64(dc->writeback_rate_integral,
>  			dc->writeback_rate_i_term_inverse);
>  
> -	new_rate = clamp_t(int64_t, (proportional_scaled + integral_scaled),
> -			dc->writeback_rate_minimum, NSEC_PER_MSEC);
> +	new_rate = clamp_t(int32_t, (proportional_scaled + integral_scaled),
> +			dc->writeback_rate_minimum, NSEC_PER_SEC);
>  
>  	dc->writeback_rate_proportional = proportional_scaled;
>  	dc->writeback_rate_integral_scaled = integral_scaled;


* Re: [PATCH v2 2/2] bcache: rearrange writeback main thread ratelimit
  2017-10-09  7:37 ` [PATCH v2 2/2] bcache: rearrange writeback main thread ratelimit Michael Lyle
@ 2017-10-13 23:20   ` Kent Overstreet
  0 siblings, 0 replies; 6+ messages in thread
From: Kent Overstreet @ 2017-10-13 23:20 UTC (permalink / raw)
  To: Michael Lyle; +Cc: linux-bcache, linux-block, colyli

On Mon, Oct 09, 2017 at 12:37:30AM -0700, Michael Lyle wrote:
> The time spent searching for things to write back "counts" for the
> actual rate achieved, so don't flush the accumulated rate with each
> chunk.
> 
> This will maintain better fidelity to user-commanded rates, but it
> may slightly increase the burstiness of writeback.  The writeback
> lock needs improvement to help mitigate this.
> 
> Signed-off-by: Michael Lyle <mlyle@lyle.org>

Reviewed-by: Kent Overstreet <kent.overstreet@gmail.com>

> ---
>  drivers/md/bcache/writeback.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
> index 42d087b9fb56..719b104db0cc 100644
> --- a/drivers/md/bcache/writeback.c
> +++ b/drivers/md/bcache/writeback.c
> @@ -526,6 +526,8 @@ static int bch_writeback_thread(void *arg)
>  	struct cached_dev *dc = arg;
>  	bool searched_full_index;
>  
> +	bch_ratelimit_reset(&dc->writeback_rate);
> +
>  	while (!kthread_should_stop()) {
>  		down_write(&dc->writeback_lock);
>  		if (!atomic_read(&dc->has_dirty) ||
> @@ -553,7 +555,6 @@ static int bch_writeback_thread(void *arg)
>  
>  		up_write(&dc->writeback_lock);
>  
> -		bch_ratelimit_reset(&dc->writeback_rate);
>  		read_dirty(dc);
>  
>  		if (searched_full_index) {
> @@ -563,6 +564,8 @@ static int bch_writeback_thread(void *arg)
>  			       !kthread_should_stop() &&
>  			       !test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
>  				delay = schedule_timeout_interruptible(delay);
> +
> +			bch_ratelimit_reset(&dc->writeback_rate);
>  		}
>  	}
>  
> -- 
> 2.11.0
> 


end of thread, other threads:[~2017-10-13 23:20 UTC | newest]

Thread overview: 6+ messages
-- links below jump to the message on this page --
2017-10-09  7:37 Michael Lyle
2017-10-09  7:37 ` (unknown), Michael Lyle
2017-10-09  7:37 ` [PATCH v2 1/2] bcache: writeback rate shouldn't artificially clamp Michael Lyle
2017-10-09 10:59   ` Coly Li
2017-10-09  7:37 ` [PATCH v2 2/2] bcache: rearrange writeback main thread ratelimit Michael Lyle
2017-10-13 23:20   ` Kent Overstreet
