* [PATCH 00/12] bcache patches for Linux v4.21
@ 2018-12-13 14:53 Coly Li
  2018-12-13 14:53 ` [PATCH 01/12] bcache: add comment for cache_set->fill_iter Coly Li
                   ` (12 more replies)
  0 siblings, 13 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Coly Li

Hi Jens,

Here are the patches for Linux v4.21.

The patches from me are a set of disabled-by-default optimizations
for writeback cache mode, which are used by some users I know. Guoju
Fang contributes a patch to add the number of keys in
trace_bcache_journal_write(). And there are 6 patches from Shenghui
Wang, which remove an unnecessary NULL check for
debugfs_remove_recursive() and debugfs_remove(), add useful code
comments for bcache, and improve the sysfs information display for
bcache writeback rate parameters.

We don't have any important or big changes in this run. Please take
these patches.

Thanks in advance.

Coly Li
---
Coly Li (5):
  bcache: introduce force_wake_up_gc()
  bcache: option to automatically run gc thread after writeback
    accomplished
  bcache: add MODULE_DESCRIPTION information
  bcache: make cutoff_writeback and cutoff_writeback_sync tunnable
  bcache: set writeback_percent in a flexible range

Guoju Fang (1):
  bcache: print number of keys in trace_bcache_journal_write

Shenghui Wang (6):
  bcache: add comment for cache_set->fill_iter
  bcache: do not check if debug dentry is ERR or NULL explicitly on
    remove
  bcache: update comment for bch_data_insert
  bcache: update comment in sysfs.c
  bcache: do not mark writeback_running until backing dev attached to
    cache_set
  bcache: cannot set writeback_running via sysfs if no writeback kthread
    created

 drivers/md/bcache/bcache.h    | 20 +++++++++++++-
 drivers/md/bcache/btree.c     |  5 ++++
 drivers/md/bcache/btree.h     | 18 +++++++++++++
 drivers/md/bcache/debug.c     |  3 +--
 drivers/md/bcache/journal.c   |  2 +-
 drivers/md/bcache/request.c   |  6 ++---
 drivers/md/bcache/super.c     | 48 +++++++++++++++++++++++++++++++---
 drivers/md/bcache/sysfs.c     | 61 +++++++++++++++++++++++++++++--------------
 drivers/md/bcache/writeback.c | 30 ++++++++++++++++++++-
 drivers/md/bcache/writeback.h | 12 +++++++--
 include/trace/events/bcache.h | 27 ++++++++++++++++---
 11 files changed, 195 insertions(+), 37 deletions(-)

-- 
2.16.4



* [PATCH 01/12] bcache: add comment for cache_set->fill_iter
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 02/12] bcache: do not check if debug dentry is ERR or NULL explicitly on remove Coly Li
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Shenghui Wang, Coly Li

From: Shenghui Wang <shhuiw@foxmail.com>

We have the following define for btree iterator:
	struct btree_iter {
		size_t size, used;
	#ifdef CONFIG_BCACHE_DEBUG
		struct btree_keys *b;
	#endif
		struct btree_iter_set {
			struct bkey *k, *end;
		} data[MAX_BSETS];
	};

We can see that the length of the data[] field is a static MAX_BSETS,
which is currently defined as 4.

But a btree node on disk could have too many bsets for an iterator to
fit on the stack - maybe far more than MAX_BSETS - so we have to
dynamically allocate space to hold more btree_iter_sets.

bch_cache_set_alloc() will make sure the pool cache_set->fill_iter can
allocate an iterator with enough room to hold
	(sb.bucket_size / sb.block_size)
btree_iter_sets, which is more than the static MAX_BSETS.

bch_btree_node_read_done() will use that pool to allocate one iterator,
to hold the many bsets of one btree node.
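
As a worked example (the numbers here are only illustrative, not taken
from this patch): with sb.bucket_size = 1024 sectors and
sb.block_size = 8 sectors, bch_btree_node_read_done() sizes the
iterator as

	iter->size = sb.bucket_size / sb.block_size = 1024 / 8 = 128

btree_iter_sets, far more than the 4 slots a static on-stack iterator
limited to MAX_BSETS would provide.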

Add more comments around cache_set->fill_iter to make the code less
confusing.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h | 6 +++++-
 drivers/md/bcache/btree.c  | 5 +++++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index b61b83bbcfff..96d2213f279e 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -658,7 +658,11 @@ struct cache_set {
 
 	/*
 	 * A btree node on disk could have too many bsets for an iterator to fit
-	 * on the stack - have to dynamically allocate them
+	 * on the stack - have to dynamically allocate them.
+	 * bch_cache_set_alloc() will make sure the pool can allocate iterators
+	 * with enough room to hold
+	 *     (sb.bucket_size / sb.block_size)
+	 * btree_iter_sets, which is more than the static MAX_BSETS.
 	 */
 	mempool_t		fill_iter;
 
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 3f4211b5cd33..23cb1dc7296b 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -207,6 +207,11 @@ void bch_btree_node_read_done(struct btree *b)
 	struct bset *i = btree_bset_first(b);
 	struct btree_iter *iter;
 
+	/*
+	 * c->fill_iter can allocate an iterator with more memory space
+	 * than static MAX_BSETS.
+	 * See the comment around cache_set->fill_iter.
+	 */
 	iter = mempool_alloc(&b->c->fill_iter, GFP_NOIO);
 	iter->size = b->c->sb.bucket_size / b->c->sb.block_size;
 	iter->used = 0;
-- 
2.16.4



* [PATCH 02/12] bcache: do not check if debug dentry is ERR or NULL explicitly on remove
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
  2018-12-13 14:53 ` [PATCH 01/12] bcache: add comment for cache_set->fill_iter Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 03/12] bcache: update comment for bch_data_insert Coly Li
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Shenghui Wang, Coly Li

From: Shenghui Wang <shhuiw@foxmail.com>

debugfs_remove() and debugfs_remove_recursive() will check if the
dentry pointer is NULL or an ERR pointer, and will do nothing in that
case.

Remove the explicit check in cache_set_free() and bch_debug_exit().
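
For reference, the removal is safe because debugfs performs the same
check internally; the helpers look roughly like this (paraphrased, not
part of this patch):

	void debugfs_remove(struct dentry *dentry)
	{
		if (IS_ERR_OR_NULL(dentry))
			return;
		/* ... proceed with the actual removal ... */
	}

debugfs_remove_recursive() starts with the same IS_ERR_OR_NULL() guard.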

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/debug.c | 3 +--
 drivers/md/bcache/super.c | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index 8f448b9c96a1..8b123be05254 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -249,8 +249,7 @@ void bch_debug_init_cache_set(struct cache_set *c)
 
 void bch_debug_exit(void)
 {
-	if (!IS_ERR_OR_NULL(bcache_debug))
-		debugfs_remove_recursive(bcache_debug);
+	debugfs_remove_recursive(bcache_debug);
 }
 
 void __init bch_debug_init(void)
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 7bbd670a5a84..5b59d44656c0 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1510,8 +1510,7 @@ static void cache_set_free(struct closure *cl)
 	struct cache *ca;
 	unsigned int i;
 
-	if (!IS_ERR_OR_NULL(c->debug))
-		debugfs_remove(c->debug);
+	debugfs_remove(c->debug);
 
 	bch_open_buckets_free(c);
 	bch_btree_cache_free(c);
-- 
2.16.4



* [PATCH 03/12] bcache: update comment for bch_data_insert
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
  2018-12-13 14:53 ` [PATCH 01/12] bcache: add comment for cache_set->fill_iter Coly Li
  2018-12-13 14:53 ` [PATCH 02/12] bcache: do not check if debug dentry is ERR or NULL explicitly on remove Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 04/12] bcache: update comment in sysfs.c Coly Li
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Shenghui Wang, Coly Li

From: Shenghui Wang <shhuiw@foxmail.com>

Commit 220bb38c21b8 ("bcache: Break up struct search") introduced
changes to struct search and s->iop. bypass/bio are now fields of
struct data_insert_op. Update the comment accordingly.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/request.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 3bf35914bb57..15070412a32e 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -311,11 +311,11 @@ static void bch_data_insert_start(struct closure *cl)
  * data is written it calls bch_journal, and after the keys have been added to
  * the next journal write they're inserted into the btree.
  *
- * It inserts the data in s->cache_bio; bi_sector is used for the key offset,
+ * It inserts the data in op->bio; bi_sector is used for the key offset,
  * and op->inode is used for the key inode.
  *
- * If s->bypass is true, instead of inserting the data it invalidates the
- * region of the cache represented by s->cache_bio and op->inode.
+ * If op->bypass is true, instead of inserting the data it invalidates the
+ * region of the cache represented by op->bio and op->inode.
  */
 void bch_data_insert(struct closure *cl)
 {
-- 
2.16.4



* [PATCH 04/12] bcache: update comment in sysfs.c
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (2 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 03/12] bcache: update comment for bch_data_insert Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 05/12] bcache: do not mark writeback_running until backing dev attached to cache_set Coly Li
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Shenghui Wang, Coly Li

From: Shenghui Wang <shhuiw@foxmail.com>

We have struct cached_dev allocated by kzalloc() in register_bcache(),
which initializes all the fields of cached_dev with 0s. And commit
ce4c3e19e520 ("bcache: Replace bch_read_string_list() by
__sysfs_match_string()") removed the string "default".

Update the comment.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/sysfs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 26f035a0c5b9..d2e5c9892d4d 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -16,7 +16,7 @@
 #include <linux/sort.h>
 #include <linux/sched/clock.h>
 
-/* Default is -1; we skip past it for struct cached_dev's cache mode */
+/* Default is 0 ("writethrough") */
 static const char * const bch_cache_modes[] = {
 	"writethrough",
 	"writeback",
@@ -25,7 +25,7 @@ static const char * const bch_cache_modes[] = {
 	NULL
 };
 
-/* Default is -1; we skip past it for stop_when_cache_set_failed */
+/* Default is 0 ("auto") */
 static const char * const bch_stop_on_failure_modes[] = {
 	"auto",
 	"always",
-- 
2.16.4



* [PATCH 05/12] bcache: do not mark writeback_running until backing dev attached to cache_set
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (3 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 04/12] bcache: update comment in sysfs.c Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 06/12] bcache: cannot set writeback_running via sysfs if no writeback kthread created Coly Li
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Shenghui Wang, Coly Li

From: Shenghui Wang <shhuiw@foxmail.com>

A fresh backing device is not attached to any cache_set, and has no
writeback kthread created until it is first attached to a cache_set.

But bch_cached_dev_writeback_init runs
"
	dc->writeback_running		= true;
	WARN_ON(test_and_clear_bit(BCACHE_DEV_WB_RUNNING,
			&dc->disk.flags));
"
for any newly formatted backing devices.

For a fresh standalone backing device, we can get something like the
following even though no writeback kthread has been created:
------------------------
/sys/block/bcache0/bcache# cat writeback_running
1
/sys/block/bcache0/bcache# cat writeback_rate_debug
rate:		512.0k/sec
dirty:		0.0k
target:		0.0k
proportional:	0.0k
integral:	0.0k
change:		0.0k/sec
next io:	-15427384ms

The non-zero fields are misleading, as there is no writeback kthread
alive yet.

Set dc->writeback_running to false in bch_cached_dev_writeback_init(),
as no writeback kthread is created there.

The writeback kthread is created and woken up in
bch_cached_dev_writeback_start(). Set dc->writeback_running to true
before bch_writeback_queue() is called, as the writeback thread checks
whether dc->writeback_running is true before writing back dirty data,
and hangs if it is false.

After the change, we can get the following output for a fresh standalone
backing device:
-----------------------
/sys/block/bcache0/bcache$ cat writeback_running
0
/sys/block/bcache0/bcache# cat writeback_rate_debug
rate:		0.0k/sec
dirty:		0.0k
target:		0.0k
proportional:	0.0k
integral:	0.0k
change:		0.0k/sec
next io:	0ms

v1 -> v2:
  Set dc->writeback_running before bch_writeback_queue() is called.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/writeback.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 08c3a9f9676c..1696b212ec4e 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -777,7 +777,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
 	bch_keybuf_init(&dc->writeback_keys);
 
 	dc->writeback_metadata		= true;
-	dc->writeback_running		= true;
+	dc->writeback_running		= false;
 	dc->writeback_percent		= 10;
 	dc->writeback_delay		= 30;
 	atomic_long_set(&dc->writeback_rate.rate, 1024);
@@ -805,6 +805,7 @@ int bch_cached_dev_writeback_start(struct cached_dev *dc)
 		cached_dev_put(dc);
 		return PTR_ERR(dc->writeback_thread);
 	}
+	dc->writeback_running = true;
 
 	WARN_ON(test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags));
 	schedule_delayed_work(&dc->writeback_rate_update,
-- 
2.16.4



* [PATCH 06/12] bcache: cannot set writeback_running via sysfs if no writeback kthread created
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (4 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 05/12] bcache: do not mark writeback_running until backing dev attached to cache_set Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 07/12] bcache: introduce force_wake_up_gc() Coly Li
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Shenghui Wang, Coly Li

From: Shenghui Wang <shhuiw@foxmail.com>

"echo 1 > writeback_running" marks writeback_running even if no writeback
kthread created as "d_strtoul(writeback_running)" will simply set dc->
writeback_running without checking the existence of dc->writeback_thread.

Add check for setting writeback_running via sysfs: if no writeback kthread
available, reject setting to 1.
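
With the check in place, a write of 1 to a device that has no
writeback kthread is rejected. An illustrative session (device name
and prompt are examples only):

/sys/block/bcache0/bcache# echo 1 > writeback_running
/sys/block/bcache0/bcache# cat writeback_running
0

and dmesg will contain a line like
"bcache0: failed to run non-existent writeback thread".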

v2 -> v3:
  * Make the message on wrong assignment clearer.
  * Print the name of the bcache device instead of the backing device.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/sysfs.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index d2e5c9892d4d..9d5fe12f0c9c 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -384,8 +384,25 @@ STORE(bch_cached_dev)
 	mutex_lock(&bch_register_lock);
 	size = __cached_dev_store(kobj, attr, buf, size);
 
-	if (attr == &sysfs_writeback_running)
-		bch_writeback_queue(dc);
+	if (attr == &sysfs_writeback_running) {
+		/* dc->writeback_running changed in __cached_dev_store() */
+		if (IS_ERR_OR_NULL(dc->writeback_thread)) {
+			/*
+			 * reject setting it to 1 via sysfs if writeback
+			 * kthread is not created yet.
+			 */
+			if (dc->writeback_running) {
+				dc->writeback_running = false;
+				pr_err("%s: failed to run non-existent writeback thread",
+						dc->disk.disk->disk_name);
+			}
+		} else
+			/*
+			 * writeback kthread will check if dc->writeback_running
+			 * is true or false.
+			 */
+			bch_writeback_queue(dc);
+	}
 
 	if (attr == &sysfs_writeback_percent)
 		if (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
-- 
2.16.4



* [PATCH 07/12] bcache: introduce force_wake_up_gc()
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (5 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 06/12] bcache: cannot set writeback_running via sysfs if no writeback kthread created Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 08/12] bcache: option to automatically run gc thread after writeback accomplished Coly Li
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Coly Li

The garbage collection thread only starts to work when c->sectors_to_gc
is a negative value; otherwise nothing will happen even if the gc
thread is woken up by wake_up_gc().

force_wake_up_gc() sets c->sectors_to_gc to -1 before calling
wake_up_gc(), so the gc thread gets a chance to run, provided no one
else sets c->sectors_to_gc to a positive value before gc_should_run()
checks it.

This routine can be called wherever the gc thread needs to be woken up
and forced to run.
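
For context, the check this targets lives in gc_should_run(); roughly
(paraphrased from btree.c, not part of this patch):

	static bool gc_should_run(struct cache_set *c)
	{
		/* ... other triggers omitted ... */
		if (atomic_read(&c->sectors_to_gc) < 0)
			return true;

		return false;
	}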

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/btree.h | 18 ++++++++++++++++++
 drivers/md/bcache/sysfs.c | 17 ++---------------
 2 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
index a68d6c55783b..d1c72ef64edf 100644
--- a/drivers/md/bcache/btree.h
+++ b/drivers/md/bcache/btree.h
@@ -266,6 +266,24 @@ static inline void wake_up_gc(struct cache_set *c)
 	wake_up(&c->gc_wait);
 }
 
+static inline void force_wake_up_gc(struct cache_set *c)
+{
+	/*
+	 * Garbage collection thread only works when sectors_to_gc < 0,
+	 * calling wake_up_gc() won't start gc thread if sectors_to_gc is
+	 * not a negative value.
+	 * Therefore sectors_to_gc is set to -1 here, before waking up
+	 * gc thread by calling wake_up_gc(). Then gc_should_run() will
+	 * give a chance to permit gc thread to run. "Give a chance" means
+	 * before going into gc_should_run(), there is still possibility
+	 * that c->sectors_to_gc being set to other positive value. So
+	 * this routine won't 100% make sure gc thread will be woken up
+	 * to run.
+	 */
+	atomic_set(&c->sectors_to_gc, -1);
+	wake_up_gc(c);
+}
+
 #define MAP_DONE	0
 #define MAP_CONTINUE	1
 
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 9d5fe12f0c9c..c09748497cdc 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -742,21 +742,8 @@ STORE(__bch_cache_set)
 		bch_cache_accounting_clear(&c->accounting);
 	}
 
-	if (attr == &sysfs_trigger_gc) {
-		/*
-		 * Garbage collection thread only works when sectors_to_gc < 0,
-		 * when users write to sysfs entry trigger_gc, most of time
-		 * they want to forcibly triger gargage collection. Here -1 is
-		 * set to c->sectors_to_gc, to make gc_should_run() give a
-		 * chance to permit gc thread to run. "give a chance" means
-		 * before going into gc_should_run(), there is still chance
-		 * that c->sectors_to_gc being set to other positive value. So
-		 * writing sysfs entry trigger_gc won't always make sure gc
-		 * thread takes effect.
-		 */
-		atomic_set(&c->sectors_to_gc, -1);
-		wake_up_gc(c);
-	}
+	if (attr == &sysfs_trigger_gc)
+		force_wake_up_gc(c);
 
 	if (attr == &sysfs_prune_cache) {
 		struct shrink_control sc;
-- 
2.16.4



* [PATCH 08/12] bcache: option to automatically run gc thread after writeback accomplished
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (6 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 07/12] bcache: introduce force_wake_up_gc() Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 09/12] bcache: add MODULE_DESCRIPTION information Coly Li
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Coly Li

The option gc_after_writeback is disabled by default, because garbage
collection discards clean cached data from the SSD, which lowers the
read hit rate.

Echoing 1 into /sys/fs/bcache/<UUID>/internal/gc_after_writeback
enables this option, which wakes up the gc thread when writeback is
accomplished and all cached data is clean.

This option is helpful for people who care more about write
performance. In a heavy write workload, all cached data becomes clean
only when the writeback thread cleans it all during I/O idle time. In
such a situation a following gc run may help to shrink the bcache
B+tree and discard more clean data, which may be helpful for future
write requests.

If you are not sure whether this is helpful for your own workload,
please leave it disabled, which is the default.
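
The two bit flags added below cooperate roughly like this (a summary
of the code in this patch, not additional behaviour):

	echo 1 > .../internal/gc_after_writeback
	    -> sets BCH_ENABLE_AUTO_GC (bit 0)
	update_writeback_rate(), when c->gc_stats.in_use reaches
	BCH_AUTO_GC_DIRTY_THRESHOLD (50)
	    -> sets BCH_DO_AUTO_GC (bit 1)
	bch_writeback_thread(), once all dirty data is written back and
	both bits are set
	    -> clears BCH_DO_AUTO_GC and calls force_wake_up_gc()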

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h    | 14 ++++++++++++++
 drivers/md/bcache/sysfs.c     |  9 +++++++++
 drivers/md/bcache/writeback.c | 27 +++++++++++++++++++++++++++
 drivers/md/bcache/writeback.h |  2 ++
 4 files changed, 52 insertions(+)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 96d2213f279e..fdf75352e16a 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -626,6 +626,20 @@ struct cache_set {
 	/* Where in the btree gc currently is */
 	struct bkey		gc_done;
 
+	/*
+	 * For automatic garbage collection after writeback completed, this
+	 * variable is used as bit fields,
+	 * - 0000 0001b (BCH_ENABLE_AUTO_GC): enable gc after writeback
+	 * - 0000 0010b (BCH_DO_AUTO_GC):     do gc after writeback
+	 * This is an optimization for write requests following writeback, but
+	 * the read hit rate may drop because clean data on the cache is
+	 * discarded. Unless the user explicitly sets it via sysfs, it won't be
+	 * enabled.
+	 */
+#define BCH_ENABLE_AUTO_GC	1
+#define BCH_DO_AUTO_GC		2
+	uint8_t			gc_after_writeback;
+
 	/*
 	 * The allocation code needs gc_mark in struct bucket to be correct, but
 	 * it's not while a gc is in progress. Protected by bucket_lock.
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index c09748497cdc..621186b4240f 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -128,6 +128,7 @@ rw_attribute(expensive_debug_checks);
 rw_attribute(cache_replacement_policy);
 rw_attribute(btree_shrinker_disabled);
 rw_attribute(copy_gc_enabled);
+rw_attribute(gc_after_writeback);
 rw_attribute(size);
 
 static ssize_t bch_snprint_string_list(char *buf,
@@ -693,6 +694,7 @@ SHOW(__bch_cache_set)
 	sysfs_printf(gc_always_rewrite,		"%i", c->gc_always_rewrite);
 	sysfs_printf(btree_shrinker_disabled,	"%i", c->shrinker_disabled);
 	sysfs_printf(copy_gc_enabled,		"%i", c->copy_gc_enabled);
+	sysfs_printf(gc_after_writeback,	"%i", c->gc_after_writeback);
 	sysfs_printf(io_disable,		"%i",
 		     test_bit(CACHE_SET_IO_DISABLE, &c->flags));
 
@@ -793,6 +795,12 @@ STORE(__bch_cache_set)
 	sysfs_strtoul(gc_always_rewrite,	c->gc_always_rewrite);
 	sysfs_strtoul(btree_shrinker_disabled,	c->shrinker_disabled);
 	sysfs_strtoul(copy_gc_enabled,		c->copy_gc_enabled);
+	/*
+	 * Writing gc_after_writeback here may overwrite an already set
+	 * BCH_DO_AUTO_GC; it doesn't matter, because the flag will be
+	 * set again at the next chance.
+	 */
+	sysfs_strtoul_clamp(gc_after_writeback, c->gc_after_writeback, 0, 1);
 
 	return size;
 }
@@ -873,6 +881,7 @@ static struct attribute *bch_cache_set_internal_files[] = {
 	&sysfs_gc_always_rewrite,
 	&sysfs_btree_shrinker_disabled,
 	&sysfs_copy_gc_enabled,
+	&sysfs_gc_after_writeback,
 	&sysfs_io_disable,
 	NULL
 };
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 1696b212ec4e..73f0efac2b9f 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -17,6 +17,15 @@
 #include <linux/sched/clock.h>
 #include <trace/events/bcache.h>
 
+static void update_gc_after_writeback(struct cache_set *c)
+{
+	if (c->gc_after_writeback != (BCH_ENABLE_AUTO_GC) ||
+	    c->gc_stats.in_use < BCH_AUTO_GC_DIRTY_THRESHOLD)
+		return;
+
+	c->gc_after_writeback |= BCH_DO_AUTO_GC;
+}
+
 /* Rate limiting */
 static uint64_t __calc_target_rate(struct cached_dev *dc)
 {
@@ -191,6 +200,7 @@ static void update_writeback_rate(struct work_struct *work)
 		if (!set_at_max_writeback_rate(c, dc)) {
 			down_read(&dc->writeback_lock);
 			__update_writeback_rate(dc);
+			update_gc_after_writeback(c);
 			up_read(&dc->writeback_lock);
 		}
 	}
@@ -689,6 +699,23 @@ static int bch_writeback_thread(void *arg)
 				up_write(&dc->writeback_lock);
 				break;
 			}
+
+			/*
+			 * When dirty data rate is high (e.g. 50%+), there might
+			 * be heavy buckets fragmentation after writeback
+			 * finished, which hurts following write performance.
+			 * If users really care about write performance they
+			 * may set BCH_ENABLE_AUTO_GC via sysfs, then when
+			 * BCH_DO_AUTO_GC is set, garbage collection thread
+			 * will be woken up here. After moving gc, the shrunk
+			 * btree and discarded free buckets SSD space may be
+			 * helpful for following write requests.
+			 */
+			if (c->gc_after_writeback ==
+			    (BCH_ENABLE_AUTO_GC|BCH_DO_AUTO_GC)) {
+				c->gc_after_writeback &= ~BCH_DO_AUTO_GC;
+				force_wake_up_gc(c);
+			}
 		}
 
 		up_write(&dc->writeback_lock);
diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index d2b9fdbc8994..ce63be98a39d 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -11,6 +11,8 @@
 #define WRITEBACK_RATE_UPDATE_SECS_MAX		60
 #define WRITEBACK_RATE_UPDATE_SECS_DEFAULT	5
 
+#define BCH_AUTO_GC_DIRTY_THRESHOLD	50
+
 /*
  * 14 (16384ths) is chosen here as something that each backing device
  * should be a reasonable fraction of the share, and not to blow up
-- 
2.16.4



* [PATCH 09/12] bcache: add MODULE_DESCRIPTION information
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (7 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 08/12] bcache: option to automatically run gc thread after writeback accomplished Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 10/12] bcache: make cutoff_writeback and cutoff_writeback_sync tunnable Coly Li
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Coly Li

This patch moves MODULE_AUTHOR and MODULE_LICENSE to the end of
super.c, and adds MODULE_DESCRIPTION("Bcache: a Linux block layer
cache").

This is preparation for adding module parameters.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/super.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 5b59d44656c0..61d3b63fa617 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -25,9 +25,6 @@
 #include <linux/reboot.h>
 #include <linux/sysfs.h>
 
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Kent Overstreet <kent.overstreet@gmail.com>");
-
 static const char bcache_magic[] = {
 	0xc6, 0x85, 0x73, 0xf6, 0x4e, 0x1a, 0x45, 0xca,
 	0x82, 0x65, 0xf5, 0x7f, 0x48, 0xba, 0x6d, 0x81
@@ -2469,3 +2466,7 @@ static int __init bcache_init(void)
 
 module_exit(bcache_exit);
 module_init(bcache_init);
+
+MODULE_DESCRIPTION("Bcache: a Linux block layer cache");
+MODULE_AUTHOR("Kent Overstreet <kent.overstreet@gmail.com>");
+MODULE_LICENSE("GPL");
-- 
2.16.4



* [PATCH 10/12] bcache: make cutoff_writeback and cutoff_writeback_sync tunnable
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (8 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 09/12] bcache: add MODULE_DESCRIPTION information Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 11/12] bcache: set writeback_percent in a flexible range Coly Li
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Coly Li

Currently the cutoff writeback and cutoff writeback sync thresholds are
defined by CUTOFF_WRITEBACK (40) and CUTOFF_WRITEBACK_SYNC (70) as
static values. Most of the time they work fine, but when people want to
do research on bcache writeback mode performance tuning, there is no
way to modify the soft and hard cutoff writeback values.

This patch introduces two module parameters, bch_cutoff_writeback_sync
and bch_cutoff_writeback, which permit people to tune the values when
loading bcache.ko. If they are not specified at module load time, the
current values CUTOFF_WRITEBACK_SYNC and CUTOFF_WRITEBACK will be used
as defaults and nothing changes.

When people want to tune these two values:
- cutoff_writeback can be set in the range [1, 70]
- cutoff_writeback_sync can be set in the range [1, 90]
- cutoff_writeback must always be <= cutoff_writeback_sync

The default values are strongly recommended for most users and most
workloads. Anyway, if people want to take their own risk and do
research on new writeback cutoff tuning for their own workload, now
they can do it.
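
A usage sketch (the values here are only an example, assuming bcache
is built as a module):

	modprobe bcache bch_cutoff_writeback=30 bch_cutoff_writeback_sync=80

The effective values can then be read back from the read-only sysfs
files added by this patch:

	/sys/fs/bcache/<UUID>/internal/cutoff_writeback
	/sys/fs/bcache/<UUID>/internal/cutoff_writeback_sync

Out-of-range values are fixed up by check_module_parameters() with a
warning in the kernel log.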

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/super.c     | 40 ++++++++++++++++++++++++++++++++++++++++
 drivers/md/bcache/sysfs.c     |  7 +++++++
 drivers/md/bcache/writeback.h | 10 ++++++++--
 3 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 61d3b63fa617..4dee119c3664 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -25,6 +25,9 @@
 #include <linux/reboot.h>
 #include <linux/sysfs.h>
 
+unsigned int bch_cutoff_writeback;
+unsigned int bch_cutoff_writeback_sync;
+
 static const char bcache_magic[] = {
 	0xc6, 0x85, 0x73, 0xf6, 0x4e, 0x1a, 0x45, 0xca,
 	0x82, 0x65, 0xf5, 0x7f, 0x48, 0xba, 0x6d, 0x81
@@ -2420,6 +2423,32 @@ static void bcache_exit(void)
 	mutex_destroy(&bch_register_lock);
 }
 
+/* Check and fixup module parameters */
+static void check_module_parameters(void)
+{
+	if (bch_cutoff_writeback_sync == 0)
+		bch_cutoff_writeback_sync = CUTOFF_WRITEBACK_SYNC;
+	else if (bch_cutoff_writeback_sync > CUTOFF_WRITEBACK_SYNC_MAX) {
+		pr_warn("set bch_cutoff_writeback_sync (%u) to max value %u",
+			bch_cutoff_writeback_sync, CUTOFF_WRITEBACK_SYNC_MAX);
+		bch_cutoff_writeback_sync = CUTOFF_WRITEBACK_SYNC_MAX;
+	}
+
+	if (bch_cutoff_writeback == 0)
+		bch_cutoff_writeback = CUTOFF_WRITEBACK;
+	else if (bch_cutoff_writeback > CUTOFF_WRITEBACK_MAX) {
+		pr_warn("set bch_cutoff_writeback (%u) to max value %u",
+			bch_cutoff_writeback, CUTOFF_WRITEBACK_MAX);
+		bch_cutoff_writeback = CUTOFF_WRITEBACK_MAX;
+	}
+
+	if (bch_cutoff_writeback > bch_cutoff_writeback_sync) {
+		pr_warn("set bch_cutoff_writeback (%u) to %u",
+			bch_cutoff_writeback, bch_cutoff_writeback_sync);
+		bch_cutoff_writeback = bch_cutoff_writeback_sync;
+	}
+}
+
 static int __init bcache_init(void)
 {
 	static const struct attribute *files[] = {
@@ -2428,6 +2457,8 @@ static int __init bcache_init(void)
 		NULL
 	};
 
+	check_module_parameters();
+
 	mutex_init(&bch_register_lock);
 	init_waitqueue_head(&unregister_wait);
 	register_reboot_notifier(&reboot);
@@ -2464,9 +2495,18 @@ static int __init bcache_init(void)
 	return -ENOMEM;
 }
 
+/*
+ * Module hooks
+ */
 module_exit(bcache_exit);
 module_init(bcache_init);
 
+module_param(bch_cutoff_writeback, uint, 0);
+MODULE_PARM_DESC(bch_cutoff_writeback, "threshold to cutoff writeback");
+
+module_param(bch_cutoff_writeback_sync, uint, 0);
+MODULE_PARM_DESC(bch_cutoff_writeback_sync, "hard threshold to cutoff writeback");
+
 MODULE_DESCRIPTION("Bcache: a Linux block layer cache");
 MODULE_AUTHOR("Kent Overstreet <kent.overstreet@gmail.com>");
 MODULE_LICENSE("GPL");
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 621186b4240f..482b128b3e9d 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -88,6 +88,8 @@ read_attribute(writeback_keys_done);
 read_attribute(writeback_keys_failed);
 read_attribute(io_errors);
 read_attribute(congested);
+read_attribute(cutoff_writeback);
+read_attribute(cutoff_writeback_sync);
 rw_attribute(congested_read_threshold_us);
 rw_attribute(congested_write_threshold_us);
 
@@ -686,6 +688,9 @@ SHOW(__bch_cache_set)
 	sysfs_print(congested_write_threshold_us,
 		    c->congested_write_threshold_us);
 
+	sysfs_print(cutoff_writeback, bch_cutoff_writeback);
+	sysfs_print(cutoff_writeback_sync, bch_cutoff_writeback_sync);
+
 	sysfs_print(active_journal_entries,	fifo_used(&c->journal.pin));
 	sysfs_printf(verify,			"%i", c->verify);
 	sysfs_printf(key_merging_disabled,	"%i", c->key_merging_disabled);
@@ -883,6 +888,8 @@ static struct attribute *bch_cache_set_internal_files[] = {
 	&sysfs_copy_gc_enabled,
 	&sysfs_gc_after_writeback,
 	&sysfs_io_disable,
+	&sysfs_cutoff_writeback,
+	&sysfs_cutoff_writeback_sync,
 	NULL
 };
 KTYPE(bch_cache_set_internal);
diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index ce63be98a39d..6a743d3bb338 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -5,6 +5,9 @@
 #define CUTOFF_WRITEBACK	40
 #define CUTOFF_WRITEBACK_SYNC	70
 
+#define CUTOFF_WRITEBACK_MAX		70
+#define CUTOFF_WRITEBACK_SYNC_MAX	90
+
 #define MAX_WRITEBACKS_IN_PASS  5
 #define MAX_WRITESIZE_IN_PASS   5000	/* *512b */
 
@@ -55,6 +58,9 @@ static inline bool bcache_dev_stripe_dirty(struct cached_dev *dc,
 	}
 }
 
+extern unsigned int bch_cutoff_writeback;
+extern unsigned int bch_cutoff_writeback_sync;
+
 static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
 				    unsigned int cache_mode, bool would_skip)
 {
@@ -62,7 +68,7 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
 
 	if (cache_mode != CACHE_MODE_WRITEBACK ||
 	    test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) ||
-	    in_use > CUTOFF_WRITEBACK_SYNC)
+	    in_use > bch_cutoff_writeback_sync)
 		return false;
 
 	if (dc->partial_stripes_expensive &&
@@ -75,7 +81,7 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
 
 	return (op_is_sync(bio->bi_opf) ||
 		bio->bi_opf & (REQ_META|REQ_PRIO) ||
-		in_use <= CUTOFF_WRITEBACK);
+		in_use <= bch_cutoff_writeback);
 }
 
 static inline void bch_writeback_queue(struct cached_dev *dc)
-- 
2.16.4



* [PATCH 11/12] bcache: set writeback_percent in a flexible range
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (9 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 10/12] bcache: make cutoff_writeback and cutoff_writeback_sync tunnable Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 14:53 ` [PATCH 12/12] bcache: print number of keys in trace_bcache_journal_write Coly Li
  2018-12-13 15:16 ` [PATCH 00/12] bcache patches for Linux v4.21 Jens Axboe
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Coly Li

Because CUTOFF_WRITEBACK is defined as 40, before the change to
dynamic cutoff writeback values writeback_percent was limited to
[0, CUTOFF_WRITEBACK]. Any value larger than CUTOFF_WRITEBACK was
clamped to 40.

Now the cutoff writeback limit is a dynamic value,
bch_cutoff_writeback, so the range of writeback_percent can be the
more flexible [0, bch_cutoff_writeback]. The flexibility is that it
can be expanded to a larger or smaller range than [0, 40], depending
on how bch_cutoff_writeback is specified.

The default value is still strongly recommended for most users and
most workloads. But people who want to do research on bcache writeback
performance tuning now have a chance to specify a more flexible
writeback_percent in the range [0, 70].
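
For example (paths and numbers are illustrative only):

	# with the default bch_cutoff_writeback = 40
	echo 60 > /sys/block/bcache0/bcache/writeback_percent   -> clamped to 40
	# after "modprobe bcache bch_cutoff_writeback=70"
	echo 60 > /sys/block/bcache0/bcache/writeback_percent   -> accepted as 60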

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/sysfs.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 482b128b3e9d..557a8a3270a1 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -267,7 +267,8 @@ STORE(__cached_dev)
 	d_strtoul(writeback_running);
 	d_strtoul(writeback_delay);
 
-	sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent, 0, 40);
+	sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent,
+			    0, bch_cutoff_writeback);
 
 	if (attr == &sysfs_writeback_rate) {
 		ssize_t ret;
-- 
2.16.4



* [PATCH 12/12] bcache: print number of keys in trace_bcache_journal_write
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (10 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 11/12] bcache: set writeback_percent in a flexible range Coly Li
@ 2018-12-13 14:53 ` Coly Li
  2018-12-13 15:16 ` [PATCH 00/12] bcache patches for Linux v4.21 Jens Axboe
  12 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-13 14:53 UTC (permalink / raw)
  To: linux-bcache, axboe; +Cc: linux-block, Guoju Fang, Coly Li

From: Guoju Fang <fangguoju@gmail.com>

Sometimes journal flushes may be very frequent, so it's useful to dump
the number of keys every time the journal is written.
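
With this change a trace entry will look roughly like the following
(the device numbers, rwbs flags, sectors, and key count are invented
for illustration):

	bcache_journal_write: 8,16  WS 2048 + 8 keys 3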

Signed-off-by: Guoju Fang <fangguoju@gmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/journal.c   |  2 +-
 include/trace/events/bcache.h | 27 ++++++++++++++++++++++++---
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
index 522c7426f3a0..b2fd412715b1 100644
--- a/drivers/md/bcache/journal.c
+++ b/drivers/md/bcache/journal.c
@@ -663,7 +663,7 @@ static void journal_write_unlocked(struct closure *cl)
 				 REQ_SYNC|REQ_META|REQ_PREFLUSH|REQ_FUA);
 		bch_bio_map(bio, w->data);
 
-		trace_bcache_journal_write(bio);
+		trace_bcache_journal_write(bio, w->data->keys);
 		bio_list_add(&list, bio);
 
 		SET_PTR_OFFSET(k, i, PTR_OFFSET(k, i) + sectors);
diff --git a/include/trace/events/bcache.h b/include/trace/events/bcache.h
index 2cbd6e42ad83..e4526f85c19d 100644
--- a/include/trace/events/bcache.h
+++ b/include/trace/events/bcache.h
@@ -221,9 +221,30 @@ DEFINE_EVENT(cache_set, bcache_journal_entry_full,
 	TP_ARGS(c)
 );
 
-DEFINE_EVENT(bcache_bio, bcache_journal_write,
-	TP_PROTO(struct bio *bio),
-	TP_ARGS(bio)
+TRACE_EVENT(bcache_journal_write,
+	TP_PROTO(struct bio *bio, u32 keys),
+	TP_ARGS(bio, keys),
+
+	TP_STRUCT__entry(
+		__field(dev_t,		dev			)
+		__field(sector_t,	sector			)
+		__field(unsigned int,	nr_sector		)
+		__array(char,		rwbs,	6		)
+		__field(u32,		nr_keys			)
+	),
+
+	TP_fast_assign(
+		__entry->dev		= bio_dev(bio);
+		__entry->sector		= bio->bi_iter.bi_sector;
+		__entry->nr_sector	= bio->bi_iter.bi_size >> 9;
+		__entry->nr_keys	= keys;
+		blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size);
+	),
+
+	TP_printk("%d,%d  %s %llu + %u keys %u",
+		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->rwbs,
+		  (unsigned long long)__entry->sector, __entry->nr_sector,
+		  __entry->nr_keys)
 );
 
 /* Btree */
-- 
2.16.4



* Re: [PATCH 00/12] bcache patches for Linux v4.21
  2018-12-13 14:53 [PATCH 00/12] bcache patches for Linux v4.21 Coly Li
                   ` (11 preceding siblings ...)
  2018-12-13 14:53 ` [PATCH 12/12] bcache: print number of keys in trace_bcache_journal_write Coly Li
@ 2018-12-13 15:16 ` Jens Axboe
  2018-12-14  8:25   ` Coly Li
  12 siblings, 1 reply; 15+ messages in thread
From: Jens Axboe @ 2018-12-13 15:16 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 12/13/18 7:53 AM, Coly Li wrote:
> Hi Jens,
> 
> Here are the patches for Linux v4.21.
> 
> The patches from me are a set of disabled-by-default optimizations
> for writeback cache mode, which are used by some users I know. Guoju
> Fang contributes a patch to add the number of keys in
> trace_bcache_journal_write(). And there are 6 patches from Shenghui
> Wang, which remove an unnecessary NULL check for
> debugfs_remove_recursive() and debugfs_remove(), add useful code
> comments for bcache, and improve the sysfs information display for
> bcache writeback rate parameters.
> 
> We don't have any important or big changes in this run. Please take
> these patches.

Applied, thanks.

Note that I reformatted some of the commit messages and patch headers.
Keep them within 72 chars, then everything is much more readable
in git log.

-- 
Jens Axboe



* Re: [PATCH 00/12] bcache patches for Linux v4.21
  2018-12-13 15:16 ` [PATCH 00/12] bcache patches for Linux v4.21 Jens Axboe
@ 2018-12-14  8:25   ` Coly Li
  0 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-12-14  8:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-bcache, linux-block

On 12/13/18 11:16 PM, Jens Axboe wrote:
> On 12/13/18 7:53 AM, Coly Li wrote:
>> Hi Jens,
>>
>> Here are the patches for Linux v4.21.
>>
>> The patches from me are a set of disabled-by-default optimizations
>> for writeback cache mode, which are used by some users I know. Guoju
>> Fang contributes a patch to add the number of keys in
>> trace_bcache_journal_write(). And there are 6 patches from Shenghui
>> Wang, which remove an unnecessary NULL check for
>> debugfs_remove_recursive() and debugfs_remove(), add useful code
>> comments for bcache, and improve the sysfs information display for
>> bcache writeback rate parameters.
>>
>> We don't have any important or big changes in this run. Please take
>> these patches.
> 
> Applied, thanks.
> 
> Note that I reformated some of the commit messages and patch headers.
> Keep them within 72 chars, then everything is much more readable
> in git log.

Hi Jens,

Sure, I will pay more attention to the 72 char limit, thanks for your
reminder.

Coly Li

