* [PATCH 0/9] Pending bcache patches for review
@ 2018-07-26 14:22 Coly Li
  2018-07-26 14:22 ` [PATCH 1/9] bcache: do not check return value of debugfs_create_dir() Coly Li
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

Hi folks,

I am asking for help reviewing these patches; some of them have been posted
for months. They have been OK in my testing for a while, but it would still
be great to have more reviewers before these patches go into v4.19.

Thanks in advance. 

Coly Li
---
Coly Li (9):
  bcache: do not check return value of debugfs_create_dir()
  bcache: display rate debug parameters to 0 when writeback is not
    running
  bcache: avoid unncessary cache prefetch bch_btree_node_get()
  bcache: add a comment in register_bdev()
  bcache: fix mistaken code comments in struct cache
  bcache: fix mistaken comments in bch_keylist_realloc()
  bcache: add code comments for bset.c
  bcache: initiate bcache_debug to NULL
  bcache: set max writeback rate when I/O request is idle

 drivers/md/bcache/bcache.h    | 16 +++---
 drivers/md/bcache/bset.c      | 63 ++++++++++++++++++++++++
 drivers/md/bcache/btree.c     | 14 +++---
 drivers/md/bcache/closure.c   | 13 +++--
 drivers/md/bcache/closure.h   |  4 +-
 drivers/md/bcache/debug.c     | 13 ++---
 drivers/md/bcache/request.c   | 56 ++++++++++++++++++++-
 drivers/md/bcache/super.c     |  9 +++-
 drivers/md/bcache/sysfs.c     | 37 +++++++++-----
 drivers/md/bcache/util.c      |  2 +-
 drivers/md/bcache/util.h      |  2 +-
 drivers/md/bcache/writeback.c | 91 +++++++++++++++++++++++------------
 12 files changed, 244 insertions(+), 76 deletions(-)

-- 
2.17.1

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/9] bcache: do not check return value of debugfs_create_dir()
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-26 14:22 ` [PATCH 2/9] bcache: display rate debug parameters to 0 when writeback is not running Coly Li
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block, Kai Krakow, Kent Overstreet

Greg KH suggests that normal code should not care about debugfs. Therefore,
whether debugfs_create_dir() succeeds or fails, it is unnecessary to check
its return value.

Two functions call debugfs_create_dir() and check its return value:
bch_debug_init() and closure_debug_init(). This patch changes both of them
from int to void type and ignores the return value of debugfs_create_dir().

This patch does not fix an exact bug, it just makes things work as they
should.
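
For reference, a minimal, hypothetical sketch of the pattern being adopted
(an illustrative module, not the actual bcache code): the debugfs entries
are created and the returned dentries are simply ignored, because a missing
or broken debugfs must never fail module initialization.

#include <linux/debugfs.h>
#include <linux/module.h>

/* Hypothetical example module; names are made up for illustration. */
static struct dentry *example_dir;
static u32 example_counter;

static int __init example_init(void)
{
	/*
	 * Return values are deliberately not checked: if debugfs is not
	 * available these calls degrade gracefully, and the module has to
	 * keep working anyway.
	 */
	example_dir = debugfs_create_dir("example", NULL);
	debugfs_create_u32("counter", 0444, example_dir, &example_counter);
	return 0;
}

static void __exit example_exit(void)
{
	/* debugfs_remove_recursive() accepts NULL and error pointers. */
	debugfs_remove_recursive(example_dir);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");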

Signed-off-by: Coly Li <colyli@suse.de>
Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kai Krakow <kai@kaishome.de>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
---
 drivers/md/bcache/bcache.h  |  2 +-
 drivers/md/bcache/closure.c | 13 +++++++++----
 drivers/md/bcache/closure.h |  4 ++--
 drivers/md/bcache/debug.c   | 11 ++++++-----
 drivers/md/bcache/super.c   |  4 +++-
 5 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 872ef4d67711..a44bd427e5ba 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -1001,7 +1001,7 @@ void bch_open_buckets_free(struct cache_set *);
 int bch_cache_allocator_start(struct cache *ca);
 
 void bch_debug_exit(void);
-int bch_debug_init(struct kobject *);
+void bch_debug_init(struct kobject *);
 void bch_request_exit(void);
 int bch_request_init(void);
 
diff --git a/drivers/md/bcache/closure.c b/drivers/md/bcache/closure.c
index 0e14969182c6..618253683d40 100644
--- a/drivers/md/bcache/closure.c
+++ b/drivers/md/bcache/closure.c
@@ -199,11 +199,16 @@ static const struct file_operations debug_ops = {
 	.release	= single_release
 };
 
-int __init closure_debug_init(void)
+void  __init closure_debug_init(void)
 {
-	closure_debug = debugfs_create_file("closures",
-				0400, bcache_debug, NULL, &debug_ops);
-	return IS_ERR_OR_NULL(closure_debug);
+	if (!IS_ERR_OR_NULL(bcache_debug))
+		/*
+		 * It is unnecessary to check the return value of
+		 * debugfs_create_file(); we should not care
+		 * about it.
+		 */
+		closure_debug = debugfs_create_file(
+			"closures", 0400, bcache_debug, NULL, &debug_ops);
 }
 #endif
 
diff --git a/drivers/md/bcache/closure.h b/drivers/md/bcache/closure.h
index 71427eb5fdae..7c2c5bc7c88b 100644
--- a/drivers/md/bcache/closure.h
+++ b/drivers/md/bcache/closure.h
@@ -186,13 +186,13 @@ static inline void closure_sync(struct closure *cl)
 
 #ifdef CONFIG_BCACHE_CLOSURES_DEBUG
 
-int closure_debug_init(void);
+void closure_debug_init(void);
 void closure_debug_create(struct closure *cl);
 void closure_debug_destroy(struct closure *cl);
 
 #else
 
-static inline int closure_debug_init(void) { return 0; }
+static inline void closure_debug_init(void) {}
 static inline void closure_debug_create(struct closure *cl) {}
 static inline void closure_debug_destroy(struct closure *cl) {}
 
diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index d030ce3025a6..57f8f5aeee55 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -248,11 +248,12 @@ void bch_debug_exit(void)
 		debugfs_remove_recursive(bcache_debug);
 }
 
-int __init bch_debug_init(struct kobject *kobj)
+void __init bch_debug_init(struct kobject *kobj)
 {
-	if (!IS_ENABLED(CONFIG_DEBUG_FS))
-		return 0;
-
+	/*
+	 * It is unnecessary to check the return value of
+	 * debugfs_create_dir(); we should not care
+	 * about it.
+	 */
 	bcache_debug = debugfs_create_dir("bcache", NULL);
-	return IS_ERR_OR_NULL(bcache_debug);
 }
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index e0a92104ca23..c7ffa6ef3f82 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2345,10 +2345,12 @@ static int __init bcache_init(void)
 		goto err;
 
 	if (bch_request_init() ||
-	    bch_debug_init(bcache_kobj) || closure_debug_init() ||
 	    sysfs_create_files(bcache_kobj, files))
 		goto err;
 
+	bch_debug_init(bcache_kobj);
+	closure_debug_init();
+
 	return 0;
 err:
 	bcache_exit();
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 2/9] bcache: display rate debug parameters to 0 when writeback is not running
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
  2018-07-26 14:22 ` [PATCH 1/9] bcache: do not check return value of debugfs_create_dir() Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-26 14:22 ` [PATCH 3/9] bcache: avoid unncessary cache prefetch bch_btree_node_get() Coly Li
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

When writeback is not running, the writeback rate should be 0; any other
value is misleading. The following dynamic writeback rate debug parameters
should be 0 too,
	rate, proportional, integral, change
otherwise they are misleading when writeback is not running.
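
In other words, every rate-derived field is masked by the writeback_running
flag at display time. A tiny, hypothetical sketch of that masking in plain C
(not the actual SHOW() macro code; field names are made up):

#include <stdio.h>

struct wb_stats {
	int  writeback_running;
	long rate;		/* sectors per second */
	long proportional;
	long integral;
	long change;
};

/* Print the debug view: rate-derived values read as 0 while stopped. */
static void print_wb_debug(const struct wb_stats *s)
{
	int wb = s->writeback_running;

	printf("rate:\t\t%ld\n",	wb ? s->rate << 9 : 0);
	printf("proportional:\t%ld\n",	wb ? s->proportional << 9 : 0);
	printf("integral:\t%ld\n",	wb ? s->integral << 9 : 0);
	printf("change:\t\t%ld\n",	wb ? s->change << 9 : 0);
}

int main(void)
{
	struct wb_stats idle = { .writeback_running = 0, .rate = 1024 };

	print_wb_debug(&idle);	/* prints all zeros: writeback is stopped */
	return 0;
}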

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/sysfs.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 225b15aa0340..3e9d3459a224 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -149,6 +149,7 @@ SHOW(__bch_cached_dev)
 	struct cached_dev *dc = container_of(kobj, struct cached_dev,
 					     disk.kobj);
 	const char *states[] = { "no cache", "clean", "dirty", "inconsistent" };
+	int wb = dc->writeback_running;
 
 #define var(stat)		(dc->stat)
 
@@ -170,7 +171,7 @@ SHOW(__bch_cached_dev)
 	var_printf(writeback_running,	"%i");
 	var_print(writeback_delay);
 	var_print(writeback_percent);
-	sysfs_hprint(writeback_rate,	dc->writeback_rate.rate << 9);
+	sysfs_hprint(writeback_rate,	wb ? dc->writeback_rate.rate << 9 : 0);
 	sysfs_hprint(io_errors,		atomic_read(&dc->io_errors));
 	sysfs_printf(io_error_limit,	"%i", dc->error_limit);
 	sysfs_printf(io_disable,	"%i", dc->io_disable);
@@ -188,15 +189,20 @@ SHOW(__bch_cached_dev)
 		char change[20];
 		s64 next_io;
 
-		bch_hprint(rate,	dc->writeback_rate.rate << 9);
-		bch_hprint(dirty,	bcache_dev_sectors_dirty(&dc->disk) << 9);
-		bch_hprint(target,	dc->writeback_rate_target << 9);
-		bch_hprint(proportional,dc->writeback_rate_proportional << 9);
-		bch_hprint(integral,	dc->writeback_rate_integral_scaled << 9);
-		bch_hprint(change,	dc->writeback_rate_change << 9);
-
-		next_io = div64_s64(dc->writeback_rate.next - local_clock(),
-				    NSEC_PER_MSEC);
+		/*
+		 * Except for dirty and target, other values should
+		 * be 0 if writeback is not running.
+		 */
+		bch_hprint(rate, wb ? dc->writeback_rate.rate << 9 : 0);
+		bch_hprint(dirty, bcache_dev_sectors_dirty(&dc->disk) << 9);
+		bch_hprint(target, dc->writeback_rate_target << 9);
+		bch_hprint(proportional,
+			   wb ? dc->writeback_rate_proportional << 9 : 0);
+		bch_hprint(integral,
+			   wb ? dc->writeback_rate_integral_scaled << 9 : 0);
+		bch_hprint(change, wb ? dc->writeback_rate_change << 9 : 0);
+		next_io = wb ? div64_s64(dc->writeback_rate.next-local_clock(),
+					 NSEC_PER_MSEC) : 0;
 
 		return sprintf(buf,
 			       "rate:\t\t%s/sec\n"
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 3/9] bcache: avoid unncessary cache prefetch bch_btree_node_get()
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
  2018-07-26 14:22 ` [PATCH 1/9] bcache: do not check return value of debugfs_create_dir() Coly Li
  2018-07-26 14:22 ` [PATCH 2/9] bcache: display rate debug parameters to 0 when writeback is not running Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-26 14:22 ` [PATCH 4/9] bcache: add a comment in register_bdev() Coly Li
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

In bch_btree_node_get() the read-in btree node will be partially
prefetched into L1 cache for the following bset iteration (if there is
one). But if the btree node read failed, the prefetch operations will
waste L1 cache space. This patch checks the result of the read operation
and only does the cache prefetch when the read I/O succeeded.
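
The ordering matters because prefetching only pays off for data that will
actually be read. A stand-alone sketch of the same idea, using GCC's
__builtin_prefetch() in place of the kernel's prefetch() helper (the node
and set types below are made up for illustration):

struct example_set  { const void *data; };

struct example_node {
	int io_error;			/* set when the read I/O failed */
	unsigned nsets;
	struct example_set set[4];
};

/* Return NULL on a failed read; only warm the cache for valid nodes. */
struct example_node *get_node(struct example_node *b)
{
	unsigned i;

	if (b->io_error)
		return NULL;		/* never prefetch garbage */

	for (i = 0; i <= b->nsets; i++)
		__builtin_prefetch(b->set[i].data);

	return b;
}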

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/btree.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 475008fbbaab..c19f7716df88 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1011,6 +1011,13 @@ struct btree *bch_btree_node_get(struct cache_set *c, struct btree_op *op,
 		BUG_ON(b->level != level);
 	}
 
+	if (btree_node_io_error(b)) {
+		rw_unlock(write, b);
+		return ERR_PTR(-EIO);
+	}
+
+	BUG_ON(!b->written);
+
 	b->parent = parent;
 	b->accessed = 1;
 
@@ -1022,13 +1029,6 @@ struct btree *bch_btree_node_get(struct cache_set *c, struct btree_op *op,
 	for (; i <= b->keys.nsets; i++)
 		prefetch(b->keys.set[i].data);
 
-	if (btree_node_io_error(b)) {
-		rw_unlock(write, b);
-		return ERR_PTR(-EIO);
-	}
-
-	BUG_ON(!b->written);
-
 	return b;
 }
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 4/9] bcache: add a comment in register_bdev()
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
                   ` (2 preceding siblings ...)
  2018-07-26 14:22 ` [PATCH 3/9] bcache: avoid unncessary cache prefetch bch_btree_node_get() Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-26 14:22 ` [PATCH 5/9] bcache: fix mistaken code comments in struct cache Coly Li
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

---
 drivers/md/bcache/super.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index c7ffa6ef3f82..f517d7d1fa10 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1291,6 +1291,7 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
 	pr_info("registered backing device %s", dc->backing_dev_name);
 
 	list_add(&dc->list, &uncached_devices);
+	/* attach to a matched cache set if it exists */
 	list_for_each_entry(c, &bch_cache_sets, list)
 		bch_cached_dev_attach(dc, c, NULL);
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 5/9] bcache: fix mistaken code comments in struct cache
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
                   ` (3 preceding siblings ...)
  2018-07-26 14:22 ` [PATCH 4/9] bcache: add a comment in register_bdev() Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-26 14:22 ` [PATCH 6/9] bcache: fix mistaken comments in bch_keylist_realloc() Coly Li
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

---
 drivers/md/bcache/bcache.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index a44bd427e5ba..5f7082aab1b0 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -423,8 +423,8 @@ struct cache {
 	/*
 	 * When allocating new buckets, prio_write() gets first dibs - since we
 	 * may not be allocate at all without writing priorities and gens.
-	 * prio_buckets[] contains the last buckets we wrote priorities to (so
-	 * gc can mark them as metadata), prio_next[] contains the buckets
+	 * prio_last_buckets[] contains the last buckets we wrote priorities to (so
+	 * gc can mark them as metadata), prio_buckets[] contains the buckets
 	 * allocated for the next prio write.
 	 */
 	uint64_t		*prio_buckets;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 6/9] bcache: fix mistaken comments in bch_keylist_realloc()
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
                   ` (4 preceding siblings ...)
  2018-07-26 14:22 ` [PATCH 5/9] bcache: fix mistaken code comments in struct cache Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-26 14:22 ` [PATCH 7/9] bcache: add code comments for bset.c Coly Li
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

---
 drivers/md/bcache/request.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 8eece9ef9f46..91206f329971 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -107,7 +107,7 @@ static int bch_keylist_realloc(struct keylist *l, unsigned u64s,
 	/*
 	 * The journalling code doesn't handle the case where the keys to insert
 	 * is bigger than an empty write: If we just return -ENOMEM here,
-	 * bio_insert() and bio_invalidate() will insert the keys created so far
+	 * bch_data_insert_keys() will insert the keys created so far
 	 * and finish the rest when the keylist is empty.
 	 */
 	if (newsize * sizeof(uint64_t) > block_bytes(c) - sizeof(struct jset))
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 7/9] bcache: add code comments for bset.c
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
                   ` (5 preceding siblings ...)
  2018-07-26 14:22 ` [PATCH 6/9] bcache: fix mistaken comments in bch_keylist_realloc() Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-26 14:22 ` [PATCH 8/9] bcache: initiate bcache_debug to NULL Coly Li
  2018-07-26 14:22 ` [PATCH 9/9] bcache: set max writeback rate when I/O request is idle Coly Li
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

This patch tries to add code comments in bset.c, to make some tricky
code and design decisions more comprehensible. Most of the information in
this patch comes from a discussion between Kent and me, in which he
offered very informative details. If there is any mistake in the ideas
behind the code, it is no doubt due to my misrepresentation.
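
One detail worth calling out in advance: the comments added below document a
branchless "wrap to 0" mask used in bset_search_tree(). A small stand-alone
sketch of the same idiom in plain C, assuming (as the kernel does) 32-bit
int and arithmetic right shift of negative values:

#include <assert.h>

/*
 * Equivalent to: if (p >= size) p = 0; -- without a branch.
 * (p - size) is negative exactly when p < size; casting to int and
 * shifting right by 31 then yields an all-ones mask, otherwise 0.
 */
static unsigned clamp_to_zero(unsigned p, unsigned size)
{
	p &= ((int) (p - size)) >> 31;
	return p;
}

int main(void)
{
	assert(clamp_to_zero(5, 16) == 5);	/* in range: unchanged */
	assert(clamp_to_zero(16, 16) == 0);	/* out of range: reset */
	assert(clamp_to_zero(100, 16) == 0);
	return 0;
}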

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bset.c | 63 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
index f3403b45bc28..596c93b44e9b 100644
--- a/drivers/md/bcache/bset.c
+++ b/drivers/md/bcache/bset.c
@@ -366,6 +366,10 @@ EXPORT_SYMBOL(bch_btree_keys_init);
 
 /* Binary tree stuff for auxiliary search trees */
 
+/*
+ * Return the array index next to j when doing an in-order traversal
+ * of a binary tree which is stored in a linear array.
+ */
 static unsigned inorder_next(unsigned j, unsigned size)
 {
 	if (j * 2 + 1 < size) {
@@ -379,6 +383,10 @@ static unsigned inorder_next(unsigned j, unsigned size)
 	return j;
 }
 
+/*
+ * Return the array index previous to j when doing an in-order traversal
+ * of a binary tree which is stored in a linear array.
+ */
 static unsigned inorder_prev(unsigned j, unsigned size)
 {
 	if (j * 2 < size) {
@@ -421,6 +429,10 @@ static unsigned __to_inorder(unsigned j, unsigned size, unsigned extra)
 	return j;
 }
 
+/*
+ * Return the cacheline index in bset_tree->data, where j is the index
+ * from the linear array which stores the auxiliary binary tree.
+ */
 static unsigned to_inorder(unsigned j, struct bset_tree *t)
 {
 	return __to_inorder(j, t->size, t->extra);
@@ -441,6 +453,10 @@ static unsigned __inorder_to_tree(unsigned j, unsigned size, unsigned extra)
 	return j;
 }
 
+/*
+ * Return an index from the linear array which stores the auxiliary binary
+ * tree; j is the cacheline index of t->data.
+ */
 static unsigned inorder_to_tree(unsigned j, struct bset_tree *t)
 {
 	return __inorder_to_tree(j, t->size, t->extra);
@@ -546,6 +562,20 @@ static inline uint64_t shrd128(uint64_t high, uint64_t low, uint8_t shift)
 	return low;
 }
 
+/*
+ * Calculate mantissa value for struct bkey_float.
+ * If the most significant bit of f->exponent is not set, then
+ *  - f->exponent >> 6 is 0
+ *  - p[0] points to bkey->low
+ *  - p[-1] borrows bits from KEY_INODE() of bkey->high
+ * If the most significant bit of f->exponent is set, then
+ *  - f->exponent >> 6 is 1
+ *  - p[0] points to bits from KEY_INODE() of bkey->high
+ *  - p[-1] points to other bits from KEY_INODE() of
+ *    bkey->high too.
+ * See make_bfloat() to check when the most significant bit of
+ * f->exponent is set or not.
+ */
 static inline unsigned bfloat_mantissa(const struct bkey *k,
 				       struct bkey_float *f)
 {
@@ -570,6 +600,16 @@ static void make_bfloat(struct bset_tree *t, unsigned j)
 	BUG_ON(m < l || m > r);
 	BUG_ON(bkey_next(p) != m);
 
+	/*
+	 * If l and r have different KEY_INODE values (different backing
+	 * devices), f->exponent records how many least significant bits
+	 * differ between the KEY_INODE values, and sets its most
+	 * significant bit to 1 (by +64).
+	 * If l and r have the same KEY_INODE value, f->exponent records
+	 * how many least significant bits of bkey->low differ.
+	 * See bfloat_mantissa() for how the most significant bit of
+	 * f->exponent is used to calculate the bfloat mantissa value.
+	 */
 	if (KEY_INODE(l) != KEY_INODE(r))
 		f->exponent = fls64(KEY_INODE(r) ^ KEY_INODE(l)) + 64;
 	else
@@ -633,6 +673,15 @@ void bch_bset_init_next(struct btree_keys *b, struct bset *i, uint64_t magic)
 }
 EXPORT_SYMBOL(bch_bset_init_next);
 
+/*
+ * Build the auxiliary binary tree 'struct bset_tree *t'; this tree is used
+ * to accelerate bkey search in a btree node (pointed to by bset_tree->data
+ * in memory). After searching in the auxiliary tree by calling
+ * bset_search_tree(), a struct bset_search_iter is returned which indicates
+ * the range [l, r] of bset_tree->data where the searched bkey might be.
+ * A following linear comparison then does the exact search; see
+ * __bch_bset_search() for how the auxiliary tree is used.
+ */
 void bch_bset_build_written_tree(struct btree_keys *b)
 {
 	struct bset_tree *t = bset_tree_last(b);
@@ -898,6 +947,17 @@ static struct bset_search_iter bset_search_tree(struct bset_tree *t,
 	unsigned inorder, j, n = 1;
 
 	do {
+		/*
+		 * A bit trick here.
+		 * If p < t->size, (int)(p - t->size) is negative and its most
+		 * significant bit is set; an arithmetic right shift by 31 then
+		 * yields an all-ones mask, so p is kept unchanged. If
+		 * p >= t->size, the shift yields 0 and p is masked to 0.
+		 * So the following 2 lines are equivalent to
+		 *	if (p >= t->size)
+		 *		p = 0;
+		 * but a branch instruction is avoided.
+		 */
 		unsigned p = n << 4;
 		p &= ((int) (p - t->size)) >> 31;
 
@@ -907,6 +967,9 @@ static struct bset_search_iter bset_search_tree(struct bset_tree *t,
 		f = &t->tree[j];
 
 		/*
+		 * Similar bit trick, use subtract operation to avoid a branch
+		 * instruction.
+		 *
 		 * n = (f->mantissa > bfloat_mantissa())
 		 *	? j * 2
 		 *	: j * 2 + 1;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 8/9] bcache: initiate bcache_debug to NULL
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
                   ` (6 preceding siblings ...)
  2018-07-26 14:22 ` [PATCH 7/9] bcache: add code comments for bset.c Coly Li
@ 2018-07-26 14:22 ` Coly Li
  2018-07-27 17:37   ` Noah Massey
  2018-07-27 19:49   ` Bart Van Assche
  2018-07-26 14:22 ` [PATCH 9/9] bcache: set max writeback rate when I/O request is idle Coly Li
  8 siblings, 2 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block

Global variable bcache_debug is first initialized in bch_debug_init(),
and destroyed in bch_debug_exit(). bch_debug_init() is called in
bcache_init() along with many other functions; if one of the earlier
calls fails, bcache_exit() will be called in the failure path.

The problem is, if bcache_init() fails before bch_debug_init() is called,
then when bch_debug_exit() is called from bcache_exit() to destroy the
global variable bcache_debug, bcache_debug is undefined at that moment,
and the test "if (!IS_ERR_OR_NULL(bcache_debug))" might be buggy.

This patch initializes the global variable bcache_debug to NULL, to make
the failure code path predictable.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index 57f8f5aeee55..24b0eb65ddec 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -17,7 +17,7 @@
 #include <linux/random.h>
 #include <linux/seq_file.h>
 
-struct dentry *bcache_debug;
+struct dentry *bcache_debug = NULL;
 
 #ifdef CONFIG_BCACHE_DEBUG
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 9/9] bcache: set max writeback rate when I/O request is idle
  2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
                   ` (7 preceding siblings ...)
  2018-07-26 14:22 ` [PATCH 8/9] bcache: initiate bcache_debug to NULL Coly Li
@ 2018-07-26 14:22 ` Coly Li
  8 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-26 14:22 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: linux-block, stable, Michael Lyle

Commit b1092c9af9ed ("bcache: allow quick writeback when backing idle")
allows the writeback rate to be faster if there is no I/O request on a
bcache device. It works well if there is only one bcache device attached
to the cache set. If there are many bcache devices attached to a cache
set, it may introduce a performance regression, because the multiple
faster writeback threads of the idle bcache devices will compete for the
btree-level locks with the bcache devices which have I/O requests coming in.

This patch fixes the above issue by only permitting fast writeback when
all bcache devices attached to the cache set are idle. If one of the
bcache devices has a new I/O request coming in, all writeback throughput
is minimized immediately, and the PI controller __update_writeback_rate()
decides the upcoming writeback rate for each bcache device.

Also, when all bcache devices are idle, limiting the writeback rate to a
small number is a waste of throughput, especially when the backing devices
are slower non-rotational devices (e.g. SATA SSD). This patch sets a max
writeback rate for each backing device if the whole cache set is idle. A
faster writeback rate in idle time means new I/Os may have more available
space for dirty data, and people may then observe better write performance.

Please note bcache may change its cache mode at run time, and this patch
still works if the cache mode is switched away from writeback mode and
there is still dirty data in the cache.
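
A stripped-down sketch of the idle-detection scheme described above
(illustrative names only, not the actual bcache symbols): every periodic
rate-update tick increments a shared counter, any front-end I/O resets it,
and writeback is allowed to go full speed only after several quiet ticks
per attached device.

#include <linux/atomic.h>
#include <linux/types.h>

#define IDLE_TICKS_PER_DEV	6	/* ~6 rate-update rounds per device */

/* Hypothetical stand-in for the relevant cache_set fields. */
struct idle_tracker {
	atomic_t idle_counter;
	atomic_t attached_dev_nr;
	atomic_t at_max_writeback_rate;
};

/* Called from each device's periodic writeback-rate update work. */
static bool may_set_max_writeback_rate(struct idle_tracker *c)
{
	/* Not enough quiet ticks yet across all attached devices. */
	if (atomic_inc_return(&c->idle_counter) <
	    atomic_read(&c->attached_dev_nr) * IDLE_TICKS_PER_DEV)
		return false;

	atomic_set(&c->at_max_writeback_rate, 1);
	return true;
}

/* Called on every front-end I/O request to the cache set. */
static void note_frontend_io(struct idle_tracker *c)
{
	atomic_set(&c->idle_counter, 0);
	atomic_set(&c->at_max_writeback_rate, 0);
}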

Fixes: b1092c9af9ed ("bcache: allow quick writeback when backing idle")
Cc: stable@vger.kernel.org #4.16+
Signed-off-by: Coly Li <colyli@suse.de>
Tested-by: Kai Krakow <kai@kaishome.de>
Tested-by: Stefan Priebe <s.priebe@profihost.ag>
Cc: Michael Lyle <mlyle@lyle.org>
---
 drivers/md/bcache/bcache.h    | 10 ++--
 drivers/md/bcache/request.c   | 54 ++++++++++++++++++++-
 drivers/md/bcache/super.c     |  4 ++
 drivers/md/bcache/sysfs.c     | 15 ++++--
 drivers/md/bcache/util.c      |  2 +-
 drivers/md/bcache/util.h      |  2 +-
 drivers/md/bcache/writeback.c | 91 +++++++++++++++++++++++------------
 7 files changed, 134 insertions(+), 44 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 5f7082aab1b0..97489573dedc 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -328,13 +328,6 @@ struct cached_dev {
 	 */
 	atomic_t		has_dirty;
 
-	/*
-	 * Set to zero by things that touch the backing volume-- except
-	 * writeback.  Incremented by writeback.  Used to determine when to
-	 * accelerate idle writeback.
-	 */
-	atomic_t		backing_idle;
-
 	struct bch_ratelimit	writeback_rate;
 	struct delayed_work	writeback_rate_update;
 
@@ -515,6 +508,8 @@ struct cache_set {
 	struct cache_accounting accounting;
 
 	unsigned long		flags;
+	atomic_t		idle_counter;
+	atomic_t		at_max_writeback_rate;
 
 	struct cache_sb		sb;
 
@@ -524,6 +519,7 @@ struct cache_set {
 
 	struct bcache_device	**devices;
 	unsigned		devices_max_used;
+	atomic_t		attached_dev_nr;
 	struct list_head	cached_devs;
 	uint64_t		cached_dev_sectors;
 	atomic_long_t		flash_dev_dirty_sectors;
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 91206f329971..86a977c2a176 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -1105,6 +1105,44 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
 		generic_make_request(bio);
 }
 
+static void quit_max_writeback_rate(struct cache_set *c,
+				    struct cached_dev *this_dc)
+{
+	int i;
+	struct bcache_device *d;
+	struct cached_dev *dc;
+
+	/*
+	 * The mutex bch_register_lock may be contended by other parallel
+	 * requesters, or by attach/detach operations on other backing devices.
+	 * Waiting for the mutex lock may increase I/O request latency for
+	 * seconds or more. To avoid such a situation, if mutex_trylock()
+	 * fails, only the writeback rate of the current cached device is set
+	 * to 1, and __update_writeback_rate() will decide the writeback rate
+	 * of the other cached devices (remember c->idle_counter is 0 already).
+	 */
+	if (mutex_trylock(&bch_register_lock)) {
+		for (i = 0; i < c->devices_max_used; i++) {
+			if (!c->devices[i])
+				continue;
+
+			if (UUID_FLASH_ONLY(&c->uuids[i]))
+				continue;
+
+			d = c->devices[i];
+			dc = container_of(d, struct cached_dev, disk);
+			/*
+			 * Set the writeback rate to the default minimum
+			 * value, then let update_writeback_rate() decide
+			 * the upcoming rate.
+			 */
+			atomic_long_set(&dc->writeback_rate.rate, 1);
+		}
+		mutex_unlock(&bch_register_lock);
+	} else
+		atomic_long_set(&this_dc->writeback_rate.rate, 1);
+}
+
 /* Cached devices - read & write stuff */
 
 static blk_qc_t cached_dev_make_request(struct request_queue *q,
@@ -1122,7 +1160,21 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
 		return BLK_QC_T_NONE;
 	}
 
-	atomic_set(&dc->backing_idle, 0);
+	if (likely(d->c)) {
+		if (atomic_read(&d->c->idle_counter))
+			atomic_set(&d->c->idle_counter, 0);
+		/*
+		 * If at_max_writeback_rate of the cache set is true and a
+		 * new I/O comes in, quit the max writeback rate of all
+		 * cached devices attached to this cache set, and set
+		 * at_max_writeback_rate to false.
+		 */
+		if (unlikely(atomic_read(&d->c->at_max_writeback_rate) == 1)) {
+			atomic_set(&d->c->at_max_writeback_rate, 0);
+			quit_max_writeback_rate(d->c, dc);
+		}
+	}
+
 	generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
 
 	bio_set_dev(bio, dc->bdev);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index f517d7d1fa10..32b95f3b9461 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -696,6 +696,8 @@ static void bcache_device_detach(struct bcache_device *d)
 {
 	lockdep_assert_held(&bch_register_lock);
 
+	atomic_dec(&d->c->attached_dev_nr);
+
 	if (test_bit(BCACHE_DEV_DETACHING, &d->flags)) {
 		struct uuid_entry *u = d->c->uuids + d->id;
 
@@ -1144,6 +1146,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
 
 	bch_cached_dev_run(dc);
 	bcache_device_link(&dc->disk, c, "bdev");
+	atomic_inc(&c->attached_dev_nr);
 
 	/* Allow the writeback thread to proceed */
 	up_write(&dc->writeback_lock);
@@ -1696,6 +1699,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 	c->block_bits		= ilog2(sb->block_size);
 	c->nr_uuids		= bucket_bytes(c) / sizeof(struct uuid_entry);
 	c->devices_max_used	= 0;
+	atomic_set(&c->attached_dev_nr, 0);
 	c->btree_pages		= bucket_pages(c);
 	if (c->btree_pages > BTREE_MAX_PAGES)
 		c->btree_pages = max_t(int, c->btree_pages / 4,
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 3e9d3459a224..6e88142514fb 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -171,7 +171,8 @@ SHOW(__bch_cached_dev)
 	var_printf(writeback_running,	"%i");
 	var_print(writeback_delay);
 	var_print(writeback_percent);
-	sysfs_hprint(writeback_rate,	wb ? dc->writeback_rate.rate << 9 : 0);
+	sysfs_hprint(writeback_rate,
+		     wb ? atomic_long_read(&dc->writeback_rate.rate) << 9 : 0);
 	sysfs_hprint(io_errors,		atomic_read(&dc->io_errors));
 	sysfs_printf(io_error_limit,	"%i", dc->error_limit);
 	sysfs_printf(io_disable,	"%i", dc->io_disable);
@@ -193,7 +194,9 @@ SHOW(__bch_cached_dev)
 		 * Except for dirty and target, other values should
 		 * be 0 if writeback is not running.
 		 */
-		bch_hprint(rate, wb ? dc->writeback_rate.rate << 9 : 0);
+		bch_hprint(rate,
+			   wb ? atomic_long_read(&dc->writeback_rate.rate) << 9
+			      : 0);
 		bch_hprint(dirty, bcache_dev_sectors_dirty(&dc->disk) << 9);
 		bch_hprint(target, dc->writeback_rate_target << 9);
 		bch_hprint(proportional,
@@ -261,8 +264,12 @@ STORE(__cached_dev)
 
 	sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent, 0, 40);
 
-	sysfs_strtoul_clamp(writeback_rate,
-			    dc->writeback_rate.rate, 1, INT_MAX);
+	if (attr == &sysfs_writeback_rate) {
+		int v;
+
+		sysfs_strtoul_clamp(writeback_rate, v, 1, INT_MAX);
+		atomic_long_set(&dc->writeback_rate.rate, v);
+	}
 
 	sysfs_strtoul_clamp(writeback_rate_update_seconds,
 			    dc->writeback_rate_update_seconds,
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index f912c372978c..c6a99dfa1ad9 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -200,7 +200,7 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
 {
 	uint64_t now = local_clock();
 
-	d->next += div_u64(done * NSEC_PER_SEC, d->rate);
+	d->next += div_u64(done * NSEC_PER_SEC, atomic_long_read(&d->rate));
 
 	/* Bound the time.  Don't let us fall further than 2 seconds behind
 	 * (this prevents unnecessary backlog that would make it impossible
diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
index a1579e28049f..5ff055f0a653 100644
--- a/drivers/md/bcache/util.h
+++ b/drivers/md/bcache/util.h
@@ -443,7 +443,7 @@ struct bch_ratelimit {
 	 * Rate at which we want to do work, in units per second
 	 * The units here correspond to the units passed to bch_next_delay()
 	 */
-	uint32_t		rate;
+	atomic_long_t		rate;
 };
 
 static inline void bch_ratelimit_reset(struct bch_ratelimit *d)
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 912e969fedba..481d4cf38ac0 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -104,11 +104,56 @@ static void __update_writeback_rate(struct cached_dev *dc)
 
 	dc->writeback_rate_proportional = proportional_scaled;
 	dc->writeback_rate_integral_scaled = integral_scaled;
-	dc->writeback_rate_change = new_rate - dc->writeback_rate.rate;
-	dc->writeback_rate.rate = new_rate;
+	dc->writeback_rate_change = new_rate -
+			atomic_long_read(&dc->writeback_rate.rate);
+	atomic_long_set(&dc->writeback_rate.rate, new_rate);
 	dc->writeback_rate_target = target;
 }
 
+static bool set_at_max_writeback_rate(struct cache_set *c,
+				       struct cached_dev *dc)
+{
+	/*
+	 * idle_counter is increased every time update_writeback_rate() is
+	 * called. If all backing devices attached to the same cache set have
+	 * identical dc->writeback_rate_update_seconds values, it takes about
+	 * 6 rounds of update_writeback_rate() on each backing device before
+	 * c->at_max_writeback_rate is set to 1, and then the max writeback
+	 * rate is set for each dc->writeback_rate.rate.
+	 * In order to avoid the extra locking cost of counting the exact
+	 * number of dirty cached devices, c->attached_dev_nr is used to
+	 * calculate the idle threshold. It might be bigger if not all cached
+	 * devices are in writeback mode, but it still works well with a
+	 * limited number of extra rounds of update_writeback_rate().
+	 */
+	if (atomic_inc_return(&c->idle_counter) <
+	    atomic_read(&c->attached_dev_nr) * 6)
+		return false;
+
+	if (atomic_read(&c->at_max_writeback_rate) != 1)
+		atomic_set(&c->at_max_writeback_rate, 1);
+
+	atomic_long_set(&dc->writeback_rate.rate, INT_MAX);
+
+	/* keep writeback_rate_target as existing value */
+	dc->writeback_rate_proportional = 0;
+	dc->writeback_rate_integral_scaled = 0;
+	dc->writeback_rate_change = 0;
+
+	/*
+	 * Check c->idle_counter and c->at_max_writeback_rate again in case
+	 * new I/O arrives before set_at_max_writeback_rate() returns.
+	 * If that happens, the writeback rate is set to 1, and its new value
+	 * should be decided via __update_writeback_rate().
+	 */
+	if ((atomic_read(&c->idle_counter) <
+	     atomic_read(&c->attached_dev_nr) * 6) ||
+	    !atomic_read(&c->at_max_writeback_rate))
+		return false;
+
+	return true;
+}
+
 static void update_writeback_rate(struct work_struct *work)
 {
 	struct cached_dev *dc = container_of(to_delayed_work(work),
@@ -136,13 +181,20 @@ static void update_writeback_rate(struct work_struct *work)
 		return;
 	}
 
-	down_read(&dc->writeback_lock);
-
-	if (atomic_read(&dc->has_dirty) &&
-	    dc->writeback_percent)
-		__update_writeback_rate(dc);
+	if (atomic_read(&dc->has_dirty) && dc->writeback_percent) {
+		/*
+		 * If the whole cache set is idle, set_at_max_writeback_rate()
+		 * will set the writeback rate to a max number. Then it is
+		 * unnecessary to update the writeback rate for an idle cache
+		 * set that is already at the maximum writeback rate.
+		 */
+		if (!set_at_max_writeback_rate(c, dc)) {
+			down_read(&dc->writeback_lock);
+			__update_writeback_rate(dc);
+			up_read(&dc->writeback_lock);
+		}
+	}
 
-	up_read(&dc->writeback_lock);
 
 	/*
 	 * CACHE_SET_IO_DISABLE might be set via sysfs interface,
@@ -422,27 +474,6 @@ static void read_dirty(struct cached_dev *dc)
 
 		delay = writeback_delay(dc, size);
 
-		/* If the control system would wait for at least half a
-		 * second, and there's been no reqs hitting the backing disk
-		 * for awhile: use an alternate mode where we have at most
-		 * one contiguous set of writebacks in flight at a time.  If
-		 * someone wants to do IO it will be quick, as it will only
-		 * have to contend with one operation in flight, and we'll
-		 * be round-tripping data to the backing disk as quickly as
-		 * it can accept it.
-		 */
-		if (delay >= HZ / 2) {
-			/* 3 means at least 1.5 seconds, up to 7.5 if we
-			 * have slowed way down.
-			 */
-			if (atomic_inc_return(&dc->backing_idle) >= 3) {
-				/* Wait for current I/Os to finish */
-				closure_sync(&cl);
-				/* And immediately launch a new set. */
-				delay = 0;
-			}
-		}
-
 		while (!kthread_should_stop() &&
 		       !test_bit(CACHE_SET_IO_DISABLE, &dc->disk.c->flags) &&
 		       delay) {
@@ -741,7 +772,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
 	dc->writeback_running		= true;
 	dc->writeback_percent		= 10;
 	dc->writeback_delay		= 30;
-	dc->writeback_rate.rate		= 1024;
+	atomic_long_set(&dc->writeback_rate.rate, 1024);
 	dc->writeback_rate_minimum	= 8;
 
 	dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH 8/9] bcache: initiate bcache_debug to NULL
  2018-07-26 14:22 ` [PATCH 8/9] bcache: initiate bcache_debug to NULL Coly Li
@ 2018-07-27 17:37   ` Noah Massey
  2018-07-27 19:32     ` Noah Massey
  2018-07-27 21:53     ` Vojtech Pavlik
  2018-07-27 19:49   ` Bart Van Assche
  1 sibling, 2 replies; 15+ messages in thread
From: Noah Massey @ 2018-07-27 17:37 UTC (permalink / raw)
  To: Coly Li; +Cc: linux-bcache, linux-block


On Thu, Jul 26, 2018, 10:23 AM Coly Li <colyli@suse.de> wrote:

> Global variable bcache_debug is firstly initialized in bch_debug_init(),
> and destroyed in bch_debug_exit(). bch_debug_init() is called in
> bcache_init() with many other functions, if one of the previous calling
> onces failed, bcache_exit() will be called in the failure path.
>
> The problem is, if bcache_init() fails before bch_debug_init() is called,
> then in bcache_exit() when bch_debug_exit() is called to destroy global
> variable bcache_debug, at this moment bcache_debug is unndefined, then the
> test of "if (!IS_ERR_OR_NULL(bcache_debug))" might be buggy.
>
> This patch initializes global varabile bcache_debug to be NULL, to make
> the failure code path to be predictable.
>
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>

Makes sense

Reviewed-by: Noah Massey <noah.massey@gmail.com>

>  drivers/md/bcache/debug.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
> index 57f8f5aeee55..24b0eb65ddec 100644
> --- a/drivers/md/bcache/debug.c
> +++ b/drivers/md/bcache/debug.c
> @@ -17,7 +17,7 @@
>  #include <linux/random.h>
>  #include <linux/seq_file.h>
>
> -struct dentry *bcache_debug;
> +struct dentry *bcache_debug = NULL;
>
>  #ifdef CONFIG_BCACHE_DEBUG
>
> --
> 2.17.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 8/9] bcache: initiate bcache_debug to NULL
  2018-07-27 17:37   ` Noah Massey
@ 2018-07-27 19:32     ` Noah Massey
  2018-07-28  5:24       ` Coly Li
  2018-07-27 21:53     ` Vojtech Pavlik
  1 sibling, 1 reply; 15+ messages in thread
From: Noah Massey @ 2018-07-27 19:32 UTC (permalink / raw)
  To: Coly Li; +Cc: linux-bcache, linux-block

On Fri, Jul 27, 2018 at 1:37 PM Noah Massey <noah.massey@gmail.com> wrote:
> On Thu, Jul 26, 2018, 10:23 AM Coly Li <colyli@suse.de> wrote:
>>
>> Global variable bcache_debug is firstly initialized in bch_debug_init(),
>> and destroyed in bch_debug_exit(). bch_debug_init() is called in
>> bcache_init() with many other functions, if one of the previous calling
>> onces failed, bcache_exit() will be called in the failure path.
>>
>> The problem is, if bcache_init() fails before bch_debug_init() is called,
>> then in bcache_exit() when bch_debug_exit() is called to destroy global
>> variable bcache_debug, at this moment bcache_debug is unndefined, then the
>> test of "if (!IS_ERR_OR_NULL(bcache_debug))" might be buggy.
>>
>> This patch initializes global varabile bcache_debug to be NULL, to make
>> the failure code path to be predictable.
>>
>> Signed-off-by: Coly Li <colyli@suse.de>
>> ---
>
>
> Makes sense
>
> Reviewed-by: Noah Massey <noah.massey@gmail.com>
>

Wait... aren't static variables already initialized to 0?
So this makes it explicit, but no code change to generated binary, right?

>>  drivers/md/bcache/debug.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
>> index 57f8f5aeee55..24b0eb65ddec 100644
>> --- a/drivers/md/bcache/debug.c
>> +++ b/drivers/md/bcache/debug.c
>> @@ -17,7 +17,7 @@
>>  #include <linux/random.h>
>>  #include <linux/seq_file.h>
>>
>> -struct dentry *bcache_debug;
>> +struct dentry *bcache_debug = NULL;
>>
>>  #ifdef CONFIG_BCACHE_DEBUG
>>
>> --
>> 2.17.1
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 8/9] bcache: initiate bcache_debug to NULL
  2018-07-26 14:22 ` [PATCH 8/9] bcache: initiate bcache_debug to NULL Coly Li
  2018-07-27 17:37   ` Noah Massey
@ 2018-07-27 19:49   ` Bart Van Assche
  1 sibling, 0 replies; 15+ messages in thread
From: Bart Van Assche @ 2018-07-27 19:49 UTC (permalink / raw)
  To: colyli, linux-bcache; +Cc: noah.massey, linux-block

On Thu, 2018-07-26 at 22:22 +0800, Coly Li wrote:
> Global variable bcache_debug is firstly initialized in bch_debug_init(),
> and destroyed in bch_debug_exit(). bch_debug_init() is called in
> bcache_init() with many other functions, if one of the previous calling
> onces failed, bcache_exit() will be called in the failure path.
>
> The problem is, if bcache_init() fails before bch_debug_init() is called,
> then in bcache_exit() when bch_debug_exit() is called to destroy global
> variable bcache_debug, at this moment bcache_debug is unndefined, then the
> test of "if (!IS_ERR_OR_NULL(bcache_debug))" might be buggy.
>
> This patch initializes global varabile bcache_debug to be NULL, to make
> the failure code path to be predictable.
>
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>  drivers/md/bcache/debug.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
> index 57f8f5aeee55..24b0eb65ddec 100644
> --- a/drivers/md/bcache/debug.c
> +++ b/drivers/md/bcache/debug.c
> @@ -17,7 +17,7 @@
>  #include <linux/random.h>
>  #include <linux/seq_file.h>
>
> -struct dentry *bcache_debug;
> +struct dentry *bcache_debug = NULL;
>
>  #ifdef CONFIG_BCACHE_DEBUG

Please verify patches with checkpatch before posting these. Checkpatch is
namely able to detect that this patch is useless:

$ scripts/checkpatch.pl \[PATCH_8_9\]_bcache\:_initiate_bcache_debug_to_NULL.mbox
ERROR: do not initialise globals to NULL
#217: FILE: drivers/md/bcache/debug.c:20:
+struct dentry *bcache_debug = NULL;

Bart.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 8/9] bcache: initiate bcache_debug to NULL
  2018-07-27 17:37   ` Noah Massey
  2018-07-27 19:32     ` Noah Massey
@ 2018-07-27 21:53     ` Vojtech Pavlik
  1 sibling, 0 replies; 15+ messages in thread
From: Vojtech Pavlik @ 2018-07-27 21:53 UTC (permalink / raw)
  To: Noah Massey; +Cc: Coly Li, linux-bcache, linux-block

On Fri, Jul 27, 2018 at 01:37:57PM -0400, Noah Massey wrote:

> On Thu, Jul 26, 2018, 10:23 AM Coly Li <colyli@suse.de> wrote:
> 
> > Global variable bcache_debug is firstly initialized in bch_debug_init(),
> > and destroyed in bch_debug_exit(). bch_debug_init() is called in
> > bcache_init() with many other functions, if one of the previous calling
> > onces failed, bcache_exit() will be called in the failure path.
> >
> > The problem is, if bcache_init() fails before bch_debug_init() is called,
> > then in bcache_exit() when bch_debug_exit() is called to destroy global
> > variable bcache_debug, at this moment bcache_debug is unndefined, then the
> > test of "if (!IS_ERR_OR_NULL(bcache_debug))" might be buggy.
> >
> > This patch initializes global varabile bcache_debug to be NULL, to make
> > the failure code path to be predictable.
> >
> > Signed-off-by: Coly Li <colyli@suse.de>
> > ---
> >
> 
> Makes sense
> 
> Reviewed-by: Noah Massey <noah.massey@gmail.com>

Aren't global variables zeroed by default anyway?

> >  drivers/md/bcache/debug.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
> > index 57f8f5aeee55..24b0eb65ddec 100644
> > --- a/drivers/md/bcache/debug.c
> > +++ b/drivers/md/bcache/debug.c
> > @@ -17,7 +17,7 @@
> >  #include <linux/random.h>
> >  #include <linux/seq_file.h>
> >
> > -struct dentry *bcache_debug;
> > +struct dentry *bcache_debug = NULL;
> >
> >  #ifdef CONFIG_BCACHE_DEBUG
> >
> > --
> > 2.17.1
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >

-- 
Vojtech Pavlik
Director SUSE Labs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 8/9] bcache: initiate bcache_debug to NULL
  2018-07-27 19:32     ` Noah Massey
@ 2018-07-28  5:24       ` Coly Li
  0 siblings, 0 replies; 15+ messages in thread
From: Coly Li @ 2018-07-28  5:24 UTC (permalink / raw)
  To: Noah Massey, Bart Van Assche, Vojtech Pavlik; +Cc: linux-bcache, linux-block

On 2018/7/28 3:32 AM, Noah Massey wrote:
> On Fri, Jul 27, 2018 at 1:37 PM Noah Massey <noah.massey@gmail.com> wrote:
>> On Thu, Jul 26, 2018, 10:23 AM Coly Li <colyli@suse.de> wrote:
>>>
>>> Global variable bcache_debug is firstly initialized in bch_debug_init(),
>>> and destroyed in bch_debug_exit(). bch_debug_init() is called in
>>> bcache_init() with many other functions, if one of the previous calling
>>> onces failed, bcache_exit() will be called in the failure path.
>>>
>>> The problem is, if bcache_init() fails before bch_debug_init() is called,
>>> then in bcache_exit() when bch_debug_exit() is called to destroy global
>>> variable bcache_debug, at this moment bcache_debug is unndefined, then the
>>> test of "if (!IS_ERR_OR_NULL(bcache_debug))" might be buggy.
>>>
>>> This patch initializes global varabile bcache_debug to be NULL, to make
>>> the failure code path to be predictable.
>>>
>>> Signed-off-by: Coly Li <colyli@suse.de>
>>> ---
>>
>>
>> Makes sense
>>
>> Reviewed-by: Noah Massey <noah.massey@gmail.com>
>>
> 
> Wait... aren't static variables already initialized to 0?
> So this makes it explicit, but no code change to generated binary, right?

Hi Noah, Vojtech and Bart,

You are right, of course. I just noticed this is already in C11 (and
earlier versions) for static storage duration,

6.7.9 Initialization
10 If an object that has automatic storage duration is not initialized
explicitly, its value is indeterminate. If an object that has static or
thread storage duration is not initialized explicitly, then:
— if it has pointer type, it is initialized to a null pointer;
— if it has arithmetic type, it is initialized to (positive or unsigned)
zero;
— if it is an aggregate, every member is initialized (recursively)
according to these rules, and any padding is initialized to zero bits;
— if it is a union, the first named member is initialized (recursively)
according to these rules, and any padding is initialized to zero bits;

Then this patch is useless; it is unnecessary.
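
For completeness, a short demonstration of that rule in plain C (any
C99/C11 compiler will do):

#include <assert.h>
#include <stddef.h>

static int *file_scope_ptr;	/* static storage duration, no initializer */

int main(void)
{
	/* Guaranteed by C11 6.7.9p10: implicitly initialized to NULL. */
	assert(file_scope_ptr == NULL);
	return 0;
}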

Thank you all for the comments, I am about to read the C11 document :-)

Coly Li

^ permalink raw reply	[flat|nested] 15+ messages in thread


Thread overview: 15+ messages
2018-07-26 14:22 [PATCH 0/9] Pending bcache patches for review Coly Li
2018-07-26 14:22 ` [PATCH 1/9] bcache: do not check return value of debugfs_create_dir() Coly Li
2018-07-26 14:22 ` [PATCH 2/9] bcache: display rate debug parameters to 0 when writeback is not running Coly Li
2018-07-26 14:22 ` [PATCH 3/9] bcache: avoid unncessary cache prefetch bch_btree_node_get() Coly Li
2018-07-26 14:22 ` [PATCH 4/9] bcache: add a comment in register_bdev() Coly Li
2018-07-26 14:22 ` [PATCH 5/9] bcache: fix mistaken code comments in struct cache Coly Li
2018-07-26 14:22 ` [PATCH 6/9] bcache: fix mistaken comments in bch_keylist_realloc() Coly Li
2018-07-26 14:22 ` [PATCH 7/9] bcache: add code comments for bset.c Coly Li
2018-07-26 14:22 ` [PATCH 8/9] bcache: initiate bcache_debug to NULL Coly Li
2018-07-27 17:37   ` Noah Massey
2018-07-27 19:32     ` Noah Massey
2018-07-28  5:24       ` Coly Li
2018-07-27 21:53     ` Vojtech Pavlik
2018-07-27 19:49   ` Bart Van Assche
2018-07-26 14:22 ` [PATCH 9/9] bcache: set max writeback rate when I/O request is idle Coly Li
