All of lore.kernel.org
* [PATCH 00/14] bcache: remove multiple caches code framework
@ 2020-08-15  4:10 Coly Li
  2020-08-15  4:10 ` [PATCH 01/14] bcache: remove 'int n' from parameter list of bch_bucket_alloc_set() Coly Li
                   ` (13 more replies)
  0 siblings, 14 replies; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

The multiple caches code framework in bcache is meant to store multiple
copies of the cached data among multiple caches of the cache set. The
current code simply writes the data to each cache without any extra
condition handling (e.g. device failure, slow devices). This code
framework is incomplete and will never be completed. Considering that
people may use md raid1 for the same data duplication purpose, the
multiple caches framework is effectively dead code.

Due to the multiple caches code framework, bcache has two data
structures, struct cache and struct cache_set, to manage the cache
device. In fact, ever since bcache was merged into the mainline kernel
in Linux v3.10, a cache set has only had one cache; the unnecessary
two-level abstraction takes extra effort to maintain redundant
information between struct cache and struct cache_set, for example the
in-memory super block struct cache_sb.

This is the first wave of the effort to remove the multiple caches
framework and make the relation between the code and data structures
clearer. This series explicitly makes each cache set have only a single
cache, removes the embedded partial super block in struct cache_set in
favor of directly referencing the cache's in-memory super block, and
finally moves struct cache_sb from include/uapi/linux/bcache.h to
drivers/md/bcache/bcache.h since it isn't part of the uapi anymore.

The patch set currently only passes compilation; I post this series
early for your review and comments. More fixes after testing will
follow up soon.

Thanks in advance.

Coly Li
----

Coly Li (14):
  bcache: remove 'int n' from parameter list of bch_bucket_alloc_set()
  bcache: explicitly make cache_set only have single cache
  bcache: remove for_each_cache()
  bcache: add set_uuid in struct cache_set
  bcache: only use block_bytes() on struct cache
  bcache: remove useless alloc_bucket_pages()
  bcache: remove useless bucket_pages()
  bcache: only use bucket_bytes() on struct cache
  bcache: avoid data copy between cache_set->sb and cache->sb
  bcache: don't check seq numbers in register_cache_set()
  bcache: remove can_attach_cache()
  bcache: check and set sync status on cache's in-memory super block
  bcache: remove embedded struct cache_sb from struct cache_set
  bcache: move struct cache_sb out of uapi bcache.h

 drivers/md/bcache/alloc.c     |  60 ++++-----
 drivers/md/bcache/bcache.h    | 128 +++++++++++++++---
 drivers/md/bcache/btree.c     | 144 ++++++++++----------
 drivers/md/bcache/btree.h     |   2 +-
 drivers/md/bcache/debug.c     |  10 +-
 drivers/md/bcache/extents.c   |   6 +-
 drivers/md/bcache/features.c  |   4 +-
 drivers/md/bcache/io.c        |   2 +-
 drivers/md/bcache/journal.c   | 246 ++++++++++++++++------------------
 drivers/md/bcache/movinggc.c  |  58 ++++----
 drivers/md/bcache/request.c   |   6 +-
 drivers/md/bcache/super.c     | 225 +++++++++++--------------------
 drivers/md/bcache/sysfs.c     |  10 +-
 drivers/md/bcache/writeback.c |   2 +-
 include/trace/events/bcache.h |   4 +-
 include/uapi/linux/bcache.h   |  98 --------------
 16 files changed, 445 insertions(+), 560 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 01/14] bcache: remove 'int n' from parameter list of bch_bucket_alloc_set()
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:08   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 02/14] bcache: explicitly make cache_set only have single cache Coly Li
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

The parameter 'int n' of bch_bucket_alloc_set() is not clearly defined.
According to the code comments, n is the number of buckets to allocate,
but in the code itself 'n' is the maximum number of caches to iterate
over. In fact, at every location where bch_bucket_alloc_set() is
called, 'n' is always 1.

This patch removes the confusing and unnecessary 'int n' from the
parameter list of bch_bucket_alloc_set(), and explicitly allocates only
1 bucket for its callers.
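
The shape of the change can be sketched in user space (mock types and
helper names, not the kernel code; 'nptrs' stands in for the real
SET_KEY_PTRS() macro):

```c
#include <assert.h>

/* trimmed stand-in for the kernel's struct bkey */
struct bkey {
	long ptr[1];
	unsigned int nptrs;
};

/* mock single-cache allocator: returns a bucket index or -1 */
static long bucket_alloc(int *free_buckets)
{
	return (*free_buckets > 0) ? --*free_buckets : -1;
}

/* after the patch: no 'int n' parameter, exactly one bucket is
 * allocated and exactly one pointer is set in the key */
static int bucket_alloc_set(int *free_buckets, struct bkey *k)
{
	long b = bucket_alloc(free_buckets);

	if (b == -1)
		return -1;
	k->ptr[0] = b;
	k->nptrs = 1;	/* the SET_KEY_PTRS(k, 1) equivalent */
	return 0;
}
```

Since every caller passed n == 1 already, the loop over n caches in the
real function collapses to this single allocation path.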

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/alloc.c  | 35 +++++++++++++++--------------------
 drivers/md/bcache/bcache.h |  4 ++--
 drivers/md/bcache/btree.c  |  2 +-
 drivers/md/bcache/super.c  |  2 +-
 4 files changed, 19 insertions(+), 24 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 52035a78d836..4493ff57476d 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -49,7 +49,7 @@
  *
  * bch_bucket_alloc() allocates a single bucket from a specific cache.
  *
- * bch_bucket_alloc_set() allocates one or more buckets from different caches
+ * bch_bucket_alloc_set() allocates one  bucket from different caches
  * out of a cache set.
  *
  * free_some_buckets() drives all the processes described above. It's called
@@ -488,34 +488,29 @@ void bch_bucket_free(struct cache_set *c, struct bkey *k)
 }
 
 int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
-			   struct bkey *k, int n, bool wait)
+			   struct bkey *k, bool wait)
 {
-	int i;
+	struct cache *ca;
+	long b;
 
 	/* No allocation if CACHE_SET_IO_DISABLE bit is set */
 	if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags)))
 		return -1;
 
 	lockdep_assert_held(&c->bucket_lock);
-	BUG_ON(!n || n > c->caches_loaded || n > MAX_CACHES_PER_SET);
 
 	bkey_init(k);
 
-	/* sort by free space/prio of oldest data in caches */
-
-	for (i = 0; i < n; i++) {
-		struct cache *ca = c->cache_by_alloc[i];
-		long b = bch_bucket_alloc(ca, reserve, wait);
+	ca = c->cache_by_alloc[0];
+	b = bch_bucket_alloc(ca, reserve, wait);
+	if (b == -1)
+		goto err;
 
-		if (b == -1)
-			goto err;
+	k->ptr[0] = MAKE_PTR(ca->buckets[b].gen,
+			     bucket_to_sector(c, b),
+			     ca->sb.nr_this_dev);
 
-		k->ptr[i] = MAKE_PTR(ca->buckets[b].gen,
-				bucket_to_sector(c, b),
-				ca->sb.nr_this_dev);
-
-		SET_KEY_PTRS(k, i + 1);
-	}
+	SET_KEY_PTRS(k, 1);
 
 	return 0;
 err:
@@ -525,12 +520,12 @@ int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
 }
 
 int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
-			 struct bkey *k, int n, bool wait)
+			 struct bkey *k, bool wait)
 {
 	int ret;
 
 	mutex_lock(&c->bucket_lock);
-	ret = __bch_bucket_alloc_set(c, reserve, k, n, wait);
+	ret = __bch_bucket_alloc_set(c, reserve, k, wait);
 	mutex_unlock(&c->bucket_lock);
 	return ret;
 }
@@ -638,7 +633,7 @@ bool bch_alloc_sectors(struct cache_set *c,
 
 		spin_unlock(&c->data_bucket_lock);
 
-		if (bch_bucket_alloc_set(c, watermark, &alloc.key, 1, wait))
+		if (bch_bucket_alloc_set(c, watermark, &alloc.key, wait))
 			return false;
 
 		spin_lock(&c->data_bucket_lock);
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 4fd03d2496d8..5ff6e9573935 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -994,9 +994,9 @@ void bch_bucket_free(struct cache_set *c, struct bkey *k);
 
 long bch_bucket_alloc(struct cache *ca, unsigned int reserve, bool wait);
 int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
-			   struct bkey *k, int n, bool wait);
+			   struct bkey *k, bool wait);
 int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
-			 struct bkey *k, int n, bool wait);
+			 struct bkey *k, bool wait);
 bool bch_alloc_sectors(struct cache_set *c, struct bkey *k,
 		       unsigned int sectors, unsigned int write_point,
 		       unsigned int write_prio, bool wait);
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 3d8bd0692af3..e2a719fed53b 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1091,7 +1091,7 @@ struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
 
 	mutex_lock(&c->bucket_lock);
 retry:
-	if (__bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, 1, wait))
+	if (__bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, wait))
 		goto err;
 
 	bkey_put(c, &k.key);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 1bbdc410ee3c..7057ec48f3d1 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -486,7 +486,7 @@ static int __uuid_write(struct cache_set *c)
 	closure_init_stack(&cl);
 	lockdep_assert_held(&bch_register_lock);
 
-	if (bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, 1, true))
+	if (bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, true))
 		return 1;
 
 	size =  meta_bucket_pages(&c->sb) * PAGE_SECTORS;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 02/14] bcache: explicitly make cache_set only have single cache
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
  2020-08-15  4:10 ` [PATCH 01/14] bcache: remove 'int n' from parameter list of bch_bucket_alloc_set() Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:11   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 03/14] bcache: remove for_each_cache() Coly Li
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Although the bcache code has a framework for multiple caches in a cache
set, the multiple caches support was never completed, and users rely on
md raid1 for multiple copies of the cached data.

This patch makes the following changes in struct cache_set, to
explicitly make a cache_set have only a single cache:
- Change pointer array "*cache[MAX_CACHES_PER_SET]" to a single pointer
  "*cache".
- Remove pointer array "*cache_by_alloc[MAX_CACHES_PER_SET]".
- Remove "caches_loaded".

Now the code matches exactly what it does in practice: only one cache
is used in the cache set.
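
As a rough user-space sketch (field names mirror the kernel structs,
everything else is trimmed), the per-set cache array and its
bookkeeping collapse into one pointer:

```c
#include <assert.h>

#define MAX_CACHES_PER_SET 8

struct cache {
	unsigned int nr_this_dev;
};

/* before this patch: array plus duplicated bookkeeping */
struct cache_set_old {
	struct cache *cache[MAX_CACHES_PER_SET];
	struct cache *cache_by_alloc[MAX_CACHES_PER_SET];
	int caches_loaded;
};

/* after this patch: a single pointer */
struct cache_set {
	struct cache *cache;
};

/* registering a cache no longer indexes by device id or counts
 * loaded caches */
static void register_single_cache(struct cache_set *c, struct cache *ca)
{
	c->cache = ca;
}
```

Every lookup that used to index by ca->sb.nr_this_dev now just
dereferences the single member.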

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/alloc.c  |  2 +-
 drivers/md/bcache/bcache.h |  8 +++-----
 drivers/md/bcache/super.c  | 19 ++++++++-----------
 3 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 4493ff57476d..3385f6add6df 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -501,7 +501,7 @@ int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
 
 	bkey_init(k);
 
-	ca = c->cache_by_alloc[0];
+	ca = c->cache;
 	b = bch_bucket_alloc(ca, reserve, wait);
 	if (b == -1)
 		goto err;
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 5ff6e9573935..aa112c1adba1 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -519,9 +519,7 @@ struct cache_set {
 
 	struct cache_sb		sb;
 
-	struct cache		*cache[MAX_CACHES_PER_SET];
-	struct cache		*cache_by_alloc[MAX_CACHES_PER_SET];
-	int			caches_loaded;
+	struct cache		*cache;
 
 	struct bcache_device	**devices;
 	unsigned int		devices_max_used;
@@ -808,7 +806,7 @@ static inline struct cache *PTR_CACHE(struct cache_set *c,
 				      const struct bkey *k,
 				      unsigned int ptr)
 {
-	return c->cache[PTR_DEV(k, ptr)];
+	return c->cache;
 }
 
 static inline size_t PTR_BUCKET_NR(struct cache_set *c,
@@ -890,7 +888,7 @@ do {									\
 /* Looping macros */
 
 #define for_each_cache(ca, cs, iter)					\
-	for (iter = 0; ca = cs->cache[iter], iter < (cs)->sb.nr_in_set; iter++)
+	for (iter = 0; ca = cs->cache, iter < 1; iter++)
 
 #define for_each_bucket(b, ca)						\
 	for (b = (ca)->buckets + (ca)->sb.first_bucket;			\
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 7057ec48f3d1..e9ccfa17beb8 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1675,7 +1675,7 @@ static void cache_set_free(struct closure *cl)
 	for_each_cache(ca, c, i)
 		if (ca) {
 			ca->set = NULL;
-			c->cache[ca->sb.nr_this_dev] = NULL;
+			c->cache = NULL;
 			kobject_put(&ca->kobj);
 		}
 
@@ -2166,7 +2166,7 @@ static const char *register_cache_set(struct cache *ca)
 
 	list_for_each_entry(c, &bch_cache_sets, list)
 		if (!memcmp(c->sb.set_uuid, ca->sb.set_uuid, 16)) {
-			if (c->cache[ca->sb.nr_this_dev])
+			if (c->cache)
 				return "duplicate cache set member";
 
 			if (!can_attach_cache(ca, c))
@@ -2216,14 +2216,11 @@ static const char *register_cache_set(struct cache *ca)
 
 	kobject_get(&ca->kobj);
 	ca->set = c;
-	ca->set->cache[ca->sb.nr_this_dev] = ca;
-	c->cache_by_alloc[c->caches_loaded++] = ca;
+	ca->set->cache = ca;
 
-	if (c->caches_loaded == c->sb.nr_in_set) {
-		err = "failed to run cache set";
-		if (run_cache_set(c) < 0)
-			goto err;
-	}
+	err = "failed to run cache set";
+	if (run_cache_set(c) < 0)
+		goto err;
 
 	return NULL;
 err:
@@ -2240,8 +2237,8 @@ void bch_cache_release(struct kobject *kobj)
 	unsigned int i;
 
 	if (ca->set) {
-		BUG_ON(ca->set->cache[ca->sb.nr_this_dev] != ca);
-		ca->set->cache[ca->sb.nr_this_dev] = NULL;
+		BUG_ON(ca->set->cache != ca);
+		ca->set->cache = NULL;
 	}
 
 	free_pages((unsigned long) ca->disk_buckets, ilog2(meta_bucket_pages(&ca->sb)));
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 03/14] bcache: remove for_each_cache()
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
  2020-08-15  4:10 ` [PATCH 01/14] bcache: remove 'int n' from parameter list of bch_bucket_alloc_set() Coly Li
  2020-08-15  4:10 ` [PATCH 02/14] bcache: explicitly make cache_set only have single cache Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:13   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 04/14] bcache: add set_uuid in struct cache_set Coly Li
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Since each cache_set now explicitly has a single cache, for_each_cache()
is unnecessary. This patch removes the macro, updates all locations
where it is used, and makes sure all code logic remains consistent.
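
With exactly one cache per set, the iteration macro degenerates to a
single assignment, so each caller can be rewritten from a loop into a
plain pointer fetch. A user-space sketch with mock types (the macro
below mirrors the degenerate form left by the previous patch):

```c
#include <assert.h>

struct cache {
	int discard;
};

struct cache_set {
	struct cache *cache;
};

/* the degenerate loop: runs its body exactly once */
#define for_each_cache(ca, cs, iter) \
	for ((iter) = 0; (ca) = (cs)->cache, (iter) < 1; (iter)++)

/* before: loop over "all" caches */
static int is_discard_enabled_old(struct cache_set *s)
{
	struct cache *ca;
	unsigned int i;

	for_each_cache(ca, s, i)
		if (ca->discard)
			return 1;
	return 0;
}

/* after: direct access to the single cache */
static int is_discard_enabled_new(struct cache_set *s)
{
	return s->cache->discard != 0;
}
```

Both versions are equivalent once a set holds exactly one cache, which
is why the macro and its iterator variables can be dropped everywhere.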

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/alloc.c    |  17 ++-
 drivers/md/bcache/bcache.h   |   9 +-
 drivers/md/bcache/btree.c    | 103 +++++++---------
 drivers/md/bcache/journal.c  | 229 ++++++++++++++++-------------------
 drivers/md/bcache/movinggc.c |  58 +++++----
 drivers/md/bcache/super.c    | 114 +++++++----------
 6 files changed, 236 insertions(+), 294 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 3385f6add6df..1b8310992dd0 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -88,7 +88,6 @@ void bch_rescale_priorities(struct cache_set *c, int sectors)
 	struct cache *ca;
 	struct bucket *b;
 	unsigned long next = c->nbuckets * c->sb.bucket_size / 1024;
-	unsigned int i;
 	int r;
 
 	atomic_sub(sectors, &c->rescale);
@@ -104,14 +103,14 @@ void bch_rescale_priorities(struct cache_set *c, int sectors)
 
 	c->min_prio = USHRT_MAX;
 
-	for_each_cache(ca, c, i)
-		for_each_bucket(b, ca)
-			if (b->prio &&
-			    b->prio != BTREE_PRIO &&
-			    !atomic_read(&b->pin)) {
-				b->prio--;
-				c->min_prio = min(c->min_prio, b->prio);
-			}
+	ca = c->cache;
+	for_each_bucket(b, ca)
+		if (b->prio &&
+		    b->prio != BTREE_PRIO &&
+		    !atomic_read(&b->pin)) {
+			b->prio--;
+			c->min_prio = min(c->min_prio, b->prio);
+		}
 
 	mutex_unlock(&c->bucket_lock);
 }
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index aa112c1adba1..7ffe6b2d179b 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -887,9 +887,6 @@ do {									\
 
 /* Looping macros */
 
-#define for_each_cache(ca, cs, iter)					\
-	for (iter = 0; ca = cs->cache, iter < 1; iter++)
-
 #define for_each_bucket(b, ca)						\
 	for (b = (ca)->buckets + (ca)->sb.first_bucket;			\
 	     b < (ca)->buckets + (ca)->sb.nbuckets; b++)
@@ -931,11 +928,9 @@ static inline uint8_t bucket_gc_gen(struct bucket *b)
 
 static inline void wake_up_allocators(struct cache_set *c)
 {
-	struct cache *ca;
-	unsigned int i;
+	struct cache *ca = c->cache;
 
-	for_each_cache(ca, c, i)
-		wake_up_process(ca->alloc_thread);
+	wake_up_process(ca->alloc_thread);
 }
 
 static inline void closure_bio_submit(struct cache_set *c,
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index e2a719fed53b..0817ad510d9f 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1167,19 +1167,18 @@ static void make_btree_freeing_key(struct btree *b, struct bkey *k)
 static int btree_check_reserve(struct btree *b, struct btree_op *op)
 {
 	struct cache_set *c = b->c;
-	struct cache *ca;
-	unsigned int i, reserve = (c->root->level - b->level) * 2 + 1;
+	struct cache *ca = c->cache;
+	unsigned int reserve = (c->root->level - b->level) * 2 + 1;
 
 	mutex_lock(&c->bucket_lock);
 
-	for_each_cache(ca, c, i)
-		if (fifo_used(&ca->free[RESERVE_BTREE]) < reserve) {
-			if (op)
-				prepare_to_wait(&c->btree_cache_wait, &op->wait,
-						TASK_UNINTERRUPTIBLE);
-			mutex_unlock(&c->bucket_lock);
-			return -EINTR;
-		}
+	if (fifo_used(&ca->free[RESERVE_BTREE]) < reserve) {
+		if (op)
+			prepare_to_wait(&c->btree_cache_wait, &op->wait,
+					TASK_UNINTERRUPTIBLE);
+		mutex_unlock(&c->bucket_lock);
+		return -EINTR;
+	}
 
 	mutex_unlock(&c->bucket_lock);
 
@@ -1695,7 +1694,6 @@ static void btree_gc_start(struct cache_set *c)
 {
 	struct cache *ca;
 	struct bucket *b;
-	unsigned int i;
 
 	if (!c->gc_mark_valid)
 		return;
@@ -1705,14 +1703,14 @@ static void btree_gc_start(struct cache_set *c)
 	c->gc_mark_valid = 0;
 	c->gc_done = ZERO_KEY;
 
-	for_each_cache(ca, c, i)
-		for_each_bucket(b, ca) {
-			b->last_gc = b->gen;
-			if (!atomic_read(&b->pin)) {
-				SET_GC_MARK(b, 0);
-				SET_GC_SECTORS_USED(b, 0);
-			}
+	ca = c->cache;
+	for_each_bucket(b, ca) {
+		b->last_gc = b->gen;
+		if (!atomic_read(&b->pin)) {
+			SET_GC_MARK(b, 0);
+			SET_GC_SECTORS_USED(b, 0);
 		}
+	}
 
 	mutex_unlock(&c->bucket_lock);
 }
@@ -1721,7 +1719,8 @@ static void bch_btree_gc_finish(struct cache_set *c)
 {
 	struct bucket *b;
 	struct cache *ca;
-	unsigned int i;
+	unsigned int i, j;
+	uint64_t *k;
 
 	mutex_lock(&c->bucket_lock);
 
@@ -1739,7 +1738,6 @@ static void bch_btree_gc_finish(struct cache_set *c)
 		struct bcache_device *d = c->devices[i];
 		struct cached_dev *dc;
 		struct keybuf_key *w, *n;
-		unsigned int j;
 
 		if (!d || UUID_FLASH_ONLY(&c->uuids[i]))
 			continue;
@@ -1756,29 +1754,27 @@ static void bch_btree_gc_finish(struct cache_set *c)
 	rcu_read_unlock();
 
 	c->avail_nbuckets = 0;
-	for_each_cache(ca, c, i) {
-		uint64_t *i;
 
-		ca->invalidate_needs_gc = 0;
+	ca = c->cache;
+	ca->invalidate_needs_gc = 0;
 
-		for (i = ca->sb.d; i < ca->sb.d + ca->sb.keys; i++)
-			SET_GC_MARK(ca->buckets + *i, GC_MARK_METADATA);
+	for (k = ca->sb.d; k < ca->sb.d + ca->sb.keys; k++)
+		SET_GC_MARK(ca->buckets + *k, GC_MARK_METADATA);
 
-		for (i = ca->prio_buckets;
-		     i < ca->prio_buckets + prio_buckets(ca) * 2; i++)
-			SET_GC_MARK(ca->buckets + *i, GC_MARK_METADATA);
+	for (k = ca->prio_buckets;
+	     k < ca->prio_buckets + prio_buckets(ca) * 2; k++)
+		SET_GC_MARK(ca->buckets + *k, GC_MARK_METADATA);
 
-		for_each_bucket(b, ca) {
-			c->need_gc	= max(c->need_gc, bucket_gc_gen(b));
+	for_each_bucket(b, ca) {
+		c->need_gc	= max(c->need_gc, bucket_gc_gen(b));
 
-			if (atomic_read(&b->pin))
-				continue;
+		if (atomic_read(&b->pin))
+			continue;
 
-			BUG_ON(!GC_MARK(b) && GC_SECTORS_USED(b));
+		BUG_ON(!GC_MARK(b) && GC_SECTORS_USED(b));
 
-			if (!GC_MARK(b) || GC_MARK(b) == GC_MARK_RECLAIMABLE)
-				c->avail_nbuckets++;
-		}
+		if (!GC_MARK(b) || GC_MARK(b) == GC_MARK_RECLAIMABLE)
+			c->avail_nbuckets++;
 	}
 
 	mutex_unlock(&c->bucket_lock);
@@ -1830,12 +1826,10 @@ static void bch_btree_gc(struct cache_set *c)
 
 static bool gc_should_run(struct cache_set *c)
 {
-	struct cache *ca;
-	unsigned int i;
+	struct cache *ca = c->cache;
 
-	for_each_cache(ca, c, i)
-		if (ca->invalidate_needs_gc)
-			return true;
+	if (ca->invalidate_needs_gc)
+		return true;
 
 	if (atomic_read(&c->sectors_to_gc) < 0)
 		return true;
@@ -2081,9 +2075,8 @@ int bch_btree_check(struct cache_set *c)
 
 void bch_initial_gc_finish(struct cache_set *c)
 {
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	struct bucket *b;
-	unsigned int i;
 
 	bch_btree_gc_finish(c);
 
@@ -2098,20 +2091,18 @@ void bch_initial_gc_finish(struct cache_set *c)
 	 * This is only safe for buckets that have no live data in them, which
 	 * there should always be some of.
 	 */
-	for_each_cache(ca, c, i) {
-		for_each_bucket(b, ca) {
-			if (fifo_full(&ca->free[RESERVE_PRIO]) &&
-			    fifo_full(&ca->free[RESERVE_BTREE]))
-				break;
+	for_each_bucket(b, ca) {
+		if (fifo_full(&ca->free[RESERVE_PRIO]) &&
+		    fifo_full(&ca->free[RESERVE_BTREE]))
+			break;
 
-			if (bch_can_invalidate_bucket(ca, b) &&
-			    !GC_MARK(b)) {
-				__bch_invalidate_one_bucket(ca, b);
-				if (!fifo_push(&ca->free[RESERVE_PRIO],
-				   b - ca->buckets))
-					fifo_push(&ca->free[RESERVE_BTREE],
-						  b - ca->buckets);
-			}
+		if (bch_can_invalidate_bucket(ca, b) &&
+		    !GC_MARK(b)) {
+			__bch_invalidate_one_bucket(ca, b);
+			if (!fifo_push(&ca->free[RESERVE_PRIO],
+			   b - ca->buckets))
+				fifo_push(&ca->free[RESERVE_BTREE],
+					  b - ca->buckets);
 		}
 	}
 
diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
index 77fbfd52edcf..027d0f8c4daf 100644
--- a/drivers/md/bcache/journal.c
+++ b/drivers/md/bcache/journal.c
@@ -179,112 +179,109 @@ int bch_journal_read(struct cache_set *c, struct list_head *list)
 		ret;							\
 	})
 
-	struct cache *ca;
-	unsigned int iter;
+	struct cache *ca = c->cache;
 	int ret = 0;
+	struct journal_device *ja = &ca->journal;
+	DECLARE_BITMAP(bitmap, SB_JOURNAL_BUCKETS);
+	unsigned int i, l, r, m;
+	uint64_t seq;
 
-	for_each_cache(ca, c, iter) {
-		struct journal_device *ja = &ca->journal;
-		DECLARE_BITMAP(bitmap, SB_JOURNAL_BUCKETS);
-		unsigned int i, l, r, m;
-		uint64_t seq;
-
-		bitmap_zero(bitmap, SB_JOURNAL_BUCKETS);
-		pr_debug("%u journal buckets\n", ca->sb.njournal_buckets);
+	bitmap_zero(bitmap, SB_JOURNAL_BUCKETS);
+	pr_debug("%u journal buckets\n", ca->sb.njournal_buckets);
 
+	/*
+	 * Read journal buckets ordered by golden ratio hash to quickly
+	 * find a sequence of buckets with valid journal entries
+	 */
+	for (i = 0; i < ca->sb.njournal_buckets; i++) {
 		/*
-		 * Read journal buckets ordered by golden ratio hash to quickly
-		 * find a sequence of buckets with valid journal entries
+		 * We must try the index l with ZERO first for
+		 * correctness due to the scenario that the journal
+		 * bucket is circular buffer which might have wrapped
 		 */
-		for (i = 0; i < ca->sb.njournal_buckets; i++) {
-			/*
-			 * We must try the index l with ZERO first for
-			 * correctness due to the scenario that the journal
-			 * bucket is circular buffer which might have wrapped
-			 */
-			l = (i * 2654435769U) % ca->sb.njournal_buckets;
+		l = (i * 2654435769U) % ca->sb.njournal_buckets;
 
-			if (test_bit(l, bitmap))
-				break;
+		if (test_bit(l, bitmap))
+			break;
 
-			if (read_bucket(l))
-				goto bsearch;
-		}
+		if (read_bucket(l))
+			goto bsearch;
+	}
 
-		/*
-		 * If that fails, check all the buckets we haven't checked
-		 * already
-		 */
-		pr_debug("falling back to linear search\n");
+	/*
+	 * If that fails, check all the buckets we haven't checked
+	 * already
+	 */
+	pr_debug("falling back to linear search\n");
 
-		for_each_clear_bit(l, bitmap, ca->sb.njournal_buckets)
-			if (read_bucket(l))
-				goto bsearch;
+	for_each_clear_bit(l, bitmap, ca->sb.njournal_buckets)
+		if (read_bucket(l))
+			goto bsearch;
 
-		/* no journal entries on this device? */
-		if (l == ca->sb.njournal_buckets)
-			continue;
+	/* no journal entries on this device? */
+	if (l == ca->sb.njournal_buckets)
+		goto out;
 bsearch:
-		BUG_ON(list_empty(list));
+	BUG_ON(list_empty(list));
 
-		/* Binary search */
-		m = l;
-		r = find_next_bit(bitmap, ca->sb.njournal_buckets, l + 1);
-		pr_debug("starting binary search, l %u r %u\n", l, r);
+	/* Binary search */
+	m = l;
+	r = find_next_bit(bitmap, ca->sb.njournal_buckets, l + 1);
+	pr_debug("starting binary search, l %u r %u\n", l, r);
 
-		while (l + 1 < r) {
-			seq = list_entry(list->prev, struct journal_replay,
-					 list)->j.seq;
+	while (l + 1 < r) {
+		seq = list_entry(list->prev, struct journal_replay,
+				 list)->j.seq;
 
-			m = (l + r) >> 1;
-			read_bucket(m);
+		m = (l + r) >> 1;
+		read_bucket(m);
 
-			if (seq != list_entry(list->prev, struct journal_replay,
-					      list)->j.seq)
-				l = m;
-			else
-				r = m;
-		}
+		if (seq != list_entry(list->prev, struct journal_replay,
+				      list)->j.seq)
+			l = m;
+		else
+			r = m;
+	}
 
-		/*
-		 * Read buckets in reverse order until we stop finding more
-		 * journal entries
-		 */
-		pr_debug("finishing up: m %u njournal_buckets %u\n",
-			 m, ca->sb.njournal_buckets);
-		l = m;
+	/*
+	 * Read buckets in reverse order until we stop finding more
+	 * journal entries
+	 */
+	pr_debug("finishing up: m %u njournal_buckets %u\n",
+		 m, ca->sb.njournal_buckets);
+	l = m;
 
-		while (1) {
-			if (!l--)
-				l = ca->sb.njournal_buckets - 1;
+	while (1) {
+		if (!l--)
+			l = ca->sb.njournal_buckets - 1;
 
-			if (l == m)
-				break;
+		if (l == m)
+			break;
 
-			if (test_bit(l, bitmap))
-				continue;
+		if (test_bit(l, bitmap))
+			continue;
 
-			if (!read_bucket(l))
-				break;
-		}
+		if (!read_bucket(l))
+			break;
+	}
 
-		seq = 0;
+	seq = 0;
 
-		for (i = 0; i < ca->sb.njournal_buckets; i++)
-			if (ja->seq[i] > seq) {
-				seq = ja->seq[i];
-				/*
-				 * When journal_reclaim() goes to allocate for
-				 * the first time, it'll use the bucket after
-				 * ja->cur_idx
-				 */
-				ja->cur_idx = i;
-				ja->last_idx = ja->discard_idx = (i + 1) %
-					ca->sb.njournal_buckets;
+	for (i = 0; i < ca->sb.njournal_buckets; i++)
+		if (ja->seq[i] > seq) {
+			seq = ja->seq[i];
+			/*
+			 * When journal_reclaim() goes to allocate for
+			 * the first time, it'll use the bucket after
+			 * ja->cur_idx
+			 */
+			ja->cur_idx = i;
+			ja->last_idx = ja->discard_idx = (i + 1) %
+				ca->sb.njournal_buckets;
 
-			}
-	}
+		}
 
+out:
 	if (!list_empty(list))
 		c->journal.seq = list_entry(list->prev,
 					    struct journal_replay,
@@ -342,12 +339,10 @@ void bch_journal_mark(struct cache_set *c, struct list_head *list)
 
 static bool is_discard_enabled(struct cache_set *s)
 {
-	struct cache *ca;
-	unsigned int i;
+	struct cache *ca = s->cache;
 
-	for_each_cache(ca, s, i)
-		if (ca->discard)
-			return true;
+	if (ca->discard)
+		return true;
 
 	return false;
 }
@@ -633,9 +628,10 @@ static void do_journal_discard(struct cache *ca)
 static void journal_reclaim(struct cache_set *c)
 {
 	struct bkey *k = &c->journal.key;
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	uint64_t last_seq;
-	unsigned int iter, n = 0;
+	unsigned int next;
+	struct journal_device *ja = &ca->journal;
 	atomic_t p __maybe_unused;
 
 	atomic_long_inc(&c->reclaim);
@@ -647,46 +643,31 @@ static void journal_reclaim(struct cache_set *c)
 
 	/* Update last_idx */
 
-	for_each_cache(ca, c, iter) {
-		struct journal_device *ja = &ca->journal;
-
-		while (ja->last_idx != ja->cur_idx &&
-		       ja->seq[ja->last_idx] < last_seq)
-			ja->last_idx = (ja->last_idx + 1) %
-				ca->sb.njournal_buckets;
-	}
+	while (ja->last_idx != ja->cur_idx &&
+	       ja->seq[ja->last_idx] < last_seq)
+		ja->last_idx = (ja->last_idx + 1) %
+			ca->sb.njournal_buckets;
 
-	for_each_cache(ca, c, iter)
-		do_journal_discard(ca);
+	do_journal_discard(ca);
 
 	if (c->journal.blocks_free)
 		goto out;
 
-	/*
-	 * Allocate:
-	 * XXX: Sort by free journal space
-	 */
-
-	for_each_cache(ca, c, iter) {
-		struct journal_device *ja = &ca->journal;
-		unsigned int next = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+	next = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+	/* No space available on this device */
+	if (next == ja->discard_idx)
+		goto out;
 
-		/* No space available on this device */
-		if (next == ja->discard_idx)
-			continue;
+	ja->cur_idx = next;
+	k->ptr[0] = MAKE_PTR(0,
+			     bucket_to_sector(c, ca->sb.d[ja->cur_idx]),
+			     ca->sb.nr_this_dev);
+	atomic_long_inc(&c->reclaimed_journal_buckets);
 
-		ja->cur_idx = next;
-		k->ptr[n++] = MAKE_PTR(0,
-				  bucket_to_sector(c, ca->sb.d[ja->cur_idx]),
-				  ca->sb.nr_this_dev);
-		atomic_long_inc(&c->reclaimed_journal_buckets);
-	}
+	bkey_init(k);
+	SET_KEY_PTRS(k, 1);
+	c->journal.blocks_free = c->sb.bucket_size >> c->block_bits;
 
-	if (n) {
-		bkey_init(k);
-		SET_KEY_PTRS(k, n);
-		c->journal.blocks_free = c->sb.bucket_size >> c->block_bits;
-	}
 out:
 	if (!journal_full(&c->journal))
 		__closure_wake_up(&c->journal.wait);
@@ -750,7 +731,7 @@ static void journal_write_unlocked(struct closure *cl)
 	__releases(c->journal.lock)
 {
 	struct cache_set *c = container_of(cl, struct cache_set, journal.io);
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	struct journal_write *w = c->journal.cur;
 	struct bkey *k = &c->journal.key;
 	unsigned int i, sectors = set_blocks(w->data, block_bytes(c)) *
@@ -780,9 +761,7 @@ static void journal_write_unlocked(struct closure *cl)
 	bkey_copy(&w->data->btree_root, &c->root->key);
 	bkey_copy(&w->data->uuid_bucket, &c->uuid_bucket);
 
-	for_each_cache(ca, c, i)
-		w->data->prio_bucket[ca->sb.nr_this_dev] = ca->prio_buckets[0];
-
+	w->data->prio_bucket[ca->sb.nr_this_dev] = ca->prio_buckets[0];
 	w->data->magic		= jset_magic(&c->sb);
 	w->data->version	= BCACHE_JSET_VERSION;
 	w->data->last_seq	= last_seq(&c->journal);
diff --git a/drivers/md/bcache/movinggc.c b/drivers/md/bcache/movinggc.c
index 5872d6470470..b9c3d27ec093 100644
--- a/drivers/md/bcache/movinggc.c
+++ b/drivers/md/bcache/movinggc.c
@@ -196,50 +196,48 @@ static unsigned int bucket_heap_top(struct cache *ca)
 
 void bch_moving_gc(struct cache_set *c)
 {
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	struct bucket *b;
-	unsigned int i;
+	unsigned long sectors_to_move, reserve_sectors;
 
 	if (!c->copy_gc_enabled)
 		return;
 
 	mutex_lock(&c->bucket_lock);
 
-	for_each_cache(ca, c, i) {
-		unsigned long sectors_to_move = 0;
-		unsigned long reserve_sectors = ca->sb.bucket_size *
+	sectors_to_move = 0;
+	reserve_sectors = ca->sb.bucket_size *
 			     fifo_used(&ca->free[RESERVE_MOVINGGC]);
 
-		ca->heap.used = 0;
-
-		for_each_bucket(b, ca) {
-			if (GC_MARK(b) == GC_MARK_METADATA ||
-			    !GC_SECTORS_USED(b) ||
-			    GC_SECTORS_USED(b) == ca->sb.bucket_size ||
-			    atomic_read(&b->pin))
-				continue;
-
-			if (!heap_full(&ca->heap)) {
-				sectors_to_move += GC_SECTORS_USED(b);
-				heap_add(&ca->heap, b, bucket_cmp);
-			} else if (bucket_cmp(b, heap_peek(&ca->heap))) {
-				sectors_to_move -= bucket_heap_top(ca);
-				sectors_to_move += GC_SECTORS_USED(b);
-
-				ca->heap.data[0] = b;
-				heap_sift(&ca->heap, 0, bucket_cmp);
-			}
-		}
+	ca->heap.used = 0;
+
+	for_each_bucket(b, ca) {
+		if (GC_MARK(b) == GC_MARK_METADATA ||
+		    !GC_SECTORS_USED(b) ||
+		    GC_SECTORS_USED(b) == ca->sb.bucket_size ||
+		    atomic_read(&b->pin))
+			continue;
 
-		while (sectors_to_move > reserve_sectors) {
-			heap_pop(&ca->heap, b, bucket_cmp);
-			sectors_to_move -= GC_SECTORS_USED(b);
+		if (!heap_full(&ca->heap)) {
+			sectors_to_move += GC_SECTORS_USED(b);
+			heap_add(&ca->heap, b, bucket_cmp);
+		} else if (bucket_cmp(b, heap_peek(&ca->heap))) {
+			sectors_to_move -= bucket_heap_top(ca);
+			sectors_to_move += GC_SECTORS_USED(b);
+
+			ca->heap.data[0] = b;
+			heap_sift(&ca->heap, 0, bucket_cmp);
 		}
+	}
 
-		while (heap_pop(&ca->heap, b, bucket_cmp))
-			SET_GC_MOVE(b, 1);
+	while (sectors_to_move > reserve_sectors) {
+		heap_pop(&ca->heap, b, bucket_cmp);
+		sectors_to_move -= GC_SECTORS_USED(b);
 	}
 
+	while (heap_pop(&ca->heap, b, bucket_cmp))
+		SET_GC_MOVE(b, 1);
+
 	mutex_unlock(&c->bucket_lock);
 
 	c->moving_gc_keys.last_scanned = ZERO_KEY;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index e9ccfa17beb8..2521be9380d6 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -343,8 +343,9 @@ static void bcache_write_super_unlock(struct closure *cl)
 void bcache_write_super(struct cache_set *c)
 {
 	struct closure *cl = &c->sb_write;
-	struct cache *ca;
-	unsigned int i, version = BCACHE_SB_VERSION_CDEV_WITH_UUID;
+	struct cache *ca = c->cache;
+	struct bio *bio = &ca->sb_bio;
+	unsigned int version = BCACHE_SB_VERSION_CDEV_WITH_UUID;
 
 	down(&c->sb_write_mutex);
 	closure_init(cl, &c->cl);
@@ -354,23 +355,19 @@ void bcache_write_super(struct cache_set *c)
 	if (c->sb.version > version)
 		version = c->sb.version;
 
-	for_each_cache(ca, c, i) {
-		struct bio *bio = &ca->sb_bio;
-
-		ca->sb.version		= version;
-		ca->sb.seq		= c->sb.seq;
-		ca->sb.last_mount	= c->sb.last_mount;
+	ca->sb.version		= version;
+	ca->sb.seq		= c->sb.seq;
+	ca->sb.last_mount	= c->sb.last_mount;
 
-		SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
+	SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
 
-		bio_init(bio, ca->sb_bv, 1);
-		bio_set_dev(bio, ca->bdev);
-		bio->bi_end_io	= write_super_endio;
-		bio->bi_private = ca;
+	bio_init(bio, ca->sb_bv, 1);
+	bio_set_dev(bio, ca->bdev);
+	bio->bi_end_io	= write_super_endio;
+	bio->bi_private = ca;
 
-		closure_get(cl);
-		__write_super(&ca->sb, ca->sb_disk, bio);
-	}
+	closure_get(cl);
+	__write_super(&ca->sb, ca->sb_disk, bio);
 
 	closure_return_with_destructor(cl, bcache_write_super_unlock);
 }
@@ -772,26 +769,22 @@ static void bcache_device_unlink(struct bcache_device *d)
 	lockdep_assert_held(&bch_register_lock);
 
 	if (d->c && !test_and_set_bit(BCACHE_DEV_UNLINK_DONE, &d->flags)) {
-		unsigned int i;
-		struct cache *ca;
+		struct cache *ca = d->c->cache;
 
 		sysfs_remove_link(&d->c->kobj, d->name);
 		sysfs_remove_link(&d->kobj, "cache");
 
-		for_each_cache(ca, d->c, i)
-			bd_unlink_disk_holder(ca->bdev, d->disk);
+		bd_unlink_disk_holder(ca->bdev, d->disk);
 	}
 }
 
 static void bcache_device_link(struct bcache_device *d, struct cache_set *c,
 			       const char *name)
 {
-	unsigned int i;
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	int ret;
 
-	for_each_cache(ca, d->c, i)
-		bd_link_disk_holder(ca->bdev, d->disk);
+	bd_link_disk_holder(ca->bdev, d->disk);
 
 	snprintf(d->name, BCACHEDEVNAME_SIZE,
 		 "%s%u", name, d->id);
@@ -1663,7 +1656,6 @@ static void cache_set_free(struct closure *cl)
 {
 	struct cache_set *c = container_of(cl, struct cache_set, cl);
 	struct cache *ca;
-	unsigned int i;
 
 	debugfs_remove(c->debug);
 
@@ -1672,12 +1664,12 @@ static void cache_set_free(struct closure *cl)
 	bch_journal_free(c);
 
 	mutex_lock(&bch_register_lock);
-	for_each_cache(ca, c, i)
-		if (ca) {
-			ca->set = NULL;
-			c->cache = NULL;
-			kobject_put(&ca->kobj);
-		}
+	ca = c->cache;
+	if (ca) {
+		ca->set = NULL;
+		c->cache = NULL;
+		kobject_put(&ca->kobj);
+	}
 
 	bch_bset_sort_state_free(&c->sort);
 	free_pages((unsigned long) c->uuids, ilog2(meta_bucket_pages(&c->sb)));
@@ -1703,9 +1695,8 @@ static void cache_set_free(struct closure *cl)
 static void cache_set_flush(struct closure *cl)
 {
 	struct cache_set *c = container_of(cl, struct cache_set, caching);
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	struct btree *b;
-	unsigned int i;
 
 	bch_cache_accounting_destroy(&c->accounting);
 
@@ -1730,9 +1721,8 @@ static void cache_set_flush(struct closure *cl)
 			mutex_unlock(&b->write_lock);
 		}
 
-	for_each_cache(ca, c, i)
-		if (ca->alloc_thread)
-			kthread_stop(ca->alloc_thread);
+	if (ca->alloc_thread)
+		kthread_stop(ca->alloc_thread);
 
 	if (c->journal.cur) {
 		cancel_delayed_work_sync(&c->journal.work);
@@ -1973,16 +1963,14 @@ static int run_cache_set(struct cache_set *c)
 {
 	const char *err = "cannot allocate memory";
 	struct cached_dev *dc, *t;
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	struct closure cl;
-	unsigned int i;
 	LIST_HEAD(journal);
 	struct journal_replay *l;
 
 	closure_init_stack(&cl);
 
-	for_each_cache(ca, c, i)
-		c->nbuckets += ca->sb.nbuckets;
+	c->nbuckets = ca->sb.nbuckets;
 	set_gc_sectors(c);
 
 	if (CACHE_SYNC(&c->sb)) {
@@ -2002,10 +1990,8 @@ static int run_cache_set(struct cache_set *c)
 		j = &list_entry(journal.prev, struct journal_replay, list)->j;
 
 		err = "IO error reading priorities";
-		for_each_cache(ca, c, i) {
-			if (prio_read(ca, j->prio_bucket[ca->sb.nr_this_dev]))
-				goto err;
-		}
+		if (prio_read(ca, j->prio_bucket[ca->sb.nr_this_dev]))
+			goto err;
 
 		/*
 		 * If prio_read() fails it'll call cache_set_error and we'll
@@ -2049,9 +2035,8 @@ static int run_cache_set(struct cache_set *c)
 		bch_journal_next(&c->journal);
 
 		err = "error starting allocator thread";
-		for_each_cache(ca, c, i)
-			if (bch_cache_allocator_start(ca))
-				goto err;
+		if (bch_cache_allocator_start(ca))
+			goto err;
 
 		/*
 		 * First place it's safe to allocate: btree_check() and
@@ -2070,28 +2055,23 @@ static int run_cache_set(struct cache_set *c)
 		if (bch_journal_replay(c, &journal))
 			goto err;
 	} else {
-		pr_notice("invalidating existing data\n");
+		unsigned int j;
 
-		for_each_cache(ca, c, i) {
-			unsigned int j;
-
-			ca->sb.keys = clamp_t(int, ca->sb.nbuckets >> 7,
-					      2, SB_JOURNAL_BUCKETS);
+		pr_notice("invalidating existing data\n");
+		ca->sb.keys = clamp_t(int, ca->sb.nbuckets >> 7,
+					2, SB_JOURNAL_BUCKETS);
 
-			for (j = 0; j < ca->sb.keys; j++)
-				ca->sb.d[j] = ca->sb.first_bucket + j;
-		}
+		for (j = 0; j < ca->sb.keys; j++)
+			ca->sb.d[j] = ca->sb.first_bucket + j;
 
 		bch_initial_gc_finish(c);
 
 		err = "error starting allocator thread";
-		for_each_cache(ca, c, i)
-			if (bch_cache_allocator_start(ca))
-				goto err;
+		if (bch_cache_allocator_start(ca))
+			goto err;
 
 		mutex_lock(&c->bucket_lock);
-		for_each_cache(ca, c, i)
-			bch_prio_write(ca, true);
+		bch_prio_write(ca, true);
 		mutex_unlock(&c->bucket_lock);
 
 		err = "cannot allocate new UUID bucket";
@@ -2467,13 +2447,13 @@ static bool bch_is_open_backing(struct block_device *bdev)
 static bool bch_is_open_cache(struct block_device *bdev)
 {
 	struct cache_set *c, *tc;
-	struct cache *ca;
-	unsigned int i;
 
-	list_for_each_entry_safe(c, tc, &bch_cache_sets, list)
-		for_each_cache(ca, c, i)
-			if (ca->bdev == bdev)
-				return true;
+	list_for_each_entry_safe(c, tc, &bch_cache_sets, list) {
+		struct cache *ca = c->cache;
+		if (ca->bdev == bdev)
+			return true;
+	}
+
 	return false;
 }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 04/14] bcache: add set_uuid in struct cache_set
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (2 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 03/14] bcache: remove for_each_cache() Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:15   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 05/14] bcache: only use block_bytes() on struct cache Coly Li
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

This patch adds a separate set_uuid[16] in struct cache_set to store
the UUID of the cache set. This is a preparation for removing the
embedded struct cache_sb from struct cache_set.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h    |  1 +
 drivers/md/bcache/debug.c     |  2 +-
 drivers/md/bcache/super.c     | 24 ++++++++++++------------
 include/trace/events/bcache.h |  4 ++--
 4 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 7ffe6b2d179b..94a62acac4fc 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -668,6 +668,7 @@ struct cache_set {
 	struct mutex		verify_lock;
 #endif
 
+	uint8_t			set_uuid[16];
 	unsigned int		nr_uuids;
 	struct uuid_entry	*uuids;
 	BKEY_PADDED(uuid_bucket);
diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index 336f43910383..0ccc1b0baa42 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -238,7 +238,7 @@ void bch_debug_init_cache_set(struct cache_set *c)
 	if (!IS_ERR_OR_NULL(bcache_debug)) {
 		char name[50];
 
-		snprintf(name, 50, "bcache-%pU", c->sb.set_uuid);
+		snprintf(name, 50, "bcache-%pU", c->set_uuid);
 		c->debug = debugfs_create_file(name, 0400, bcache_debug, c,
 					       &cache_set_debug_ops);
 	}
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 2521be9380d6..77f5673adbbc 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1189,8 +1189,8 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
 	struct cached_dev *exist_dc, *t;
 	int ret = 0;
 
-	if ((set_uuid && memcmp(set_uuid, c->sb.set_uuid, 16)) ||
-	    (!set_uuid && memcmp(dc->sb.set_uuid, c->sb.set_uuid, 16)))
+	if ((set_uuid && memcmp(set_uuid, c->set_uuid, 16)) ||
+	    (!set_uuid && memcmp(dc->sb.set_uuid, c->set_uuid, 16)))
 		return -ENOENT;
 
 	if (dc->disk.c) {
@@ -1262,7 +1262,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
 		u->first_reg = u->last_reg = rtime;
 		bch_uuid_write(c);
 
-		memcpy(dc->sb.set_uuid, c->sb.set_uuid, 16);
+		memcpy(dc->sb.set_uuid, c->set_uuid, 16);
 		SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN);
 
 		bch_write_bdev_super(dc, &cl);
@@ -1324,7 +1324,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
 	pr_info("Caching %s as %s on set %pU\n",
 		dc->backing_dev_name,
 		dc->disk.disk->disk_name,
-		dc->disk.c->sb.set_uuid);
+		dc->disk.c->set_uuid);
 	return 0;
 }
 
@@ -1632,7 +1632,7 @@ bool bch_cache_set_error(struct cache_set *c, const char *fmt, ...)
 	vaf.va = &args;
 
 	pr_err("error on %pU: %pV, disabling caching\n",
-	       c->sb.set_uuid, &vaf);
+	       c->set_uuid, &vaf);
 
 	va_end(args);
 
@@ -1685,7 +1685,7 @@ static void cache_set_free(struct closure *cl)
 	list_del(&c->list);
 	mutex_unlock(&bch_register_lock);
 
-	pr_info("Cache set %pU unregistered\n", c->sb.set_uuid);
+	pr_info("Cache set %pU unregistered\n", c->set_uuid);
 	wake_up(&unregister_wait);
 
 	closure_debug_destroy(&c->cl);
@@ -1755,7 +1755,7 @@ static void conditional_stop_bcache_device(struct cache_set *c,
 {
 	if (dc->stop_when_cache_set_failed == BCH_CACHED_DEV_STOP_ALWAYS) {
 		pr_warn("stop_when_cache_set_failed of %s is \"always\", stop it for failed cache set %pU.\n",
-			d->disk->disk_name, c->sb.set_uuid);
+			d->disk->disk_name, c->set_uuid);
 		bcache_device_stop(d);
 	} else if (atomic_read(&dc->has_dirty)) {
 		/*
@@ -1862,7 +1862,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 
 	bch_cache_accounting_init(&c->accounting, &c->cl);
 
-	memcpy(c->sb.set_uuid, sb->set_uuid, 16);
+	memcpy(c->set_uuid, sb->set_uuid, 16);
 	c->sb.block_size	= sb->block_size;
 	c->sb.bucket_size	= sb->bucket_size;
 	c->sb.nr_in_set		= sb->nr_in_set;
@@ -2145,7 +2145,7 @@ static const char *register_cache_set(struct cache *ca)
 	struct cache_set *c;
 
 	list_for_each_entry(c, &bch_cache_sets, list)
-		if (!memcmp(c->sb.set_uuid, ca->sb.set_uuid, 16)) {
+		if (!memcmp(c->set_uuid, ca->sb.set_uuid, 16)) {
 			if (c->cache)
 				return "duplicate cache set member";
 
@@ -2163,7 +2163,7 @@ static const char *register_cache_set(struct cache *ca)
 		return err;
 
 	err = "error creating kobject";
-	if (kobject_add(&c->kobj, bcache_kobj, "%pU", c->sb.set_uuid) ||
+	if (kobject_add(&c->kobj, bcache_kobj, "%pU", c->set_uuid) ||
 	    kobject_add(&c->internal, &c->kobj, "internal"))
 		goto err;
 
@@ -2188,7 +2188,7 @@ static const char *register_cache_set(struct cache *ca)
 	 */
 	if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
 		c->sb.version		= ca->sb.version;
-		memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
+		memcpy(c->set_uuid, ca->sb.set_uuid, 16);
 		c->sb.flags             = ca->sb.flags;
 		c->sb.seq		= ca->sb.seq;
 		pr_debug("set version = %llu\n", c->sb.version);
@@ -2697,7 +2697,7 @@ static ssize_t bch_pending_bdevs_cleanup(struct kobject *k,
 	list_for_each_entry_safe(pdev, tpdev, &pending_devs, list) {
 		list_for_each_entry_safe(c, tc, &bch_cache_sets, list) {
 			char *pdev_set_uuid = pdev->dc->sb.set_uuid;
-			char *set_uuid = c->sb.uuid;
+			char *set_uuid = c->set_uuid;
 
 			if (!memcmp(pdev_set_uuid, set_uuid, 16)) {
 				list_del(&pdev->list);
diff --git a/include/trace/events/bcache.h b/include/trace/events/bcache.h
index 0bddea663b3b..e41c611d6d3b 100644
--- a/include/trace/events/bcache.h
+++ b/include/trace/events/bcache.h
@@ -164,7 +164,7 @@ TRACE_EVENT(bcache_write,
 	),
 
 	TP_fast_assign(
-		memcpy(__entry->uuid, c->sb.set_uuid, 16);
+		memcpy(__entry->uuid, c->set_uuid, 16);
 		__entry->inode		= inode;
 		__entry->sector		= bio->bi_iter.bi_sector;
 		__entry->nr_sector	= bio->bi_iter.bi_size >> 9;
@@ -200,7 +200,7 @@ DECLARE_EVENT_CLASS(cache_set,
 	),
 
 	TP_fast_assign(
-		memcpy(__entry->uuid, c->sb.set_uuid, 16);
+		memcpy(__entry->uuid, c->set_uuid, 16);
 	),
 
 	TP_printk("%pU", __entry->uuid)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 05/14] bcache: only use block_bytes() on struct cache
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (3 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 04/14] bcache: add set_uuid in struct cache_set Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:16   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 06/14] bcache: remove useless alloc_bucket_pages() Coly Li
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Because struct cache_set and struct cache both contain a struct
cache_sb, the macro block_bytes() can currently be used on either of
them. Once the embedded struct cache_sb is removed from struct
cache_set, this macro won't be usable on struct cache_set anymore.

This patch unifies all block_bytes() usage to operate only on struct
cache. This is one of the preparations for removing the embedded
struct cache_sb from struct cache_set.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h  |  2 +-
 drivers/md/bcache/btree.c   | 24 ++++++++++++------------
 drivers/md/bcache/debug.c   |  8 ++++----
 drivers/md/bcache/journal.c |  8 ++++----
 drivers/md/bcache/request.c |  2 +-
 drivers/md/bcache/super.c   |  2 +-
 drivers/md/bcache/sysfs.c   |  2 +-
 7 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 94a62acac4fc..29bec61cafbb 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -759,7 +759,7 @@ struct bbio {
 
 #define bucket_pages(c)		((c)->sb.bucket_size / PAGE_SECTORS)
 #define bucket_bytes(c)		((c)->sb.bucket_size << 9)
-#define block_bytes(c)		((c)->sb.block_size << 9)
+#define block_bytes(ca)		((ca)->sb.block_size << 9)
 
 static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
 {
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 0817ad510d9f..c91b4d58a5b3 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -104,7 +104,7 @@
 
 static inline struct bset *write_block(struct btree *b)
 {
-	return ((void *) btree_bset_first(b)) + b->written * block_bytes(b->c);
+	return ((void *) btree_bset_first(b)) + b->written * block_bytes(b->c->cache);
 }
 
 static void bch_btree_init_next(struct btree *b)
@@ -173,7 +173,7 @@ void bch_btree_node_read_done(struct btree *b)
 			goto err;
 
 		err = "bad btree header";
-		if (b->written + set_blocks(i, block_bytes(b->c)) >
+		if (b->written + set_blocks(i, block_bytes(b->c->cache)) >
 		    btree_blocks(b))
 			goto err;
 
@@ -199,13 +199,13 @@ void bch_btree_node_read_done(struct btree *b)
 
 		bch_btree_iter_push(iter, i->start, bset_bkey_last(i));
 
-		b->written += set_blocks(i, block_bytes(b->c));
+		b->written += set_blocks(i, block_bytes(b->c->cache));
 	}
 
 	err = "corrupted btree";
 	for (i = write_block(b);
 	     bset_sector_offset(&b->keys, i) < KEY_SIZE(&b->key);
-	     i = ((void *) i) + block_bytes(b->c))
+	     i = ((void *) i) + block_bytes(b->c->cache))
 		if (i->seq == b->keys.set[0].data->seq)
 			goto err;
 
@@ -347,7 +347,7 @@ static void do_btree_node_write(struct btree *b)
 
 	b->bio->bi_end_io	= btree_node_write_endio;
 	b->bio->bi_private	= cl;
-	b->bio->bi_iter.bi_size	= roundup(set_bytes(i), block_bytes(b->c));
+	b->bio->bi_iter.bi_size	= roundup(set_bytes(i), block_bytes(b->c->cache));
 	b->bio->bi_opf		= REQ_OP_WRITE | REQ_META | REQ_FUA;
 	bch_bio_map(b->bio, i);
 
@@ -423,10 +423,10 @@ void __bch_btree_node_write(struct btree *b, struct closure *parent)
 
 	do_btree_node_write(b);
 
-	atomic_long_add(set_blocks(i, block_bytes(b->c)) * b->c->sb.block_size,
+	atomic_long_add(set_blocks(i, block_bytes(b->c->cache)) * b->c->sb.block_size,
 			&PTR_CACHE(b->c, &b->key, 0)->btree_sectors_written);
 
-	b->written += set_blocks(i, block_bytes(b->c));
+	b->written += set_blocks(i, block_bytes(b->c->cache));
 }
 
 void bch_btree_node_write(struct btree *b, struct closure *parent)
@@ -1344,7 +1344,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 
 	if (nodes < 2 ||
 	    __set_blocks(b->keys.set[0].data, keys,
-			 block_bytes(b->c)) > blocks * (nodes - 1))
+			 block_bytes(b->c->cache)) > blocks * (nodes - 1))
 		return 0;
 
 	for (i = 0; i < nodes; i++) {
@@ -1378,7 +1378,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 			     k = bkey_next(k)) {
 				if (__set_blocks(n1, n1->keys + keys +
 						 bkey_u64s(k),
-						 block_bytes(b->c)) > blocks)
+						 block_bytes(b->c->cache)) > blocks)
 					break;
 
 				last = k;
@@ -1394,7 +1394,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 			 * though)
 			 */
 			if (__set_blocks(n1, n1->keys + n2->keys,
-					 block_bytes(b->c)) >
+					 block_bytes(b->c->cache)) >
 			    btree_blocks(new_nodes[i]))
 				goto out_unlock_nocoalesce;
 
@@ -1403,7 +1403,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 			last = &r->b->key;
 		}
 
-		BUG_ON(__set_blocks(n1, n1->keys + keys, block_bytes(b->c)) >
+		BUG_ON(__set_blocks(n1, n1->keys + keys, block_bytes(b->c->cache)) >
 		       btree_blocks(new_nodes[i]));
 
 		if (last)
@@ -2210,7 +2210,7 @@ static int btree_split(struct btree *b, struct btree_op *op,
 		goto err;
 
 	split = set_blocks(btree_bset_first(n1),
-			   block_bytes(n1->c)) > (btree_blocks(b) * 4) / 5;
+			   block_bytes(n1->c->cache)) > (btree_blocks(b) * 4) / 5;
 
 	if (split) {
 		unsigned int keys = 0;
diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index 0ccc1b0baa42..b00fd08d696b 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -25,8 +25,8 @@ struct dentry *bcache_debug;
 	for (i = (start);						\
 	     (void *) i < (void *) (start) + (KEY_SIZE(&b->key) << 9) &&\
 	     i->seq == (start)->seq;					\
-	     i = (void *) i + set_blocks(i, block_bytes(b->c)) *	\
-		 block_bytes(b->c))
+	     i = (void *) i + set_blocks(i, block_bytes(b->c->cache)) *	\
+		 block_bytes(b->c->cache))
 
 void bch_btree_verify(struct btree *b)
 {
@@ -82,14 +82,14 @@ void bch_btree_verify(struct btree *b)
 
 		for_each_written_bset(b, ondisk, i) {
 			unsigned int block = ((void *) i - (void *) ondisk) /
-				block_bytes(b->c);
+				block_bytes(b->c->cache);
 
 			pr_err("*** on disk block %u:\n", block);
 			bch_dump_bset(&b->keys, i, block);
 		}
 
 		pr_err("*** block %zu not written\n",
-		       ((void *) i - (void *) ondisk) / block_bytes(b->c));
+		       ((void *) i - (void *) ondisk) / block_bytes(b->c->cache));
 
 		for (j = 0; j < inmemory->keys; j++)
 			if (inmemory->d[j] != sorted->d[j])
diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
index 027d0f8c4daf..ccd5de0ab0fe 100644
--- a/drivers/md/bcache/journal.c
+++ b/drivers/md/bcache/journal.c
@@ -98,7 +98,7 @@ reread:		left = ca->sb.bucket_size - offset;
 				return ret;
 			}
 
-			blocks = set_blocks(j, block_bytes(ca->set));
+			blocks = set_blocks(j, block_bytes(ca));
 
 			/*
 			 * Nodes in 'list' are in linear increasing order of
@@ -734,7 +734,7 @@ static void journal_write_unlocked(struct closure *cl)
 	struct cache *ca = c->cache;
 	struct journal_write *w = c->journal.cur;
 	struct bkey *k = &c->journal.key;
-	unsigned int i, sectors = set_blocks(w->data, block_bytes(c)) *
+	unsigned int i, sectors = set_blocks(w->data, block_bytes(ca)) *
 		c->sb.block_size;
 
 	struct bio *bio;
@@ -754,7 +754,7 @@ static void journal_write_unlocked(struct closure *cl)
 		return;
 	}
 
-	c->journal.blocks_free -= set_blocks(w->data, block_bytes(c));
+	c->journal.blocks_free -= set_blocks(w->data, block_bytes(ca));
 
 	w->data->btree_level = c->root->level;
 
@@ -847,7 +847,7 @@ static struct journal_write *journal_wait_for_write(struct cache_set *c,
 		struct journal_write *w = c->journal.cur;
 
 		sectors = __set_blocks(w->data, w->data->keys + nkeys,
-				       block_bytes(c)) * c->sb.block_size;
+				       block_bytes(c->cache)) * c->sb.block_size;
 
 		if (sectors <= min_t(size_t,
 				     c->journal.blocks_free * c->sb.block_size,
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index c7cadaafa947..02408fdbf5bb 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -99,7 +99,7 @@ static int bch_keylist_realloc(struct keylist *l, unsigned int u64s,
 	 * bch_data_insert_keys() will insert the keys created so far
 	 * and finish the rest when the keylist is empty.
 	 */
-	if (newsize * sizeof(uint64_t) > block_bytes(c) - sizeof(struct jset))
+	if (newsize * sizeof(uint64_t) > block_bytes(c->cache) - sizeof(struct jset))
 		return -ENOMEM;
 
 	return __bch_keylist_realloc(l, u64s);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 77f5673adbbc..5636648902e3 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1528,7 +1528,7 @@ static int flash_dev_run(struct cache_set *c, struct uuid_entry *u)
 
 	kobject_init(&d->kobj, &bch_flash_dev_ktype);
 
-	if (bcache_device_init(d, block_bytes(c), u->sectors,
+	if (bcache_device_init(d, block_bytes(c->cache), u->sectors,
 			NULL, &bcache_flash_ops))
 		goto err;
 
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index ac06c0bc3c0a..b9f524ab5cc8 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -714,7 +714,7 @@ SHOW(__bch_cache_set)
 	sysfs_print(synchronous,		CACHE_SYNC(&c->sb));
 	sysfs_print(journal_delay_ms,		c->journal_delay_ms);
 	sysfs_hprint(bucket_size,		bucket_bytes(c));
-	sysfs_hprint(block_size,		block_bytes(c));
+	sysfs_hprint(block_size,		block_bytes(c->cache));
 	sysfs_print(tree_depth,			c->root->level);
 	sysfs_print(root_usage_percent,		bch_root_usage(c));
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 06/14] bcache: remove useless alloc_bucket_pages()
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (4 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 05/14] bcache: only use block_bytes() on struct cache Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:17   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 07/14] bcache: remove useless bucket_pages() Coly Li
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Now that no one uses alloc_bucket_pages() anymore, remove it from
super.c.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/super.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 5636648902e3..748b08ab4f11 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1832,9 +1832,6 @@ void bch_cache_set_unregister(struct cache_set *c)
 	bch_cache_set_stop(c);
 }
 
-#define alloc_bucket_pages(gfp, c)			\
-	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c))))
-
 #define alloc_meta_bucket_pages(gfp, sb)		\
 	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(meta_bucket_pages(sb))))
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 07/14] bcache: remove useless bucket_pages()
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (5 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 06/14] bcache: remove useless alloc_bucket_pages() Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:18   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 08/14] bcache: only use bucket_bytes() on struct cache Coly Li
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

alloc_bucket_pages() was the only user of bucket_pages(). Now that
alloc_bucket_pages() has been removed from the bcache code, it is safe
to remove the unused macro bucket_pages() as well.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 29bec61cafbb..48a2585b6bbb 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -757,7 +757,6 @@ struct bbio {
 #define btree_default_blocks(c)						\
 	((unsigned int) ((PAGE_SECTORS * (c)->btree_pages) >> (c)->block_bits))
 
-#define bucket_pages(c)		((c)->sb.bucket_size / PAGE_SECTORS)
 #define bucket_bytes(c)		((c)->sb.bucket_size << 9)
 #define block_bytes(ca)		((ca)->sb.block_size << 9)
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 08/14] bcache: only use bucket_bytes() on struct cache
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (6 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 07/14] bcache: remove useless bucket_pages() Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:19   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 09/14] bcache: avoid data copy between cache_set->sb and cache->sb Coly Li
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Because struct cache_set and struct cache both contain a struct
cache_sb, the macro bucket_bytes() is currently used on both of them.
Once the embedded struct cache_sb is removed from struct cache_set,
this macro won't be usable on struct cache_set anymore.

This patch unifies all bucket_bytes() usage to operate only on struct
cache. This is one of the preparations for removing the embedded
struct cache_sb from struct cache_set.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h | 2 +-
 drivers/md/bcache/sysfs.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 48a2585b6bbb..94d4baf4c405 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -757,7 +757,7 @@ struct bbio {
 #define btree_default_blocks(c)						\
 	((unsigned int) ((PAGE_SECTORS * (c)->btree_pages) >> (c)->block_bits))
 
-#define bucket_bytes(c)		((c)->sb.bucket_size << 9)
+#define bucket_bytes(ca)	((ca)->sb.bucket_size << 9)
 #define block_bytes(ca)		((ca)->sb.block_size << 9)
 
 static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index b9f524ab5cc8..4bfe98faadcc 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -713,7 +713,7 @@ SHOW(__bch_cache_set)
 
 	sysfs_print(synchronous,		CACHE_SYNC(&c->sb));
 	sysfs_print(journal_delay_ms,		c->journal_delay_ms);
-	sysfs_hprint(bucket_size,		bucket_bytes(c));
+	sysfs_hprint(bucket_size,		bucket_bytes(c->cache));
 	sysfs_hprint(block_size,		block_bytes(c->cache));
 	sysfs_print(tree_depth,			c->root->level);
 	sysfs_print(root_usage_percent,		bch_root_usage(c));
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 09/14] bcache: avoid data copy between cache_set->sb and cache->sb
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (7 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 08/14] bcache: only use bucket_bytes() on struct cache Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:22   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 10/14] bcache: don't check seq numbers in register_cache_set() Coly Li
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

The struct cache_sb embedded in struct cache_set is only partially
used and is not a real copy of the cache's in-memory super block. When
removing the embedded cache_set->sb, it is unnecessary to copy data
between these two in-memory super blocks (cache_set->sb and cache->sb);
it is sufficient to just use cache->sb.

This patch removes the data copy between these two in-memory super
blocks in bch_cache_set_alloc() and bcache_write_super(). In the
future, except for set_uuid, the cache's super block will be referenced
directly by the cache set, with no copying at all.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/super.c | 22 +++-------------------
 1 file changed, 3 insertions(+), 19 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 748b08ab4f11..05c5a7e867bb 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -350,16 +350,10 @@ void bcache_write_super(struct cache_set *c)
 	down(&c->sb_write_mutex);
 	closure_init(cl, &c->cl);
 
-	c->sb.seq++;
+	ca->sb.seq++;
 
-	if (c->sb.version > version)
-		version = c->sb.version;
-
-	ca->sb.version		= version;
-	ca->sb.seq		= c->sb.seq;
-	ca->sb.last_mount	= c->sb.last_mount;
-
-	SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
+	if (ca->sb.version < version)
+		ca->sb.version = version;
 
 	bio_init(bio, ca->sb_bv, 1);
 	bio_set_dev(bio, ca->bdev);
@@ -1860,16 +1854,6 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 	bch_cache_accounting_init(&c->accounting, &c->cl);
 
 	memcpy(c->set_uuid, sb->set_uuid, 16);
-	c->sb.block_size	= sb->block_size;
-	c->sb.bucket_size	= sb->bucket_size;
-	c->sb.nr_in_set		= sb->nr_in_set;
-	c->sb.last_mount	= sb->last_mount;
-	c->sb.version		= sb->version;
-	if (c->sb.version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES) {
-		c->sb.feature_compat = sb->feature_compat;
-		c->sb.feature_ro_compat = sb->feature_ro_compat;
-		c->sb.feature_incompat = sb->feature_incompat;
-	}
 
 	c->bucket_bits		= ilog2(sb->bucket_size);
 	c->block_bits		= ilog2(sb->block_size);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 10/14] bcache: don't check seq numbers in register_cache_set()
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (8 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 09/14] bcache: avoid data copy between cache_set->sb and cache->sb Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:23   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 11/14] bcache: remove can_attach_cache() Coly Li
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

In order to update the partial super block of the cache set, the seq
numbers of the cache and the cache set are compared in
register_cache_set(). If the cache's seq number is larger than the cache
set's, the cache set must update its partial super block from the
cache's super block. This check is unnecessary once the embedded struct
cache_sb is removed from struct cache_set.

This patch removes the seq number check from register_cache_set():
later in this series there will be no such partial super block in struct
cache_set, and the cache set will directly reference the in-memory super
block of struct cache. This is a preparation patch for removing the
embedded struct cache_sb from struct cache_set.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/super.c | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 05c5a7e867bb..4ba713d0d9b0 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2160,21 +2160,6 @@ static const char *register_cache_set(struct cache *ca)
 	    sysfs_create_link(&c->kobj, &ca->kobj, buf))
 		goto err;
 
-	/*
-	 * A special case is both ca->sb.seq and c->sb.seq are 0,
-	 * such condition happens on a new created cache device whose
-	 * super block is never flushed yet. In this case c->sb.version
-	 * and other members should be updated too, otherwise we will
-	 * have a mistaken super block version in cache set.
-	 */
-	if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
-		c->sb.version		= ca->sb.version;
-		memcpy(c->set_uuid, ca->sb.set_uuid, 16);
-		c->sb.flags             = ca->sb.flags;
-		c->sb.seq		= ca->sb.seq;
-		pr_debug("set version = %llu\n", c->sb.version);
-	}
-
 	kobject_get(&ca->kobj);
 	ca->set = c;
 	ca->set->cache = ca;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 11/14] bcache: remove can_attach_cache()
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (9 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 10/14] bcache: don't check seq numbers in register_cache_set() Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:24   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 12/14] bcache: check and set sync status on cache's in-memory super block Coly Li
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

After the embedded struct cache_sb is removed from struct cache_set,
the cache set will directly reference the in-memory super block of
struct cache. It is then unnecessary for can_attach_cache() to compare
block_size, bucket_size and nr_in_set against the identical in-memory
super block.

This is a preparation patch for later removing cache_set->sb from
struct cache_set.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/super.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 4ba713d0d9b0..a2bba78d78e6 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2112,13 +2112,6 @@ static int run_cache_set(struct cache_set *c)
 	return -EIO;
 }
 
-static bool can_attach_cache(struct cache *ca, struct cache_set *c)
-{
-	return ca->sb.block_size	== c->sb.block_size &&
-		ca->sb.bucket_size	== c->sb.bucket_size &&
-		ca->sb.nr_in_set	== c->sb.nr_in_set;
-}
-
 static const char *register_cache_set(struct cache *ca)
 {
 	char buf[12];
@@ -2130,9 +2123,6 @@ static const char *register_cache_set(struct cache *ca)
 			if (c->cache)
 				return "duplicate cache set member";
 
-			if (!can_attach_cache(ca, c))
-				return "cache sb does not match set";
-
 			if (!CACHE_SYNC(&ca->sb))
 				SET_CACHE_SYNC(&c->sb, false);
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 12/14] bcache: check and set sync status on cache's in-memory super block
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (10 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 11/14] bcache: remove can_attach_cache() Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:25   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 13/14] bcache: remove embedded struct cache_sb from struct cache_set Coly Li
  2020-08-15  4:10 ` [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h Coly Li
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Currently the cache's sync status is checked and set on the cache set's
in-memory partial super block. After the embedded struct cache_sb is
removed from struct cache_set and the cache's in-memory super block is
referenced directly, the sync status can be set and checked directly on
the cache's super block.

This patch checks and sets the cache sync status directly on the cache's
in-memory super block. This is a preparation for later removing the
embedded struct cache_sb from struct cache_set.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/alloc.c   | 2 +-
 drivers/md/bcache/journal.c | 2 +-
 drivers/md/bcache/super.c   | 7 ++-----
 drivers/md/bcache/sysfs.c   | 6 +++---
 4 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 1b8310992dd0..65fdbdeb5134 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -361,7 +361,7 @@ static int bch_allocator_thread(void *arg)
 		 * new stuff to them:
 		 */
 		allocator_wait(ca, !atomic_read(&ca->set->prio_blocked));
-		if (CACHE_SYNC(&ca->set->sb)) {
+		if (CACHE_SYNC(&ca->sb)) {
 			/*
 			 * This could deadlock if an allocation with a btree
 			 * node locked ever blocked - having the btree node
diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
index ccd5de0ab0fe..e2810668ede3 100644
--- a/drivers/md/bcache/journal.c
+++ b/drivers/md/bcache/journal.c
@@ -915,7 +915,7 @@ atomic_t *bch_journal(struct cache_set *c,
 	if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags)))
 		return NULL;
 
-	if (!CACHE_SYNC(&c->sb))
+	if (!CACHE_SYNC(&c->cache->sb))
 		return NULL;
 
 	w = journal_wait_for_write(c, bch_keylist_nkeys(keys));
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a2bba78d78e6..68563ee92393 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1954,7 +1954,7 @@ static int run_cache_set(struct cache_set *c)
 	c->nbuckets = ca->sb.nbuckets;
 	set_gc_sectors(c);
 
-	if (CACHE_SYNC(&c->sb)) {
+	if (CACHE_SYNC(&c->cache->sb)) {
 		struct bkey *k;
 		struct jset *j;
 
@@ -2077,7 +2077,7 @@ static int run_cache_set(struct cache_set *c)
 		 * everything is set up - fortunately journal entries won't be
 		 * written until the SET_CACHE_SYNC() here:
 		 */
-		SET_CACHE_SYNC(&c->sb, true);
+		SET_CACHE_SYNC(&c->cache->sb, true);
 
 		bch_journal_next(&c->journal);
 		bch_journal_meta(c, &cl);
@@ -2123,9 +2123,6 @@ static const char *register_cache_set(struct cache *ca)
 			if (c->cache)
 				return "duplicate cache set member";
 
-			if (!CACHE_SYNC(&ca->sb))
-				SET_CACHE_SYNC(&c->sb, false);
-
 			goto found;
 		}
 
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 4bfe98faadcc..554e3afc9b68 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -711,7 +711,7 @@ SHOW(__bch_cache_set)
 {
 	struct cache_set *c = container_of(kobj, struct cache_set, kobj);
 
-	sysfs_print(synchronous,		CACHE_SYNC(&c->sb));
+	sysfs_print(synchronous,		CACHE_SYNC(&c->cache->sb));
 	sysfs_print(journal_delay_ms,		c->journal_delay_ms);
 	sysfs_hprint(bucket_size,		bucket_bytes(c->cache));
 	sysfs_hprint(block_size,		block_bytes(c->cache));
@@ -812,8 +812,8 @@ STORE(__bch_cache_set)
 	if (attr == &sysfs_synchronous) {
 		bool sync = strtoul_or_return(buf);
 
-		if (sync != CACHE_SYNC(&c->sb)) {
-			SET_CACHE_SYNC(&c->sb, sync);
+		if (sync != CACHE_SYNC(&c->cache->sb)) {
+			SET_CACHE_SYNC(&c->cache->sb, sync);
 			bcache_write_super(c);
 		}
 	}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 13/14] bcache: remove embedded struct cache_sb from struct cache_set
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (11 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 12/14] bcache: check and set sync status on cache's in-memory super block Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:26   ` Hannes Reinecke
  2020-08-15  4:10 ` [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h Coly Li
  13 siblings, 1 reply; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Since the bcache code was merged into the mainline kernel, each cache
set has only one single cache in it. The multiple caches framework is
present but the code is far from complete. Considering that multiple
copies of cached data can also be stored on e.g. md raid1 devices, it is
indeed unnecessary to support multiple caches in one cache set.

The previous preparation patches fixed the dependencies for explicitly
making a cache set have only a single cache. Now we don't have to
maintain an embedded partial super block in struct cache_set; the
in-memory super block can be directly referenced from struct cache.

This patch removes the embedded struct cache_sb from struct cache_set,
and fixes all locations where the super block was referenced from this
removed copy by referencing the in-memory super block of struct cache.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/alloc.c     |  6 +++---
 drivers/md/bcache/bcache.h    |  4 +---
 drivers/md/bcache/btree.c     | 17 +++++++++--------
 drivers/md/bcache/btree.h     |  2 +-
 drivers/md/bcache/extents.c   |  6 +++---
 drivers/md/bcache/features.c  |  4 ++--
 drivers/md/bcache/io.c        |  2 +-
 drivers/md/bcache/journal.c   | 11 ++++++-----
 drivers/md/bcache/request.c   |  4 ++--
 drivers/md/bcache/super.c     | 19 +++++++++----------
 drivers/md/bcache/writeback.c |  2 +-
 11 files changed, 38 insertions(+), 39 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 65fdbdeb5134..8c371d5eef8e 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -87,7 +87,7 @@ void bch_rescale_priorities(struct cache_set *c, int sectors)
 {
 	struct cache *ca;
 	struct bucket *b;
-	unsigned long next = c->nbuckets * c->sb.bucket_size / 1024;
+	unsigned long next = c->nbuckets * c->cache->sb.bucket_size / 1024;
 	int r;
 
 	atomic_sub(sectors, &c->rescale);
@@ -583,7 +583,7 @@ static struct open_bucket *pick_data_bucket(struct cache_set *c,
 					   struct open_bucket, list);
 found:
 	if (!ret->sectors_free && KEY_PTRS(alloc)) {
-		ret->sectors_free = c->sb.bucket_size;
+		ret->sectors_free = c->cache->sb.bucket_size;
 		bkey_copy(&ret->key, alloc);
 		bkey_init(alloc);
 	}
@@ -677,7 +677,7 @@ bool bch_alloc_sectors(struct cache_set *c,
 				&PTR_CACHE(c, &b->key, i)->sectors_written);
 	}
 
-	if (b->sectors_free < c->sb.block_size)
+	if (b->sectors_free < c->cache->sb.block_size)
 		b->sectors_free = 0;
 
 	/*
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 94d4baf4c405..1d57f48307e6 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -517,8 +517,6 @@ struct cache_set {
 	atomic_t		idle_counter;
 	atomic_t		at_max_writeback_rate;
 
-	struct cache_sb		sb;
-
 	struct cache		*cache;
 
 	struct bcache_device	**devices;
@@ -799,7 +797,7 @@ static inline sector_t bucket_to_sector(struct cache_set *c, size_t b)
 
 static inline sector_t bucket_remainder(struct cache_set *c, sector_t s)
 {
-	return s & (c->sb.bucket_size - 1);
+	return s & (c->cache->sb.bucket_size - 1);
 }
 
 static inline struct cache *PTR_CACHE(struct cache_set *c,
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index c91b4d58a5b3..d09103cc7da5 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -117,7 +117,7 @@ static void bch_btree_init_next(struct btree *b)
 
 	if (b->written < btree_blocks(b))
 		bch_bset_init_next(&b->keys, write_block(b),
-				   bset_magic(&b->c->sb));
+				   bset_magic(&b->c->cache->sb));
 
 }
 
@@ -155,7 +155,7 @@ void bch_btree_node_read_done(struct btree *b)
 	 * See the comment arount cache_set->fill_iter.
 	 */
 	iter = mempool_alloc(&b->c->fill_iter, GFP_NOIO);
-	iter->size = b->c->sb.bucket_size / b->c->sb.block_size;
+	iter->size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size;
 	iter->used = 0;
 
 #ifdef CONFIG_BCACHE_DEBUG
@@ -178,7 +178,7 @@ void bch_btree_node_read_done(struct btree *b)
 			goto err;
 
 		err = "bad magic";
-		if (i->magic != bset_magic(&b->c->sb))
+		if (i->magic != bset_magic(&b->c->cache->sb))
 			goto err;
 
 		err = "bad checksum";
@@ -219,7 +219,7 @@ void bch_btree_node_read_done(struct btree *b)
 
 	if (b->written < btree_blocks(b))
 		bch_bset_init_next(&b->keys, write_block(b),
-				   bset_magic(&b->c->sb));
+				   bset_magic(&b->c->cache->sb));
 out:
 	mempool_free(iter, &b->c->fill_iter);
 	return;
@@ -423,7 +423,7 @@ void __bch_btree_node_write(struct btree *b, struct closure *parent)
 
 	do_btree_node_write(b);
 
-	atomic_long_add(set_blocks(i, block_bytes(b->c->cache)) * b->c->sb.block_size,
+	atomic_long_add(set_blocks(i, block_bytes(b->c->cache)) * b->c->cache->sb.block_size,
 			&PTR_CACHE(b->c, &b->key, 0)->btree_sectors_written);
 
 	b->written += set_blocks(i, block_bytes(b->c->cache));
@@ -738,7 +738,7 @@ void bch_btree_cache_free(struct cache_set *c)
 	if (c->verify_data)
 		list_move(&c->verify_data->list, &c->btree_cache);
 
-	free_pages((unsigned long) c->verify_ondisk, ilog2(meta_bucket_pages(&c->sb)));
+	free_pages((unsigned long) c->verify_ondisk, ilog2(meta_bucket_pages(&c->cache->sb)));
 #endif
 
 	list_splice(&c->btree_cache_freeable,
@@ -785,7 +785,8 @@ int bch_btree_cache_alloc(struct cache_set *c)
 	mutex_init(&c->verify_lock);
 
 	c->verify_ondisk = (void *)
-		__get_free_pages(GFP_KERNEL|__GFP_COMP, ilog2(meta_bucket_pages(&c->sb)));
+		__get_free_pages(GFP_KERNEL|__GFP_COMP,
+				 ilog2(meta_bucket_pages(&c->cache->sb)));
 	if (!c->verify_ondisk) {
 		/*
 		 * Don't worry about the mca_rereserve buckets
@@ -1108,7 +1109,7 @@ struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
 	}
 
 	b->parent = parent;
-	bch_bset_init_next(&b->keys, b->keys.set->data, bset_magic(&b->c->sb));
+	bch_bset_init_next(&b->keys, b->keys.set->data, bset_magic(&b->c->cache->sb));
 
 	mutex_unlock(&c->bucket_lock);
 
diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
index 257969980c49..50482107134f 100644
--- a/drivers/md/bcache/btree.h
+++ b/drivers/md/bcache/btree.h
@@ -194,7 +194,7 @@ static inline unsigned int bset_block_offset(struct btree *b, struct bset *i)
 
 static inline void set_gc_sectors(struct cache_set *c)
 {
-	atomic_set(&c->sectors_to_gc, c->sb.bucket_size * c->nbuckets / 16);
+	atomic_set(&c->sectors_to_gc, c->cache->sb.bucket_size * c->nbuckets / 16);
 }
 
 void bkey_put(struct cache_set *c, struct bkey *k);
diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
index 9162af5bb6ec..f4658a1f37b8 100644
--- a/drivers/md/bcache/extents.c
+++ b/drivers/md/bcache/extents.c
@@ -54,7 +54,7 @@ static bool __ptr_invalid(struct cache_set *c, const struct bkey *k)
 			size_t bucket = PTR_BUCKET_NR(c, k, i);
 			size_t r = bucket_remainder(c, PTR_OFFSET(k, i));
 
-			if (KEY_SIZE(k) + r > c->sb.bucket_size ||
+			if (KEY_SIZE(k) + r > c->cache->sb.bucket_size ||
 			    bucket <  ca->sb.first_bucket ||
 			    bucket >= ca->sb.nbuckets)
 				return true;
@@ -75,7 +75,7 @@ static const char *bch_ptr_status(struct cache_set *c, const struct bkey *k)
 			size_t bucket = PTR_BUCKET_NR(c, k, i);
 			size_t r = bucket_remainder(c, PTR_OFFSET(k, i));
 
-			if (KEY_SIZE(k) + r > c->sb.bucket_size)
+			if (KEY_SIZE(k) + r > c->cache->sb.bucket_size)
 				return "bad, length too big";
 			if (bucket <  ca->sb.first_bucket)
 				return "bad, short offset";
@@ -136,7 +136,7 @@ static void bch_bkey_dump(struct btree_keys *keys, const struct bkey *k)
 		size_t n = PTR_BUCKET_NR(b->c, k, j);
 
 		pr_cont(" bucket %zu", n);
-		if (n >= b->c->sb.first_bucket && n < b->c->sb.nbuckets)
+		if (n >= b->c->cache->sb.first_bucket && n < b->c->cache->sb.nbuckets)
 			pr_cont(" prio %i",
 				PTR_BUCKET(b->c, k, j)->prio);
 	}
diff --git a/drivers/md/bcache/features.c b/drivers/md/bcache/features.c
index 4442df48d28c..6469223f0b77 100644
--- a/drivers/md/bcache/features.c
+++ b/drivers/md/bcache/features.c
@@ -30,7 +30,7 @@ static struct feature feature_list[] = {
 	for (f = &feature_list[0]; f->compat != 0; f++) {		\
 		if (f->compat != BCH_FEATURE_ ## type)			\
 			continue;					\
-		if (BCH_HAS_ ## type ## _FEATURE(&c->sb, f->mask)) {	\
+		if (BCH_HAS_ ## type ## _FEATURE(&c->cache->sb, f->mask)) {	\
 			if (first) {					\
 				out += snprintf(out, buf + size - out,	\
 						"[");	\
@@ -44,7 +44,7 @@ static struct feature feature_list[] = {
 									\
 		out += snprintf(out, buf + size - out, "%s", f->string);\
 									\
-		if (BCH_HAS_ ## type ## _FEATURE(&c->sb, f->mask))	\
+		if (BCH_HAS_ ## type ## _FEATURE(&c->cache->sb, f->mask))	\
 			out += snprintf(out, buf + size - out, "]");	\
 									\
 		first = false;						\
diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
index a14a445618b4..dad71a6b7889 100644
--- a/drivers/md/bcache/io.c
+++ b/drivers/md/bcache/io.c
@@ -26,7 +26,7 @@ struct bio *bch_bbio_alloc(struct cache_set *c)
 	struct bbio *b = mempool_alloc(&c->bio_meta, GFP_NOIO);
 	struct bio *bio = &b->bio;
 
-	bio_init(bio, bio->bi_inline_vecs, meta_bucket_pages(&c->sb));
+	bio_init(bio, bio->bi_inline_vecs, meta_bucket_pages(&c->cache->sb));
 
 	return bio;
 }
diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
index e2810668ede3..c5526e5087ef 100644
--- a/drivers/md/bcache/journal.c
+++ b/drivers/md/bcache/journal.c
@@ -666,7 +666,7 @@ static void journal_reclaim(struct cache_set *c)
 
 	bkey_init(k);
 	SET_KEY_PTRS(k, 1);
-	c->journal.blocks_free = c->sb.bucket_size >> c->block_bits;
+	c->journal.blocks_free = ca->sb.bucket_size >> c->block_bits;
 
 out:
 	if (!journal_full(&c->journal))
@@ -735,7 +735,7 @@ static void journal_write_unlocked(struct closure *cl)
 	struct journal_write *w = c->journal.cur;
 	struct bkey *k = &c->journal.key;
 	unsigned int i, sectors = set_blocks(w->data, block_bytes(ca)) *
-		c->sb.block_size;
+		ca->sb.block_size;
 
 	struct bio *bio;
 	struct bio_list list;
@@ -762,7 +762,7 @@ static void journal_write_unlocked(struct closure *cl)
 	bkey_copy(&w->data->uuid_bucket, &c->uuid_bucket);
 
 	w->data->prio_bucket[ca->sb.nr_this_dev] = ca->prio_buckets[0];
-	w->data->magic		= jset_magic(&c->sb);
+	w->data->magic		= jset_magic(&ca->sb);
 	w->data->version	= BCACHE_JSET_VERSION;
 	w->data->last_seq	= last_seq(&c->journal);
 	w->data->csum		= csum_set(w->data);
@@ -838,6 +838,7 @@ static struct journal_write *journal_wait_for_write(struct cache_set *c,
 	size_t sectors;
 	struct closure cl;
 	bool wait = false;
+	struct cache *ca = c->cache;
 
 	closure_init_stack(&cl);
 
@@ -847,10 +848,10 @@ static struct journal_write *journal_wait_for_write(struct cache_set *c,
 		struct journal_write *w = c->journal.cur;
 
 		sectors = __set_blocks(w->data, w->data->keys + nkeys,
-				       block_bytes(c->cache)) * c->sb.block_size;
+				       block_bytes(ca)) * ca->sb.block_size;
 
 		if (sectors <= min_t(size_t,
-				     c->journal.blocks_free * c->sb.block_size,
+				     c->journal.blocks_free * ca->sb.block_size,
 				     PAGE_SECTORS << JSET_BITS))
 			return w;
 
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 02408fdbf5bb..37e9cf8dbfc1 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -394,8 +394,8 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
 			goto skip;
 	}
 
-	if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) ||
-	    bio_sectors(bio) & (c->sb.block_size - 1)) {
+	if (bio->bi_iter.bi_sector & (c->cache->sb.block_size - 1) ||
+	    bio_sectors(bio) & (c->cache->sb.block_size - 1)) {
 		pr_debug("skipping unaligned io\n");
 		goto skip;
 	}
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 68563ee92393..698a0de9170f 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -471,7 +471,7 @@ static int __uuid_write(struct cache_set *c)
 {
 	BKEY_PADDED(key) k;
 	struct closure cl;
-	struct cache *ca;
+	struct cache *ca = c->cache;
 	unsigned int size;
 
 	closure_init_stack(&cl);
@@ -480,13 +480,12 @@ static int __uuid_write(struct cache_set *c)
 	if (bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, true))
 		return 1;
 
-	size =  meta_bucket_pages(&c->sb) * PAGE_SECTORS;
+	size =  meta_bucket_pages(&ca->sb) * PAGE_SECTORS;
 	SET_KEY_SIZE(&k.key, size);
 	uuid_io(c, REQ_OP_WRITE, 0, &k.key, &cl);
 	closure_sync(&cl);
 
 	/* Only one bucket used for uuid write */
-	ca = PTR_CACHE(c, &k.key, 0);
 	atomic_long_add(ca->sb.bucket_size, &ca->meta_sectors_written);
 
 	bkey_copy(&c->uuid_bucket, &k.key);
@@ -1199,7 +1198,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
 		return -EINVAL;
 	}
 
-	if (dc->sb.block_size < c->sb.block_size) {
+	if (dc->sb.block_size < c->cache->sb.block_size) {
 		/* Will die */
 		pr_err("Couldn't attach %s: block size less than set's block size\n",
 		       dc->backing_dev_name);
@@ -1666,7 +1665,7 @@ static void cache_set_free(struct closure *cl)
 	}
 
 	bch_bset_sort_state_free(&c->sort);
-	free_pages((unsigned long) c->uuids, ilog2(meta_bucket_pages(&c->sb)));
+	free_pages((unsigned long) c->uuids, ilog2(meta_bucket_pages(&c->cache->sb)));
 
 	if (c->moving_gc_wq)
 		destroy_workqueue(c->moving_gc_wq);
@@ -1857,10 +1856,10 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 
 	c->bucket_bits		= ilog2(sb->bucket_size);
 	c->block_bits		= ilog2(sb->block_size);
-	c->nr_uuids		= meta_bucket_bytes(&c->sb) / sizeof(struct uuid_entry);
+	c->nr_uuids		= meta_bucket_bytes(sb) / sizeof(struct uuid_entry);
 	c->devices_max_used	= 0;
 	atomic_set(&c->attached_dev_nr, 0);
-	c->btree_pages		= meta_bucket_pages(&c->sb);
+	c->btree_pages		= meta_bucket_pages(sb);
 	if (c->btree_pages > BTREE_MAX_PAGES)
 		c->btree_pages = max_t(int, c->btree_pages / 4,
 				       BTREE_MAX_PAGES);
@@ -1898,7 +1897,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 
 	if (mempool_init_kmalloc_pool(&c->bio_meta, 2,
 			sizeof(struct bbio) +
-			sizeof(struct bio_vec) * meta_bucket_pages(&c->sb)))
+			sizeof(struct bio_vec) * meta_bucket_pages(sb)))
 		goto err;
 
 	if (mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size))
@@ -1908,7 +1907,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 			BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER))
 		goto err;
 
-	c->uuids = alloc_meta_bucket_pages(GFP_KERNEL, &c->sb);
+	c->uuids = alloc_meta_bucket_pages(GFP_KERNEL, sb);
 	if (!c->uuids)
 		goto err;
 
@@ -2088,7 +2087,7 @@ static int run_cache_set(struct cache_set *c)
 		goto err;
 
 	closure_sync(&cl);
-	c->sb.last_mount = (u32)ktime_get_real_seconds();
+	c->cache->sb.last_mount = (u32)ktime_get_real_seconds();
 	bcache_write_super(c);
 
 	list_for_each_entry_safe(dc, t, &uncached_devices, list)
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 4f4ad6b3d43a..3c74996978da 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -35,7 +35,7 @@ static uint64_t __calc_target_rate(struct cached_dev *dc)
 	 * This is the size of the cache, minus the amount used for
 	 * flash-only devices
 	 */
-	uint64_t cache_sectors = c->nbuckets * c->sb.bucket_size -
+	uint64_t cache_sectors = c->nbuckets * c->cache->sb.bucket_size -
 				atomic_long_read(&c->flash_dev_dirty_sectors);
 
 	/*
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h
  2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
                   ` (12 preceding siblings ...)
  2020-08-15  4:10 ` [PATCH 13/14] bcache: remove embedded struct cache_sb from struct cache_set Coly Li
@ 2020-08-15  4:10 ` Coly Li
  2020-08-17  6:36   ` Hannes Reinecke
  2020-08-17 10:28   ` Christoph Hellwig
  13 siblings, 2 replies; 34+ messages in thread
From: Coly Li @ 2020-08-15  4:10 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

struct cache_sb does not exactly map to struct cache_sb_disk; it is
only the in-memory super block and doesn't belong in uapi bcache.h.

This patch moves the struct cache_sb definition and the macros and
inline routines depending on it from include/uapi/linux/bcache.h to
drivers/md/bcache/bcache.h, which is the proper location for them.

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/bcache.h  | 99 +++++++++++++++++++++++++++++++++++++
 include/uapi/linux/bcache.h | 98 ------------------------------------
 2 files changed, 99 insertions(+), 98 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 1d57f48307e6..b755bf7832ac 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -279,6 +279,82 @@ struct bcache_device {
 		     unsigned int cmd, unsigned long arg);
 };
 
+/*
+ * This is for in-memory bcache super block.
+ * NOTE: cache_sb is NOT exactly mapping to cache_sb_disk, the member
+ *       size, ordering and even whole struct size may be different
+ *       from cache_sb_disk.
+ */
+struct cache_sb {
+	__u64			offset;	/* sector where this sb was written */
+	__u64			version;
+
+	__u8			magic[16];
+
+	__u8			uuid[16];
+	union {
+		__u8		set_uuid[16];
+		__u64		set_magic;
+	};
+	__u8			label[SB_LABEL_SIZE];
+
+	__u64			flags;
+	__u64			seq;
+
+	__u64			feature_compat;
+	__u64			feature_incompat;
+	__u64			feature_ro_compat;
+
+	union {
+	struct {
+		/* Cache devices */
+		__u64		nbuckets;	/* device size */
+
+		__u16		block_size;	/* sectors */
+		__u16		nr_in_set;
+		__u16		nr_this_dev;
+		__u32		bucket_size;	/* sectors */
+	};
+	struct {
+		/* Backing devices */
+		__u64		data_offset;
+
+		/*
+		 * block_size from the cache device section is still used by
+		 * backing devices, so don't add anything here until we fix
+		 * things to not need it for backing devices anymore
+		 */
+	};
+	};
+
+	__u32			last_mount;	/* time overflow in y2106 */
+
+	__u16			first_bucket;
+	union {
+		__u16		njournal_buckets;
+		__u16		keys;
+	};
+	__u64			d[SB_JOURNAL_BUCKETS];	/* journal buckets */
+};
+
+BITMASK(CACHE_SYNC,			struct cache_sb, flags, 0, 1);
+BITMASK(CACHE_DISCARD,			struct cache_sb, flags, 1, 1);
+BITMASK(CACHE_REPLACEMENT,		struct cache_sb, flags, 2, 3);
+#define CACHE_REPLACEMENT_LRU		0U
+#define CACHE_REPLACEMENT_FIFO		1U
+#define CACHE_REPLACEMENT_RANDOM	2U
+
+BITMASK(BDEV_CACHE_MODE,		struct cache_sb, flags, 0, 4);
+#define CACHE_MODE_WRITETHROUGH		0U
+#define CACHE_MODE_WRITEBACK		1U
+#define CACHE_MODE_WRITEAROUND		2U
+#define CACHE_MODE_NONE			3U
+BITMASK(BDEV_STATE,			struct cache_sb, flags, 61, 2);
+#define BDEV_STATE_NONE			0U
+#define BDEV_STATE_CLEAN		1U
+#define BDEV_STATE_DIRTY		2U
+#define BDEV_STATE_STALE		3U
+
 struct io {
 	/* Used to track sequential IO so it can be skipped */
 	struct hlist_node	hash;
@@ -840,6 +916,13 @@ static inline bool ptr_available(struct cache_set *c, const struct bkey *k,
 	return (PTR_DEV(k, i) < MAX_CACHES_PER_SET) && PTR_CACHE(c, k, i);
 }
 
+static inline _Bool SB_IS_BDEV(const struct cache_sb *sb)
+{
+	return sb->version == BCACHE_SB_VERSION_BDEV
+		|| sb->version == BCACHE_SB_VERSION_BDEV_WITH_OFFSET
+		|| sb->version == BCACHE_SB_VERSION_BDEV_WITH_FEATURES;
+}
+
 /* Btree key macros */
 
 /*
@@ -958,6 +1041,22 @@ static inline void wait_for_kthread_stop(void)
 	}
 }
 
+/* generate magic number */
+static inline __u64 jset_magic(struct cache_sb *sb)
+{
+	return sb->set_magic ^ JSET_MAGIC;
+}
+
+static inline __u64 pset_magic(struct cache_sb *sb)
+{
+	return sb->set_magic ^ PSET_MAGIC;
+}
+
+static inline __u64 bset_magic(struct cache_sb *sb)
+{
+	return sb->set_magic ^ BSET_MAGIC;
+}
+
 /* Forward declarations */
 
 void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio);
diff --git a/include/uapi/linux/bcache.h b/include/uapi/linux/bcache.h
index 52e8bcb33981..18166a3d8503 100644
--- a/include/uapi/linux/bcache.h
+++ b/include/uapi/linux/bcache.h
@@ -216,89 +216,6 @@ struct cache_sb_disk {
 	__le16			bucket_size_hi;
 };
 
-/*
- * This is for in-memory bcache super block.
- * NOTE: cache_sb is NOT exactly mapping to cache_sb_disk, the member
- *       size, ordering and even whole struct size may be different
- *       from cache_sb_disk.
- */
-struct cache_sb {
-	__u64			offset;	/* sector where this sb was written */
-	__u64			version;
-
-	__u8			magic[16];
-
-	__u8			uuid[16];
-	union {
-		__u8		set_uuid[16];
-		__u64		set_magic;
-	};
-	__u8			label[SB_LABEL_SIZE];
-
-	__u64			flags;
-	__u64			seq;
-
-	__u64			feature_compat;
-	__u64			feature_incompat;
-	__u64			feature_ro_compat;
-
-	union {
-	struct {
-		/* Cache devices */
-		__u64		nbuckets;	/* device size */
-
-		__u16		block_size;	/* sectors */
-		__u16		nr_in_set;
-		__u16		nr_this_dev;
-		__u32		bucket_size;	/* sectors */
-	};
-	struct {
-		/* Backing devices */
-		__u64		data_offset;
-
-		/*
-		 * block_size from the cache device section is still used by
-		 * backing devices, so don't add anything here until we fix
-		 * things to not need it for backing devices anymore
-		 */
-	};
-	};
-
-	__u32			last_mount;	/* time overflow in y2106 */
-
-	__u16			first_bucket;
-	union {
-		__u16		njournal_buckets;
-		__u16		keys;
-	};
-	__u64			d[SB_JOURNAL_BUCKETS];	/* journal buckets */
-};
-
-static inline _Bool SB_IS_BDEV(const struct cache_sb *sb)
-{
-	return sb->version == BCACHE_SB_VERSION_BDEV
-		|| sb->version == BCACHE_SB_VERSION_BDEV_WITH_OFFSET
-		|| sb->version == BCACHE_SB_VERSION_BDEV_WITH_FEATURES;
-}
-
-BITMASK(CACHE_SYNC,			struct cache_sb, flags, 0, 1);
-BITMASK(CACHE_DISCARD,			struct cache_sb, flags, 1, 1);
-BITMASK(CACHE_REPLACEMENT,		struct cache_sb, flags, 2, 3);
-#define CACHE_REPLACEMENT_LRU		0U
-#define CACHE_REPLACEMENT_FIFO		1U
-#define CACHE_REPLACEMENT_RANDOM	2U
-
-BITMASK(BDEV_CACHE_MODE,		struct cache_sb, flags, 0, 4);
-#define CACHE_MODE_WRITETHROUGH		0U
-#define CACHE_MODE_WRITEBACK		1U
-#define CACHE_MODE_WRITEAROUND		2U
-#define CACHE_MODE_NONE			3U
-BITMASK(BDEV_STATE,			struct cache_sb, flags, 61, 2);
-#define BDEV_STATE_NONE			0U
-#define BDEV_STATE_CLEAN		1U
-#define BDEV_STATE_DIRTY		2U
-#define BDEV_STATE_STALE		3U
-
 /*
  * Magic numbers
  *
@@ -310,21 +227,6 @@ BITMASK(BDEV_STATE,			struct cache_sb, flags, 61, 2);
 #define PSET_MAGIC			0x6750e15f87337f91ULL
 #define BSET_MAGIC			0x90135c78b99e07f5ULL
 
-static inline __u64 jset_magic(struct cache_sb *sb)
-{
-	return sb->set_magic ^ JSET_MAGIC;
-}
-
-static inline __u64 pset_magic(struct cache_sb *sb)
-{
-	return sb->set_magic ^ PSET_MAGIC;
-}
-
-static inline __u64 bset_magic(struct cache_sb *sb)
-{
-	return sb->set_magic ^ BSET_MAGIC;
-}
-
 /*
  * Journal
  *
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 01/14] bcache: remove 'int n' from parameter list of bch_bucket_alloc_set()
  2020-08-15  4:10 ` [PATCH 01/14] bcache: remove 'int n' from parameter list of bch_bucket_alloc_set() Coly Li
@ 2020-08-17  6:08   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:08 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> The parameter 'int n' from bch_bucket_alloc_set() is not clearly
> defined. From the code comments, n is the number of buckets to
> allocate, but from the code itself 'n' is the maximum number of caches
> to iterate. Indeed, at all the locations where bch_bucket_alloc_set()
> is called, 'n' is always 1.
> 
> This patch removes the confusing and unnecessary 'int n' from the
> parameter list of bch_bucket_alloc_set(), and explicitly allocates only
> 1 bucket for its callers.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/alloc.c  | 35 +++++++++++++++--------------------
>   drivers/md/bcache/bcache.h |  4 ++--
>   drivers/md/bcache/btree.c  |  2 +-
>   drivers/md/bcache/super.c  |  2 +-
>   4 files changed, 19 insertions(+), 24 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 02/14] bcache: explicitly make cache_set only have single cache
  2020-08-15  4:10 ` [PATCH 02/14] bcache: explicitly make cache_set only have single cache Coly Li
@ 2020-08-17  6:11   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:11 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> Although the bcache code has a framework for multiple caches in a
> cache set, the multiple-cache support was never completed, and users
> use md raid1 for multiple copies of the cached data.
> 
> This patch makes the following changes to struct cache_set, to
> explicitly make a cache_set have only a single cache:
> - Change pointer array "*cache[MAX_CACHES_PER_SET]" to a single pointer
>    "*cache".
> - Remove pointer array "*cache_by_alloc[MAX_CACHES_PER_SET]".
> - Remove "caches_loaded".
> 
> Now the code reflects exactly what it does in practice: only one cache
> is used in the cache set.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/alloc.c  |  2 +-
>   drivers/md/bcache/bcache.h |  8 +++-----
>   drivers/md/bcache/super.c  | 19 ++++++++-----------
>   3 files changed, 12 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
> index 4493ff57476d..3385f6add6df 100644
> --- a/drivers/md/bcache/alloc.c
> +++ b/drivers/md/bcache/alloc.c
> @@ -501,7 +501,7 @@ int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
>   
>   	bkey_init(k);
>   
> -	ca = c->cache_by_alloc[0];
> +	ca = c->cache;
>   	b = bch_bucket_alloc(ca, reserve, wait);
>   	if (b == -1)
>   		goto err;

Maybe drop variable 'ca' altogether?

> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index 5ff6e9573935..aa112c1adba1 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -519,9 +519,7 @@ struct cache_set {
>   
>   	struct cache_sb		sb;
>   
> -	struct cache		*cache[MAX_CACHES_PER_SET];
> -	struct cache		*cache_by_alloc[MAX_CACHES_PER_SET];
> -	int			caches_loaded;
> +	struct cache		*cache;
>   
>   	struct bcache_device	**devices;
>   	unsigned int		devices_max_used;
> @@ -808,7 +806,7 @@ static inline struct cache *PTR_CACHE(struct cache_set *c,
>   				      const struct bkey *k,
>   				      unsigned int ptr)
>   {
> -	return c->cache[PTR_DEV(k, ptr)];
> +	return c->cache;
>   }
>   
>   static inline size_t PTR_BUCKET_NR(struct cache_set *c,
> @@ -890,7 +888,7 @@ do {									\
>   /* Looping macros */
>   
>   #define for_each_cache(ca, cs, iter)					\
> -	for (iter = 0; ca = cs->cache[iter], iter < (cs)->sb.nr_in_set; iter++)
> +	for (iter = 0; ca = cs->cache, iter < 1; iter++)
>   
>   #define for_each_bucket(b, ca)						\
>   	for (b = (ca)->buckets + (ca)->sb.first_bucket;			\

I guess 'for_each_cache' can be killed, too.

> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 7057ec48f3d1..e9ccfa17beb8 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1675,7 +1675,7 @@ static void cache_set_free(struct closure *cl)
>   	for_each_cache(ca, c, i)
>   		if (ca) {
>   			ca->set = NULL;
> -			c->cache[ca->sb.nr_this_dev] = NULL;
> +			c->cache = NULL;
>   			kobject_put(&ca->kobj);
>   		}
>   
> @@ -2166,7 +2166,7 @@ static const char *register_cache_set(struct cache *ca)
>   
>   	list_for_each_entry(c, &bch_cache_sets, list)
>   		if (!memcmp(c->sb.set_uuid, ca->sb.set_uuid, 16)) {
> -			if (c->cache[ca->sb.nr_this_dev])
> +			if (c->cache)
>   				return "duplicate cache set member";
>   
>   			if (!can_attach_cache(ca, c))
> @@ -2216,14 +2216,11 @@ static const char *register_cache_set(struct cache *ca)
>   
>   	kobject_get(&ca->kobj);
>   	ca->set = c;
> -	ca->set->cache[ca->sb.nr_this_dev] = ca;
> -	c->cache_by_alloc[c->caches_loaded++] = ca;
> +	ca->set->cache = ca;
>   
> -	if (c->caches_loaded == c->sb.nr_in_set) {
> -		err = "failed to run cache set";
> -		if (run_cache_set(c) < 0)
> -			goto err;
> -	}
> +	err = "failed to run cache set";
> +	if (run_cache_set(c) < 0)
> +		goto err;
>   
>   	return NULL;
>   err:
> @@ -2240,8 +2237,8 @@ void bch_cache_release(struct kobject *kobj)
>   	unsigned int i;
>   
>   	if (ca->set) {
> -		BUG_ON(ca->set->cache[ca->sb.nr_this_dev] != ca);
> -		ca->set->cache[ca->sb.nr_this_dev] = NULL;
> +		BUG_ON(ca->set->cache != ca);
> +		ca->set->cache = NULL;
>   	}
>   
>   	free_pages((unsigned long) ca->disk_buckets, ilog2(meta_bucket_pages(&ca->sb)));
> 
Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/14] bcache: remove for_each_cache()
  2020-08-15  4:10 ` [PATCH 03/14] bcache: remove for_each_cache() Coly Li
@ 2020-08-17  6:13   ` Hannes Reinecke
  2020-08-22 11:40     ` Coly Li
  0 siblings, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:13 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> Since each cache_set now explicitly has a single cache, for_each_cache()
> is unnecessary. This patch removes the macro, updates all locations
> where it was used, and makes sure the code logic remains consistent.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/alloc.c    |  17 ++-
>   drivers/md/bcache/bcache.h   |   9 +-
>   drivers/md/bcache/btree.c    | 103 +++++++---------
>   drivers/md/bcache/journal.c  | 229 ++++++++++++++++-------------------
>   drivers/md/bcache/movinggc.c |  58 +++++----
>   drivers/md/bcache/super.c    | 114 +++++++----------
>   6 files changed, 236 insertions(+), 294 deletions(-)
> 
> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
> index 3385f6add6df..1b8310992dd0 100644
> --- a/drivers/md/bcache/alloc.c
> +++ b/drivers/md/bcache/alloc.c
> @@ -88,7 +88,6 @@ void bch_rescale_priorities(struct cache_set *c, int sectors)
>   	struct cache *ca;
>   	struct bucket *b;
>   	unsigned long next = c->nbuckets * c->sb.bucket_size / 1024;
> -	unsigned int i;
>   	int r;
>   
>   	atomic_sub(sectors, &c->rescale);
> @@ -104,14 +103,14 @@ void bch_rescale_priorities(struct cache_set *c, int sectors)
>   
>   	c->min_prio = USHRT_MAX;
>   
> -	for_each_cache(ca, c, i)
> -		for_each_bucket(b, ca)
> -			if (b->prio &&
> -			    b->prio != BTREE_PRIO &&
> -			    !atomic_read(&b->pin)) {
> -				b->prio--;
> -				c->min_prio = min(c->min_prio, b->prio);
> -			}
> +	ca = c->cache;
> +	for_each_bucket(b, ca)
> +		if (b->prio &&
> +		    b->prio != BTREE_PRIO &&
> +		    !atomic_read(&b->pin)) {
> +			b->prio--;
> +			c->min_prio = min(c->min_prio, b->prio);
> +		}
>   
>   	mutex_unlock(&c->bucket_lock);
>   }
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index aa112c1adba1..7ffe6b2d179b 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -887,9 +887,6 @@ do {									\
>   
>   /* Looping macros */
>   
> -#define for_each_cache(ca, cs, iter)					\
> -	for (iter = 0; ca = cs->cache, iter < 1; iter++)
> -
>   #define for_each_bucket(b, ca)						\
>   	for (b = (ca)->buckets + (ca)->sb.first_bucket;			\
>   	     b < (ca)->buckets + (ca)->sb.nbuckets; b++)
> @@ -931,11 +928,9 @@ static inline uint8_t bucket_gc_gen(struct bucket *b)
>   
>   static inline void wake_up_allocators(struct cache_set *c)
>   {
> -	struct cache *ca;
> -	unsigned int i;
> +	struct cache *ca = c->cache;
>   
> -	for_each_cache(ca, c, i)
> -		wake_up_process(ca->alloc_thread);
> +	wake_up_process(ca->alloc_thread);
>   }
>   
>   static inline void closure_bio_submit(struct cache_set *c,
> diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
> index e2a719fed53b..0817ad510d9f 100644
> --- a/drivers/md/bcache/btree.c
> +++ b/drivers/md/bcache/btree.c
> @@ -1167,19 +1167,18 @@ static void make_btree_freeing_key(struct btree *b, struct bkey *k)
>   static int btree_check_reserve(struct btree *b, struct btree_op *op)
>   {
>   	struct cache_set *c = b->c;
> -	struct cache *ca;
> -	unsigned int i, reserve = (c->root->level - b->level) * 2 + 1;
> +	struct cache *ca = c->cache;
> +	unsigned int reserve = (c->root->level - b->level) * 2 + 1;
>   
>   	mutex_lock(&c->bucket_lock);
>   
> -	for_each_cache(ca, c, i)
> -		if (fifo_used(&ca->free[RESERVE_BTREE]) < reserve) {
> -			if (op)
> -				prepare_to_wait(&c->btree_cache_wait, &op->wait,
> -						TASK_UNINTERRUPTIBLE);
> -			mutex_unlock(&c->bucket_lock);
> -			return -EINTR;
> -		}
> +	if (fifo_used(&ca->free[RESERVE_BTREE]) < reserve) {
> +		if (op)
> +			prepare_to_wait(&c->btree_cache_wait, &op->wait,
> +					TASK_UNINTERRUPTIBLE);
> +		mutex_unlock(&c->bucket_lock);
> +		return -EINTR;
> +	}
>   
>   	mutex_unlock(&c->bucket_lock);
>   
> @@ -1695,7 +1694,6 @@ static void btree_gc_start(struct cache_set *c)
>   {
>   	struct cache *ca;
>   	struct bucket *b;
> -	unsigned int i;
>   
>   	if (!c->gc_mark_valid)
>   		return;
> @@ -1705,14 +1703,14 @@ static void btree_gc_start(struct cache_set *c)
>   	c->gc_mark_valid = 0;
>   	c->gc_done = ZERO_KEY;
>   
> -	for_each_cache(ca, c, i)
> -		for_each_bucket(b, ca) {
> -			b->last_gc = b->gen;
> -			if (!atomic_read(&b->pin)) {
> -				SET_GC_MARK(b, 0);
> -				SET_GC_SECTORS_USED(b, 0);
> -			}
> +	ca = c->cache;
> +	for_each_bucket(b, ca) {
> +		b->last_gc = b->gen;
> +		if (!atomic_read(&b->pin)) {
> +			SET_GC_MARK(b, 0);
> +			SET_GC_SECTORS_USED(b, 0);
>   		}
> +	}
>   
>   	mutex_unlock(&c->bucket_lock);
>   }
> @@ -1721,7 +1719,8 @@ static void bch_btree_gc_finish(struct cache_set *c)
>   {
>   	struct bucket *b;
>   	struct cache *ca;
> -	unsigned int i;
> +	unsigned int i, j;
> +	uint64_t *k;
>   
>   	mutex_lock(&c->bucket_lock);
>   
> @@ -1739,7 +1738,6 @@ static void bch_btree_gc_finish(struct cache_set *c)
>   		struct bcache_device *d = c->devices[i];
>   		struct cached_dev *dc;
>   		struct keybuf_key *w, *n;
> -		unsigned int j;
>   
>   		if (!d || UUID_FLASH_ONLY(&c->uuids[i]))
>   			continue;
> @@ -1756,29 +1754,27 @@ static void bch_btree_gc_finish(struct cache_set *c)
>   	rcu_read_unlock();
>   
>   	c->avail_nbuckets = 0;
> -	for_each_cache(ca, c, i) {
> -		uint64_t *i;
>   
> -		ca->invalidate_needs_gc = 0;
> +	ca = c->cache;
> +	ca->invalidate_needs_gc = 0;
>   
> -		for (i = ca->sb.d; i < ca->sb.d + ca->sb.keys; i++)
> -			SET_GC_MARK(ca->buckets + *i, GC_MARK_METADATA);
> +	for (k = ca->sb.d; k < ca->sb.d + ca->sb.keys; k++)
> +		SET_GC_MARK(ca->buckets + *k, GC_MARK_METADATA);
>   
> -		for (i = ca->prio_buckets;
> -		     i < ca->prio_buckets + prio_buckets(ca) * 2; i++)
> -			SET_GC_MARK(ca->buckets + *i, GC_MARK_METADATA);
> +	for (k = ca->prio_buckets;
> +	     k < ca->prio_buckets + prio_buckets(ca) * 2; k++)
> +		SET_GC_MARK(ca->buckets + *k, GC_MARK_METADATA);
>   
> -		for_each_bucket(b, ca) {
> -			c->need_gc	= max(c->need_gc, bucket_gc_gen(b));
> +	for_each_bucket(b, ca) {
> +		c->need_gc	= max(c->need_gc, bucket_gc_gen(b));
>   
> -			if (atomic_read(&b->pin))
> -				continue;
> +		if (atomic_read(&b->pin))
> +			continue;
>   
> -			BUG_ON(!GC_MARK(b) && GC_SECTORS_USED(b));
> +		BUG_ON(!GC_MARK(b) && GC_SECTORS_USED(b));
>   
> -			if (!GC_MARK(b) || GC_MARK(b) == GC_MARK_RECLAIMABLE)
> -				c->avail_nbuckets++;
> -		}
> +		if (!GC_MARK(b) || GC_MARK(b) == GC_MARK_RECLAIMABLE)
> +			c->avail_nbuckets++;
>   	}
>   
>   	mutex_unlock(&c->bucket_lock);
> @@ -1830,12 +1826,10 @@ static void bch_btree_gc(struct cache_set *c)
>   
>   static bool gc_should_run(struct cache_set *c)
>   {
> -	struct cache *ca;
> -	unsigned int i;
> +	struct cache *ca = c->cache;
>   
> -	for_each_cache(ca, c, i)
> -		if (ca->invalidate_needs_gc)
> -			return true;
> +	if (ca->invalidate_needs_gc)
> +		return true;
>   
>   	if (atomic_read(&c->sectors_to_gc) < 0)
>   		return true;
> @@ -2081,9 +2075,8 @@ int bch_btree_check(struct cache_set *c)
>   
>   void bch_initial_gc_finish(struct cache_set *c)
>   {
> -	struct cache *ca;
> +	struct cache *ca = c->cache;
>   	struct bucket *b;
> -	unsigned int i;
>   
>   	bch_btree_gc_finish(c);
>   
> @@ -2098,20 +2091,18 @@ void bch_initial_gc_finish(struct cache_set *c)
>   	 * This is only safe for buckets that have no live data in them, which
>   	 * there should always be some of.
>   	 */
> -	for_each_cache(ca, c, i) {
> -		for_each_bucket(b, ca) {
> -			if (fifo_full(&ca->free[RESERVE_PRIO]) &&
> -			    fifo_full(&ca->free[RESERVE_BTREE]))
> -				break;
> +	for_each_bucket(b, ca) {
> +		if (fifo_full(&ca->free[RESERVE_PRIO]) &&
> +		    fifo_full(&ca->free[RESERVE_BTREE]))
> +			break;
>   
> -			if (bch_can_invalidate_bucket(ca, b) &&
> -			    !GC_MARK(b)) {
> -				__bch_invalidate_one_bucket(ca, b);
> -				if (!fifo_push(&ca->free[RESERVE_PRIO],
> -				   b - ca->buckets))
> -					fifo_push(&ca->free[RESERVE_BTREE],
> -						  b - ca->buckets);
> -			}
> +		if (bch_can_invalidate_bucket(ca, b) &&
> +		    !GC_MARK(b)) {
> +			__bch_invalidate_one_bucket(ca, b);
> +			if (!fifo_push(&ca->free[RESERVE_PRIO],
> +			   b - ca->buckets))
> +				fifo_push(&ca->free[RESERVE_BTREE],
> +					  b - ca->buckets);
>   		}
>   	}
>   
> diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
> index 77fbfd52edcf..027d0f8c4daf 100644
> --- a/drivers/md/bcache/journal.c
> +++ b/drivers/md/bcache/journal.c
> @@ -179,112 +179,109 @@ int bch_journal_read(struct cache_set *c, struct list_head *list)
>   		ret;							\
>   	})
>   
> -	struct cache *ca;
> -	unsigned int iter;
> +	struct cache *ca = c->cache;
>   	int ret = 0;
> +	struct journal_device *ja = &ca->journal;
> +	DECLARE_BITMAP(bitmap, SB_JOURNAL_BUCKETS);
> +	unsigned int i, l, r, m;
> +	uint64_t seq;
>   
> -	for_each_cache(ca, c, iter) {
> -		struct journal_device *ja = &ca->journal;
> -		DECLARE_BITMAP(bitmap, SB_JOURNAL_BUCKETS);
> -		unsigned int i, l, r, m;
> -		uint64_t seq;
> -
> -		bitmap_zero(bitmap, SB_JOURNAL_BUCKETS);
> -		pr_debug("%u journal buckets\n", ca->sb.njournal_buckets);
> +	bitmap_zero(bitmap, SB_JOURNAL_BUCKETS);
> +	pr_debug("%u journal buckets\n", ca->sb.njournal_buckets);
>   
> +	/*
> +	 * Read journal buckets ordered by golden ratio hash to quickly
> +	 * find a sequence of buckets with valid journal entries
> +	 */
> +	for (i = 0; i < ca->sb.njournal_buckets; i++) {
>   		/*
> -		 * Read journal buckets ordered by golden ratio hash to quickly
> -		 * find a sequence of buckets with valid journal entries
> +		 * We must try the index l with ZERO first for
> +		 * correctness due to the scenario that the journal
> +		 * bucket is circular buffer which might have wrapped
>   		 */
> -		for (i = 0; i < ca->sb.njournal_buckets; i++) {
> -			/*
> -			 * We must try the index l with ZERO first for
> -			 * correctness due to the scenario that the journal
> -			 * bucket is circular buffer which might have wrapped
> -			 */
> -			l = (i * 2654435769U) % ca->sb.njournal_buckets;
> +		l = (i * 2654435769U) % ca->sb.njournal_buckets;
>   
> -			if (test_bit(l, bitmap))
> -				break;
> +		if (test_bit(l, bitmap))
> +			break;
>   
> -			if (read_bucket(l))
> -				goto bsearch;
> -		}
> +		if (read_bucket(l))
> +			goto bsearch;
> +	}
>   
> -		/*
> -		 * If that fails, check all the buckets we haven't checked
> -		 * already
> -		 */
> -		pr_debug("falling back to linear search\n");
> +	/*
> +	 * If that fails, check all the buckets we haven't checked
> +	 * already
> +	 */
> +	pr_debug("falling back to linear search\n");
>   
> -		for_each_clear_bit(l, bitmap, ca->sb.njournal_buckets)
> -			if (read_bucket(l))
> -				goto bsearch;
> +	for_each_clear_bit(l, bitmap, ca->sb.njournal_buckets)
> +		if (read_bucket(l))
> +			goto bsearch;
>   
> -		/* no journal entries on this device? */
> -		if (l == ca->sb.njournal_buckets)
> -			continue;
> +	/* no journal entries on this device? */
> +	if (l == ca->sb.njournal_buckets)
> +		goto out;
>   bsearch:
> -		BUG_ON(list_empty(list));
> +	BUG_ON(list_empty(list));
>   
> -		/* Binary search */
> -		m = l;
> -		r = find_next_bit(bitmap, ca->sb.njournal_buckets, l + 1);
> -		pr_debug("starting binary search, l %u r %u\n", l, r);
> +	/* Binary search */
> +	m = l;
> +	r = find_next_bit(bitmap, ca->sb.njournal_buckets, l + 1);
> +	pr_debug("starting binary search, l %u r %u\n", l, r);
>   
> -		while (l + 1 < r) {
> -			seq = list_entry(list->prev, struct journal_replay,
> -					 list)->j.seq;
> +	while (l + 1 < r) {
> +		seq = list_entry(list->prev, struct journal_replay,
> +				 list)->j.seq;
>   
> -			m = (l + r) >> 1;
> -			read_bucket(m);
> +		m = (l + r) >> 1;
> +		read_bucket(m);
>   
> -			if (seq != list_entry(list->prev, struct journal_replay,
> -					      list)->j.seq)
> -				l = m;
> -			else
> -				r = m;
> -		}
> +		if (seq != list_entry(list->prev, struct journal_replay,
> +				      list)->j.seq)
> +			l = m;
> +		else
> +			r = m;
> +	}
>   
> -		/*
> -		 * Read buckets in reverse order until we stop finding more
> -		 * journal entries
> -		 */
> -		pr_debug("finishing up: m %u njournal_buckets %u\n",
> -			 m, ca->sb.njournal_buckets);
> -		l = m;
> +	/*
> +	 * Read buckets in reverse order until we stop finding more
> +	 * journal entries
> +	 */
> +	pr_debug("finishing up: m %u njournal_buckets %u\n",
> +		 m, ca->sb.njournal_buckets);
> +	l = m;
>   
> -		while (1) {
> -			if (!l--)
> -				l = ca->sb.njournal_buckets - 1;
> +	while (1) {
> +		if (!l--)
> +			l = ca->sb.njournal_buckets - 1;
>   
> -			if (l == m)
> -				break;
> +		if (l == m)
> +			break;
>   
> -			if (test_bit(l, bitmap))
> -				continue;
> +		if (test_bit(l, bitmap))
> +			continue;
>   
> -			if (!read_bucket(l))
> -				break;
> -		}
> +		if (!read_bucket(l))
> +			break;
> +	}
>   
> -		seq = 0;
> +	seq = 0;
>   
> -		for (i = 0; i < ca->sb.njournal_buckets; i++)
> -			if (ja->seq[i] > seq) {
> -				seq = ja->seq[i];
> -				/*
> -				 * When journal_reclaim() goes to allocate for
> -				 * the first time, it'll use the bucket after
> -				 * ja->cur_idx
> -				 */
> -				ja->cur_idx = i;
> -				ja->last_idx = ja->discard_idx = (i + 1) %
> -					ca->sb.njournal_buckets;
> +	for (i = 0; i < ca->sb.njournal_buckets; i++)
> +		if (ja->seq[i] > seq) {
> +			seq = ja->seq[i];
> +			/*
> +			 * When journal_reclaim() goes to allocate for
> +			 * the first time, it'll use the bucket after
> +			 * ja->cur_idx
> +			 */
> +			ja->cur_idx = i;
> +			ja->last_idx = ja->discard_idx = (i + 1) %
> +				ca->sb.njournal_buckets;
>   
> -			}
> -	}
> +		}
>   
> +out:
>   	if (!list_empty(list))
>   		c->journal.seq = list_entry(list->prev,
>   					    struct journal_replay,
> @@ -342,12 +339,10 @@ void bch_journal_mark(struct cache_set *c, struct list_head *list)
>   
>   static bool is_discard_enabled(struct cache_set *s)
>   {
> -	struct cache *ca;
> -	unsigned int i;
> +	struct cache *ca = s->cache;
>   
> -	for_each_cache(ca, s, i)
> -		if (ca->discard)
> -			return true;
> +	if (ca->discard)
> +		return true;
>   
>   	return false;
>   }
> @@ -633,9 +628,10 @@ static void do_journal_discard(struct cache *ca)
>   static void journal_reclaim(struct cache_set *c)
>   {
>   	struct bkey *k = &c->journal.key;
> -	struct cache *ca;
> +	struct cache *ca = c->cache;
>   	uint64_t last_seq;
> -	unsigned int iter, n = 0;
> +	unsigned int next;
> +	struct journal_device *ja = &ca->journal;
>   	atomic_t p __maybe_unused;
>   
>   	atomic_long_inc(&c->reclaim);
> @@ -647,46 +643,31 @@ static void journal_reclaim(struct cache_set *c)
>   
>   	/* Update last_idx */
>   
> -	for_each_cache(ca, c, iter) {
> -		struct journal_device *ja = &ca->journal;
> -
> -		while (ja->last_idx != ja->cur_idx &&
> -		       ja->seq[ja->last_idx] < last_seq)
> -			ja->last_idx = (ja->last_idx + 1) %
> -				ca->sb.njournal_buckets;
> -	}
> +	while (ja->last_idx != ja->cur_idx &&
> +	       ja->seq[ja->last_idx] < last_seq)
> +		ja->last_idx = (ja->last_idx + 1) %
> +			ca->sb.njournal_buckets;
>   
> -	for_each_cache(ca, c, iter)
> -		do_journal_discard(ca);
> +	do_journal_discard(ca);
>   
>   	if (c->journal.blocks_free)
>   		goto out;
>   
> -	/*
> -	 * Allocate:
> -	 * XXX: Sort by free journal space
> -	 */
> -
> -	for_each_cache(ca, c, iter) {
> -		struct journal_device *ja = &ca->journal;
> -		unsigned int next = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
> +	next = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
> +	/* No space available on this device */
> +	if (next == ja->discard_idx)
> +		goto out;
>   
> -		/* No space available on this device */
> -		if (next == ja->discard_idx)
> -			continue;
> +	ja->cur_idx = next;
> +	k->ptr[0] = MAKE_PTR(0,
> +			     bucket_to_sector(c, ca->sb.d[ja->cur_idx]),
> +			     ca->sb.nr_this_dev);
> +	atomic_long_inc(&c->reclaimed_journal_buckets);
>   
> -		ja->cur_idx = next;
> -		k->ptr[n++] = MAKE_PTR(0,
> -				  bucket_to_sector(c, ca->sb.d[ja->cur_idx]),
> -				  ca->sb.nr_this_dev);
> -		atomic_long_inc(&c->reclaimed_journal_buckets);
> -	}
> +	bkey_init(k);
> +	SET_KEY_PTRS(k, 1);
> +	c->journal.blocks_free = c->sb.bucket_size >> c->block_bits;
>   
> -	if (n) {
> -		bkey_init(k);
> -		SET_KEY_PTRS(k, n);
> -		c->journal.blocks_free = c->sb.bucket_size >> c->block_bits;
> -	}
>   out:
>   	if (!journal_full(&c->journal))
>   		__closure_wake_up(&c->journal.wait);
> @@ -750,7 +731,7 @@ static void journal_write_unlocked(struct closure *cl)
>   	__releases(c->journal.lock)
>   {
>   	struct cache_set *c = container_of(cl, struct cache_set, journal.io);
> -	struct cache *ca;
> +	struct cache *ca = c->cache;
>   	struct journal_write *w = c->journal.cur;
>   	struct bkey *k = &c->journal.key;
>   	unsigned int i, sectors = set_blocks(w->data, block_bytes(c)) *
> @@ -780,9 +761,7 @@ static void journal_write_unlocked(struct closure *cl)
>   	bkey_copy(&w->data->btree_root, &c->root->key);
>   	bkey_copy(&w->data->uuid_bucket, &c->uuid_bucket);
>   
> -	for_each_cache(ca, c, i)
> -		w->data->prio_bucket[ca->sb.nr_this_dev] = ca->prio_buckets[0];
> -
> +	w->data->prio_bucket[ca->sb.nr_this_dev] = ca->prio_buckets[0];
>   	w->data->magic		= jset_magic(&c->sb);
>   	w->data->version	= BCACHE_JSET_VERSION;
>   	w->data->last_seq	= last_seq(&c->journal);
> diff --git a/drivers/md/bcache/movinggc.c b/drivers/md/bcache/movinggc.c
> index 5872d6470470..b9c3d27ec093 100644
> --- a/drivers/md/bcache/movinggc.c
> +++ b/drivers/md/bcache/movinggc.c
> @@ -196,50 +196,48 @@ static unsigned int bucket_heap_top(struct cache *ca)
>   
>   void bch_moving_gc(struct cache_set *c)
>   {
> -	struct cache *ca;
> +	struct cache *ca = c->cache;
>   	struct bucket *b;
> -	unsigned int i;
> +	unsigned long sectors_to_move, reserve_sectors;
>   
>   	if (!c->copy_gc_enabled)
>   		return;
>   
>   	mutex_lock(&c->bucket_lock);
>   
> -	for_each_cache(ca, c, i) {
> -		unsigned long sectors_to_move = 0;
> -		unsigned long reserve_sectors = ca->sb.bucket_size *
> +	sectors_to_move = 0;
> +	reserve_sectors = ca->sb.bucket_size *
>   			     fifo_used(&ca->free[RESERVE_MOVINGGC]);
>   
> -		ca->heap.used = 0;
> -
> -		for_each_bucket(b, ca) {
> -			if (GC_MARK(b) == GC_MARK_METADATA ||
> -			    !GC_SECTORS_USED(b) ||
> -			    GC_SECTORS_USED(b) == ca->sb.bucket_size ||
> -			    atomic_read(&b->pin))
> -				continue;
> -
> -			if (!heap_full(&ca->heap)) {
> -				sectors_to_move += GC_SECTORS_USED(b);
> -				heap_add(&ca->heap, b, bucket_cmp);
> -			} else if (bucket_cmp(b, heap_peek(&ca->heap))) {
> -				sectors_to_move -= bucket_heap_top(ca);
> -				sectors_to_move += GC_SECTORS_USED(b);
> -
> -				ca->heap.data[0] = b;
> -				heap_sift(&ca->heap, 0, bucket_cmp);
> -			}
> -		}
> +	ca->heap.used = 0;
> +
> +	for_each_bucket(b, ca) {
> +		if (GC_MARK(b) == GC_MARK_METADATA ||
> +		    !GC_SECTORS_USED(b) ||
> +		    GC_SECTORS_USED(b) == ca->sb.bucket_size ||
> +		    atomic_read(&b->pin))
> +			continue;
>   
> -		while (sectors_to_move > reserve_sectors) {
> -			heap_pop(&ca->heap, b, bucket_cmp);
> -			sectors_to_move -= GC_SECTORS_USED(b);
> +		if (!heap_full(&ca->heap)) {
> +			sectors_to_move += GC_SECTORS_USED(b);
> +			heap_add(&ca->heap, b, bucket_cmp);
> +		} else if (bucket_cmp(b, heap_peek(&ca->heap))) {
> +			sectors_to_move -= bucket_heap_top(ca);
> +			sectors_to_move += GC_SECTORS_USED(b);
> +
> +			ca->heap.data[0] = b;
> +			heap_sift(&ca->heap, 0, bucket_cmp);
>   		}
> +	}
>   
> -		while (heap_pop(&ca->heap, b, bucket_cmp))
> -			SET_GC_MOVE(b, 1);
> +	while (sectors_to_move > reserve_sectors) {
> +		heap_pop(&ca->heap, b, bucket_cmp);
> +		sectors_to_move -= GC_SECTORS_USED(b);
>   	}
>   
> +	while (heap_pop(&ca->heap, b, bucket_cmp))
> +		SET_GC_MOVE(b, 1);
> +
>   	mutex_unlock(&c->bucket_lock);
>   
>   	c->moving_gc_keys.last_scanned = ZERO_KEY;
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index e9ccfa17beb8..2521be9380d6 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -343,8 +343,9 @@ static void bcache_write_super_unlock(struct closure *cl)
>   void bcache_write_super(struct cache_set *c)
>   {
>   	struct closure *cl = &c->sb_write;
> -	struct cache *ca;
> -	unsigned int i, version = BCACHE_SB_VERSION_CDEV_WITH_UUID;
> +	struct cache *ca = c->cache;
> +	struct bio *bio = &ca->sb_bio;
> +	unsigned int version = BCACHE_SB_VERSION_CDEV_WITH_UUID;
>   
>   	down(&c->sb_write_mutex);
>   	closure_init(cl, &c->cl);
> @@ -354,23 +355,19 @@ void bcache_write_super(struct cache_set *c)
>   	if (c->sb.version > version)
>   		version = c->sb.version;
>   
> -	for_each_cache(ca, c, i) {
> -		struct bio *bio = &ca->sb_bio;
> -
> -		ca->sb.version		= version;
> -		ca->sb.seq		= c->sb.seq;
> -		ca->sb.last_mount	= c->sb.last_mount;
> +	ca->sb.version		= version;
> +	ca->sb.seq		= c->sb.seq;
> +	ca->sb.last_mount	= c->sb.last_mount;
>   
> -		SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
> +	SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
>   
> -		bio_init(bio, ca->sb_bv, 1);
> -		bio_set_dev(bio, ca->bdev);
> -		bio->bi_end_io	= write_super_endio;
> -		bio->bi_private = ca;
> +	bio_init(bio, ca->sb_bv, 1);
> +	bio_set_dev(bio, ca->bdev);
> +	bio->bi_end_io	= write_super_endio;
> +	bio->bi_private = ca;
>   
> -		closure_get(cl);
> -		__write_super(&ca->sb, ca->sb_disk, bio);
> -	}
> +	closure_get(cl);
> +	__write_super(&ca->sb, ca->sb_disk, bio);
>   
>   	closure_return_with_destructor(cl, bcache_write_super_unlock);
>   }
> @@ -772,26 +769,22 @@ static void bcache_device_unlink(struct bcache_device *d)
>   	lockdep_assert_held(&bch_register_lock);
>   
>   	if (d->c && !test_and_set_bit(BCACHE_DEV_UNLINK_DONE, &d->flags)) {
> -		unsigned int i;
> -		struct cache *ca;
> +		struct cache *ca = d->c->cache;
>   
>   		sysfs_remove_link(&d->c->kobj, d->name);
>   		sysfs_remove_link(&d->kobj, "cache");
>   
> -		for_each_cache(ca, d->c, i)
> -			bd_unlink_disk_holder(ca->bdev, d->disk);
> +		bd_unlink_disk_holder(ca->bdev, d->disk);
>   	}
>   }
>   
>   static void bcache_device_link(struct bcache_device *d, struct cache_set *c,
>   			       const char *name)
>   {
> -	unsigned int i;
> -	struct cache *ca;
> +	struct cache *ca = c->cache;
>   	int ret;
>   
> -	for_each_cache(ca, d->c, i)
> -		bd_link_disk_holder(ca->bdev, d->disk);
> +	bd_link_disk_holder(ca->bdev, d->disk);
>   
>   	snprintf(d->name, BCACHEDEVNAME_SIZE,
>   		 "%s%u", name, d->id);
> @@ -1663,7 +1656,6 @@ static void cache_set_free(struct closure *cl)
>   {
>   	struct cache_set *c = container_of(cl, struct cache_set, cl);
>   	struct cache *ca;
> -	unsigned int i;
>   
>   	debugfs_remove(c->debug);
>   
> @@ -1672,12 +1664,12 @@ static void cache_set_free(struct closure *cl)
>   	bch_journal_free(c);
>   
>   	mutex_lock(&bch_register_lock);
> -	for_each_cache(ca, c, i)
> -		if (ca) {
> -			ca->set = NULL;
> -			c->cache = NULL;
> -			kobject_put(&ca->kobj);
> -		}
> +	ca = c->cache;
> +	if (ca) {
> +		ca->set = NULL;
> +		c->cache = NULL;
> +		kobject_put(&ca->kobj);
> +	}
>   
>   	bch_bset_sort_state_free(&c->sort);
>   	free_pages((unsigned long) c->uuids, ilog2(meta_bucket_pages(&c->sb)));
> @@ -1703,9 +1695,8 @@ static void cache_set_free(struct closure *cl)
>   static void cache_set_flush(struct closure *cl)
>   {
>   	struct cache_set *c = container_of(cl, struct cache_set, caching);
> -	struct cache *ca;
> +	struct cache *ca = c->cache;
>   	struct btree *b;
> -	unsigned int i;
>   
>   	bch_cache_accounting_destroy(&c->accounting);
>   
> @@ -1730,9 +1721,8 @@ static void cache_set_flush(struct closure *cl)
>   			mutex_unlock(&b->write_lock);
>   		}
>   
> -	for_each_cache(ca, c, i)
> -		if (ca->alloc_thread)
> -			kthread_stop(ca->alloc_thread);
> +	if (ca->alloc_thread)
> +		kthread_stop(ca->alloc_thread);
>   
>   	if (c->journal.cur) {
>   		cancel_delayed_work_sync(&c->journal.work);
> @@ -1973,16 +1963,14 @@ static int run_cache_set(struct cache_set *c)
>   {
>   	const char *err = "cannot allocate memory";
>   	struct cached_dev *dc, *t;
> -	struct cache *ca;
> +	struct cache *ca = c->cache;
>   	struct closure cl;
> -	unsigned int i;
>   	LIST_HEAD(journal);
>   	struct journal_replay *l;
>   
>   	closure_init_stack(&cl);
>   
> -	for_each_cache(ca, c, i)
> -		c->nbuckets += ca->sb.nbuckets;
> +	c->nbuckets = ca->sb.nbuckets;
>   	set_gc_sectors(c);
>   
>   	if (CACHE_SYNC(&c->sb)) {
> @@ -2002,10 +1990,8 @@ static int run_cache_set(struct cache_set *c)
>   		j = &list_entry(journal.prev, struct journal_replay, list)->j;
>   
>   		err = "IO error reading priorities";
> -		for_each_cache(ca, c, i) {
> -			if (prio_read(ca, j->prio_bucket[ca->sb.nr_this_dev]))
> -				goto err;
> -		}
> +		if (prio_read(ca, j->prio_bucket[ca->sb.nr_this_dev]))
> +			goto err;
>   
>   		/*
>   		 * If prio_read() fails it'll call cache_set_error and we'll
> @@ -2049,9 +2035,8 @@ static int run_cache_set(struct cache_set *c)
>   		bch_journal_next(&c->journal);
>   
>   		err = "error starting allocator thread";
> -		for_each_cache(ca, c, i)
> -			if (bch_cache_allocator_start(ca))
> -				goto err;
> +		if (bch_cache_allocator_start(ca))
> +			goto err;
>   
>   		/*
>   		 * First place it's safe to allocate: btree_check() and
> @@ -2070,28 +2055,23 @@ static int run_cache_set(struct cache_set *c)
>   		if (bch_journal_replay(c, &journal))
>   			goto err;
>   	} else {
> -		pr_notice("invalidating existing data\n");
> +		unsigned int j;
>   
> -		for_each_cache(ca, c, i) {
> -			unsigned int j;
> -
> -			ca->sb.keys = clamp_t(int, ca->sb.nbuckets >> 7,
> -					      2, SB_JOURNAL_BUCKETS);
> +		pr_notice("invalidating existing data\n");
> +		ca->sb.keys = clamp_t(int, ca->sb.nbuckets >> 7,
> +					2, SB_JOURNAL_BUCKETS);
>   
> -			for (j = 0; j < ca->sb.keys; j++)
> -				ca->sb.d[j] = ca->sb.first_bucket + j;
> -		}
> +		for (j = 0; j < ca->sb.keys; j++)
> +			ca->sb.d[j] = ca->sb.first_bucket + j;
>   
>   		bch_initial_gc_finish(c);
>   
>   		err = "error starting allocator thread";
> -		for_each_cache(ca, c, i)
> -			if (bch_cache_allocator_start(ca))
> -				goto err;
> +		if (bch_cache_allocator_start(ca))
> +			goto err;
>   
>   		mutex_lock(&c->bucket_lock);
> -		for_each_cache(ca, c, i)
> -			bch_prio_write(ca, true);
> +		bch_prio_write(ca, true);
>   		mutex_unlock(&c->bucket_lock);
>   
>   		err = "cannot allocate new UUID bucket";
> @@ -2467,13 +2447,13 @@ static bool bch_is_open_backing(struct block_device *bdev)
>   static bool bch_is_open_cache(struct block_device *bdev)
>   {
>   	struct cache_set *c, *tc;
> -	struct cache *ca;
> -	unsigned int i;
>   
> -	list_for_each_entry_safe(c, tc, &bch_cache_sets, list)
> -		for_each_cache(ca, c, i)
> -			if (ca->bdev == bdev)
> -				return true;
> +	list_for_each_entry_safe(c, tc, &bch_cache_sets, list) {
> +		struct cache *ca = c->cache;
> +		if (ca->bdev == bdev)
> +			return true;
> +	}
> +
>   	return false;
>   }
>   
> 
I guess one could remove the 'ca' variables above, but that's just a 
minor detail.
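To make that suggestion concrete, here is a rough userspace sketch using
minimal mock types (the struct names mirror bcache, but these are not the
real kernel definitions): with exactly one cache per set, the per-set
check no longer needs a temporary 'ca' variable at all.

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal mock types for illustration -- not the real bcache structures. */
struct block_device { int id; };
struct cache { struct block_device *bdev; };
struct cache_set { struct cache *cache; };

/* With one cache per set, the comparison can read directly through
 * c->cache instead of going via a local 'ca' variable. */
static bool set_uses_bdev(const struct cache_set *c,
                          const struct block_device *bdev)
{
	return c->cache->bdev == bdev;
}
```

The real bch_is_open_cache() would still walk bch_cache_sets with
list_for_each_entry_safe(); only the body of the loop shrinks to a direct
comparison like this one.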

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 04/14] bcache: add set_uuid in struct cache_set
  2020-08-15  4:10 ` [PATCH 04/14] bcache: add set_uuid in struct cache_set Coly Li
@ 2020-08-17  6:15   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:15 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> This patch adds a separate set_uuid[16] in struct cache_set, to store
> the uuid of the cache set. This is the preparation to remove the
> embedded struct cache_sb from struct cache_set.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/bcache.h    |  1 +
>   drivers/md/bcache/debug.c     |  2 +-
>   drivers/md/bcache/super.c     | 24 ++++++++++++------------
>   include/trace/events/bcache.h |  4 ++--
>   4 files changed, 16 insertions(+), 15 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 05/14] bcache: only use block_bytes() on struct cache
  2020-08-15  4:10 ` [PATCH 05/14] bcache: only use block_bytes() on struct cache Coly Li
@ 2020-08-17  6:16   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:16 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> Because struct cache_set and struct cache both have struct cache_sb,
> the macro block_bytes() can be used on both of them. When removing
> the embedded struct cache_sb from struct cache_set, this macro won't be
> used on struct cache_set anymore.
> 
> This patch unifies all block_bytes() usage to be only on struct cache;
> this is one of the preparations to remove the embedded struct cache_sb
> from struct cache_set.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/bcache.h  |  2 +-
>   drivers/md/bcache/btree.c   | 24 ++++++++++++------------
>   drivers/md/bcache/debug.c   |  8 ++++----
>   drivers/md/bcache/journal.c |  8 ++++----
>   drivers/md/bcache/request.c |  2 +-
>   drivers/md/bcache/super.c   |  2 +-
>   drivers/md/bcache/sysfs.c   |  2 +-
>   7 files changed, 24 insertions(+), 24 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 06/14] bcache: remove useless alloc_bucket_pages()
  2020-08-15  4:10 ` [PATCH 06/14] bcache: remove useless alloc_bucket_pages() Coly Li
@ 2020-08-17  6:17   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:17 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> Now no one uses alloc_bucket_pages() anymore, remove it from the bcache code.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/super.c | 3 ---
>   1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 5636648902e3..748b08ab4f11 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1832,9 +1832,6 @@ void bch_cache_set_unregister(struct cache_set *c)
>   	bch_cache_set_stop(c);
>   }
>   
> -#define alloc_bucket_pages(gfp, c)			\
> -	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c))))
> -
>   #define alloc_meta_bucket_pages(gfp, sb)		\
>   	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(meta_bucket_pages(sb))))
>   
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 07/14] bcache: remove useless bucket_pages()
  2020-08-15  4:10 ` [PATCH 07/14] bcache: remove useless bucket_pages() Coly Li
@ 2020-08-17  6:18   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:18 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> It seems alloc_bucket_pages() is the only user of bucket_pages().
> Considering alloc_bucket_pages() is removed from bcache code, it is safe
> to remove the useless macro bucket_pages() now.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/bcache.h | 1 -
>   1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index 29bec61cafbb..48a2585b6bbb 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -757,7 +757,6 @@ struct bbio {
>   #define btree_default_blocks(c)						\
>   	((unsigned int) ((PAGE_SECTORS * (c)->btree_pages) >> (c)->block_bits))
>   
> -#define bucket_pages(c)		((c)->sb.bucket_size / PAGE_SECTORS)
>   #define bucket_bytes(c)		((c)->sb.bucket_size << 9)
>   #define block_bytes(ca)		((ca)->sb.block_size << 9)
>   
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 08/14] bcache: only use bucket_bytes() on struct cache
  2020-08-15  4:10 ` [PATCH 08/14] bcache: only use bucket_bytes() on struct cache Coly Li
@ 2020-08-17  6:19   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:19 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> Because struct cache_set and struct cache both have struct cache_sb,
> macro bucket_bytes() is currently used on both of them. When removing
> the embedded struct cache_sb from struct cache_set, this macro won't be
> used on struct cache_set anymore.
> 
> This patch unifies all bucket_bytes() usage to be only on struct cache;
> this is one of the preparations to remove the embedded struct cache_sb
> from struct cache_set.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/bcache.h | 2 +-
>   drivers/md/bcache/sysfs.c  | 2 +-
>   2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index 48a2585b6bbb..94d4baf4c405 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -757,7 +757,7 @@ struct bbio {
>   #define btree_default_blocks(c)						\
>   	((unsigned int) ((PAGE_SECTORS * (c)->btree_pages) >> (c)->block_bits))
>   
> -#define bucket_bytes(c)		((c)->sb.bucket_size << 9)
> +#define bucket_bytes(ca)	((ca)->sb.bucket_size << 9)
>   #define block_bytes(ca)		((ca)->sb.block_size << 9)
>   
>   static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
> diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
> index b9f524ab5cc8..4bfe98faadcc 100644
> --- a/drivers/md/bcache/sysfs.c
> +++ b/drivers/md/bcache/sysfs.c
> @@ -713,7 +713,7 @@ SHOW(__bch_cache_set)
>   
>   	sysfs_print(synchronous,		CACHE_SYNC(&c->sb));
>   	sysfs_print(journal_delay_ms,		c->journal_delay_ms);
> -	sysfs_hprint(bucket_size,		bucket_bytes(c));
> +	sysfs_hprint(bucket_size,		bucket_bytes(c->cache));
>   	sysfs_hprint(block_size,		block_bytes(c->cache));
>   	sysfs_print(tree_depth,			c->root->level);
>   	sysfs_print(root_usage_percent,		bch_root_usage(c));
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 09/14] bcache: avoid data copy between cache_set->sb and cache->sb
  2020-08-15  4:10 ` [PATCH 09/14] bcache: avoid data copy between cache_set->sb and cache->sb Coly Li
@ 2020-08-17  6:22   ` Hannes Reinecke
  2020-08-17  6:30     ` Coly Li
  0 siblings, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:22 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> struct cache_sb embedded in struct cache_set is only partially used and
> not a real copy of the cache's in-memory super block. When removing the
> embedded cache_set->sb, it is unnecessary to copy data between these two
> in-memory super blocks (cache_set->sb and cache->sb); it is sufficient
> to just use cache->sb.
> 
> This patch removes the data copy between these two in-memory super
> blocks in bch_cache_set_alloc() and bcache_write_super(). In the future,
> except for set_uuid, the cache's super block will be referenced by the
> cache set directly, with no more copying.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/super.c | 22 +++-------------------
>   1 file changed, 3 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 748b08ab4f11..05c5a7e867bb 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -350,16 +350,10 @@ void bcache_write_super(struct cache_set *c)
>   	down(&c->sb_write_mutex);
>   	closure_init(cl, &c->cl);
>   
> -	c->sb.seq++;
> +	ca->sb.seq++;
>   
> -	if (c->sb.version > version)
> -		version = c->sb.version;
> -
> -	ca->sb.version		= version;
> -	ca->sb.seq		= c->sb.seq;
> -	ca->sb.last_mount	= c->sb.last_mount;
> -
> -	SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
> +	if (ca->sb.version < version)
> +		ca->sb.version = version;
>   
>   	bio_init(bio, ca->sb_bv, 1);
>   	bio_set_dev(bio, ca->bdev);
> @@ -1860,16 +1854,6 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
>   	bch_cache_accounting_init(&c->accounting, &c->cl);
>   
>   	memcpy(c->set_uuid, sb->set_uuid, 16);
> -	c->sb.block_size	= sb->block_size;
> -	c->sb.bucket_size	= sb->bucket_size;
> -	c->sb.nr_in_set		= sb->nr_in_set;
> -	c->sb.last_mount	= sb->last_mount;
> -	c->sb.version		= sb->version;
> -	if (c->sb.version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES) {
> -		c->sb.feature_compat = sb->feature_compat;
> -		c->sb.feature_ro_compat = sb->feature_ro_compat;
> -		c->sb.feature_incompat = sb->feature_incompat;
> -	}
>   
>   	c->bucket_bits		= ilog2(sb->bucket_size);
>   	c->block_bits		= ilog2(sb->block_size);
> 
Please fold it into patch 13, as then it's obvious why we don't need 
this copy actions anymore.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 10/14] bcache: don't check seq numbers in register_cache_set()
  2020-08-15  4:10 ` [PATCH 10/14] bcache: don't check seq numbers in register_cache_set() Coly Li
@ 2020-08-17  6:23   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:23 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> In order to update the partial super block of cache set, the seq numbers
> of cache and cache set are checked in register_cache_set(). If cache's
> seq number is larger than cache set's seq number, cache set must update
> its partial super block from cache's super block. It is unncessary when
> the embedded struct cache_sb is removed from struct cache set.
> 
> This patch removes the seq number checking from register_cache_set(),
> because later there will be no such partial super block in struct cache
> set, the cache set will directly reference in-memory super block from
> struct cache. This is a preparation patch for removing embedded struct
> cache_sb from struct cache_set.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/super.c | 15 ---------------
>   1 file changed, 15 deletions(-)
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 05c5a7e867bb..4ba713d0d9b0 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -2160,21 +2160,6 @@ static const char *register_cache_set(struct cache *ca)
>   	    sysfs_create_link(&c->kobj, &ca->kobj, buf))
>   		goto err;
>   
> -	/*
> -	 * A special case is both ca->sb.seq and c->sb.seq are 0,
> -	 * such condition happens on a new created cache device whose
> -	 * super block is never flushed yet. In this case c->sb.version
> -	 * and other members should be updated too, otherwise we will
> -	 * have a mistaken super block version in cache set.
> -	 */
> -	if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
> -		c->sb.version		= ca->sb.version;
> -		memcpy(c->set_uuid, ca->sb.set_uuid, 16);
> -		c->sb.flags             = ca->sb.flags;
> -		c->sb.seq		= ca->sb.seq;
> -		pr_debug("set version = %llu\n", c->sb.version);
> -	}
> -
>   	kobject_get(&ca->kobj);
>   	ca->set = c;
>   	ca->set->cache = ca;
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 11/14] bcache: remove can_attach_cache()
  2020-08-15  4:10 ` [PATCH 11/14] bcache: remove can_attach_cache() Coly Li
@ 2020-08-17  6:24   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:24 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> After removing the embedded struct cache_sb from struct cache_set, cache
> set will directly reference the in-memory super block of struct cache.
> It is unnecessary to compare block_size, bucket_size and nr_in_set from
> the identical in-memory super block in can_attach_cache().
> 
> This is a preparation patch for later removing cache_set->sb from
> struct cache_set.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/super.c | 10 ----------
>   1 file changed, 10 deletions(-)
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 4ba713d0d9b0..a2bba78d78e6 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -2112,13 +2112,6 @@ static int run_cache_set(struct cache_set *c)
>   	return -EIO;
>   }
>   
> -static bool can_attach_cache(struct cache *ca, struct cache_set *c)
> -{
> -	return ca->sb.block_size	== c->sb.block_size &&
> -		ca->sb.bucket_size	== c->sb.bucket_size &&
> -		ca->sb.nr_in_set	== c->sb.nr_in_set;
> -}
> -
>   static const char *register_cache_set(struct cache *ca)
>   {
>   	char buf[12];
> @@ -2130,9 +2123,6 @@ static const char *register_cache_set(struct cache *ca)
>   			if (c->cache)
>   				return "duplicate cache set member";
>   
> -			if (!can_attach_cache(ca, c))
> -				return "cache sb does not match set";
> -
>   			if (!CACHE_SYNC(&ca->sb))
>   				SET_CACHE_SYNC(&c->sb, false);
>   
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 12/14] bcache: check and set sync status on cache's in-memory super block
  2020-08-15  4:10 ` [PATCH 12/14] bcache: check and set sync status on cache's in-memory super block Coly Li
@ 2020-08-17  6:25   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:25 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> Currently the cache's sync status is checked and set on the cache set's
> in-memory partial super block. After removing the embedded struct
> cache_sb from the cache set and referencing the cache's in-memory super
> block from struct cache_set, the sync status can be set and checked
> directly on the cache's super block.
> 
> This patch checks and sets the cache sync status directly on cache's
> in-memory super block. This is a preparation for later removing embedded
> struct cache_sb from struct cache_set.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/alloc.c   | 2 +-
>   drivers/md/bcache/journal.c | 2 +-
>   drivers/md/bcache/super.c   | 7 ++-----
>   drivers/md/bcache/sysfs.c   | 6 +++---
>   4 files changed, 7 insertions(+), 10 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 13/14] bcache: remove embedded struct cache_sb from struct cache_set
  2020-08-15  4:10 ` [PATCH 13/14] bcache: remove embedded struct cache_sb from struct cache_set Coly Li
@ 2020-08-17  6:26   ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:26 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> Since bcache code was merged into the mainline kernel, each cache set
> has only had one single cache in it. The multiple caches framework is
> there but the code is far from complete. Considering that multiple
> copies of cached data can also be stored on e.g. md raid1 devices, it is
> indeed unnecessary to support multiple caches in one cache set.
> 
> The previous preparation patches fix the dependencies for explicitly
> making a cache set only have a single cache. Now we don't have to maintain
> an embedded partial super block in struct cache_set, the in-memory super
> block can be directly referenced from struct cache.
> 
> This patch removes the embedded struct cache_sb from struct cache_set,
> and fixes all locations where the super block was referenced from this
> removed super block by referencing the in-memory super block of struct
> cache.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/alloc.c     |  6 +++---
>   drivers/md/bcache/bcache.h    |  4 +---
>   drivers/md/bcache/btree.c     | 17 +++++++++--------
>   drivers/md/bcache/btree.h     |  2 +-
>   drivers/md/bcache/extents.c   |  6 +++---
>   drivers/md/bcache/features.c  |  4 ++--
>   drivers/md/bcache/io.c        |  2 +-
>   drivers/md/bcache/journal.c   | 11 ++++++-----
>   drivers/md/bcache/request.c   |  4 ++--
>   drivers/md/bcache/super.c     | 19 +++++++++----------
>   drivers/md/bcache/writeback.c |  2 +-
>   11 files changed, 38 insertions(+), 39 deletions(-)
> Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 09/14] bcache: avoid data copy between cache_set->sb and cache->sb
  2020-08-17  6:22   ` Hannes Reinecke
@ 2020-08-17  6:30     ` Coly Li
  0 siblings, 0 replies; 34+ messages in thread
From: Coly Li @ 2020-08-17  6:30 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: linux-bcache, linux-block

On 2020/8/17 14:22, Hannes Reinecke wrote:
> On 8/15/20 6:10 AM, Coly Li wrote:
>> struct cache_sb embedded in struct cache_set is only partially used and
>> not a real copy of the cache's in-memory super block. When removing the
>> embedded cache_set->sb, it is unnecessary to copy data between these two
>> in-memory super blocks (cache_set->sb and cache->sb); it is sufficient
>> to just use cache->sb.
>>
>> This patch removes the data copy between these two in-memory super
>> blocks in bch_cache_set_alloc() and bcache_write_super(). In future
>> except for set_uuid, cache's super block will be referenced by cache
>> set, no copy any more.
>>
>> Signed-off-by: Coly Li <colyli@suse.de>
>> ---
>>   drivers/md/bcache/super.c | 22 +++-------------------
>>   1 file changed, 3 insertions(+), 19 deletions(-)
>>
>> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
>> index 748b08ab4f11..05c5a7e867bb 100644
>> --- a/drivers/md/bcache/super.c
>> +++ b/drivers/md/bcache/super.c
>> @@ -350,16 +350,10 @@ void bcache_write_super(struct cache_set *c)
>>       down(&c->sb_write_mutex);
>>       closure_init(cl, &c->cl);
>>   -    c->sb.seq++;
>> +    ca->sb.seq++;
>>   -    if (c->sb.version > version)
>> -        version = c->sb.version;
>> -
>> -    ca->sb.version        = version;
>> -    ca->sb.seq        = c->sb.seq;
>> -    ca->sb.last_mount    = c->sb.last_mount;
>> -
>> -    SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
>> +    if (ca->sb.version < version)
>> +        ca->sb.version = version;
>>         bio_init(bio, ca->sb_bv, 1);
>>       bio_set_dev(bio, ca->bdev);
>> @@ -1860,16 +1854,6 @@ struct cache_set *bch_cache_set_alloc(struct
>> cache_sb *sb)
>>       bch_cache_accounting_init(&c->accounting, &c->cl);
>>         memcpy(c->set_uuid, sb->set_uuid, 16);
>> -    c->sb.block_size    = sb->block_size;
>> -    c->sb.bucket_size    = sb->bucket_size;
>> -    c->sb.nr_in_set        = sb->nr_in_set;
>> -    c->sb.last_mount    = sb->last_mount;
>> -    c->sb.version        = sb->version;
>> -    if (c->sb.version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES) {
>> -        c->sb.feature_compat = sb->feature_compat;
>> -        c->sb.feature_ro_compat = sb->feature_ro_compat;
>> -        c->sb.feature_incompat = sb->feature_incompat;
>> -    }
>>         c->bucket_bits        = ilog2(sb->bucket_size);
>>       c->block_bits        = ilog2(sb->block_size);
>>
> Please fold it into patch 13, as then it's obvious why we don't need
> this copy actions anymore.

Sure, I will do it in next version series.

Thanks.

Coly Li


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h
  2020-08-15  4:10 ` [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h Coly Li
@ 2020-08-17  6:36   ` Hannes Reinecke
  2020-08-17  7:10     ` Coly Li
  2020-08-17 10:28   ` Christoph Hellwig
  1 sibling, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2020-08-17  6:36 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 8/15/20 6:10 AM, Coly Li wrote:
> struct cache_sb does not exactly map to cache_sb_disk; it is only for
> the in-memory super block and doesn't belong in uapi bcache.h.
> 
> This patch moves the struct cache_sb definition and the dependent
> macros and inline routines from include/uapi/linux/bcache.h to
> drivers/md/bcache/bcache.h; this is the proper location for them.
> 
And that I'm not sure of.
The 'uapi' directory is there to hold the user-visible kernel API.
So the real question is whether the bcache superblock is or should be 
part of the user API or not.
Hence an alternative way would be to detail out the entire superblock in
include/uapi/linux/bcache.h, and remove the definitions from
drivers/md/bcache/bcache.h.

There doesn't seem to be a consistent policy here; some things like MD 
have their superblock in the uapi directory, others like btrfs only have 
the ioctl definitions there.

I'm somewhat in favour of having the superblock definition as part of 
the uapi, as this would make writing external tools like blkid easier.
But then the ultimate decision is yours.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h
  2020-08-17  6:36   ` Hannes Reinecke
@ 2020-08-17  7:10     ` Coly Li
  0 siblings, 0 replies; 34+ messages in thread
From: Coly Li @ 2020-08-17  7:10 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: linux-bcache, linux-block

On 2020/8/17 14:36, Hannes Reinecke wrote:
> On 8/15/20 6:10 AM, Coly Li wrote:
>> struct cache_sb does not exactly map to cache_sb_disk; it is only for
>> the in-memory super block and doesn't belong in uapi bcache.h.
>>
>> This patch moves the struct cache_sb definition and other depending
>> macros and inline routines from include/uapi/linux/bcache.h to
>> drivers/md/bcache/bcache.h, this is the proper location to have them.
>>
> And that I'm not sure of.
> The 'uapi' directory is there to hold the user-visible kernel API.
> So the real question is whether the bcache superblock is or should be
> part of the user API or not.

Now the superblock structure is divided into two formats:
- struct cache_sb_disk
  The exact structure representing the on-disk bcache superblock; the
format is visible to and shared with user space tools.
- struct cache_sb
  This is the in-memory super block, only used internally by bcache
code. It is generated and converted from struct cache_sb_disk, with the
parts unnecessary for in-memory usage removed.

This patch moves the in-memory version: struct cache_sb from the uapi
header to bcache internal header.
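As a hypothetical illustration of this two-struct pattern (trimmed mock
fields, not the actual bcache layout), the split and the read-time
conversion look roughly like:

```c
#include <stdint.h>

/* Mock, trimmed-down fields for illustration -- not the real layout. */

/* Exact on-disk layout, shared with user space tools. */
struct cache_sb_disk_demo {
	uint64_t offset;
	uint64_t version;
	uint16_t block_size;
};

/* In-memory working copy: only what the running code needs. */
struct cache_sb_demo {
	uint64_t version;
	uint32_t block_size;
};

/* Analogous to generating struct cache_sb from struct cache_sb_disk
 * when the super block is read in. */
static void sb_from_disk_demo(struct cache_sb_demo *sb,
                              const struct cache_sb_disk_demo *d)
{
	sb->version    = d->version;
	sb->block_size = d->block_size;  /* widened for in-memory use */
}
```

The direction of the conversion matters: only struct cache_sb_disk has a
stable layout that external tools may parse, while the in-memory struct
is free to change between kernel versions.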

> Hence an alternative way would be to detail out the entire superblock in
> include/uapi/linux/bcache.h, and remove the definitions from
> drivers/md/bcache/bcache.h.
> 

It makes sense to, because the following flag operations are indeed
related to the on-disk format,
BITMASK(CACHE_SYNC,			struct cache_sb, flags, 0, 1);
BITMASK(CACHE_DISCARD,			struct cache_sb, flags, 1, 1);
BITMASK(CACHE_REPLACEMENT,		struct cache_sb, flags, 2, 3);

The suggestion to move struct cache_sb out of the uapi header was from
hch, which makes sense too because struct cache_sb is not part of uapi
anymore.
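For reference, BITMASK() in the bcache header expands each such line into
a getter/setter pair; a simplified userspace rendering (with a demo
struct standing in for struct cache_sb, not the real definition) looks
like:

```c
#include <stdint.h>

/* Simplified rendering of bcache's BITMASK() helper: each invocation
 * generates a getter plus a SET_<name>() setter for a 'size'-bit field
 * starting at bit 'offset' inside the named member. */
#define BITMASK(name, type, field, offset, size)                    \
static inline uint64_t name(const type *k)                          \
{ return (k->field >> (offset)) & ~(~0ULL << (size)); }             \
                                                                    \
static inline void SET_##name(type *k, uint64_t v)                  \
{                                                                   \
	k->field &= ~(~(~0ULL << (size)) << (offset));              \
	k->field |= (v & ~(~0ULL << (size))) << (offset);           \
}

/* Demo struct standing in for struct cache_sb. */
struct cache_sb_demo { uint64_t flags; };

/* Same bit layout as the three flags quoted above. */
BITMASK(CACHE_SYNC,        struct cache_sb_demo, flags, 0, 1)
BITMASK(CACHE_DISCARD,     struct cache_sb_demo, flags, 1, 1)
BITMASK(CACHE_REPLACEMENT, struct cache_sb_demo, flags, 2, 3)
```

Because the generated accessors encode exactly where each flag lives in
the serialized 'flags' word, they are tied to the on-disk layout, which
is why where they are declared matters for the uapi question.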

> There doesn't seem to be a consistent policy here; some things like MD
> have their superblock in the uapi directory, others like btrfs only have
> the ioctl definitions there.
> 
> I'm somewhat in favour of having the superblock definition as part of
> the uapi, as this would make writing external tools like blkid easier.
> But then the ultimate decision is yours.

Yes, struct cache_sb_disk is still in the uapi header, which is 100%
compatible with the original mixed-use struct cache_sb. The "mixed
use" means it was used for both on-disk and in-memory purposes, and for
both struct cache_set and struct cache, which is not good IMHO.

Thanks.

Coly Li


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h
  2020-08-15  4:10 ` [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h Coly Li
  2020-08-17  6:36   ` Hannes Reinecke
@ 2020-08-17 10:28   ` Christoph Hellwig
  2020-08-17 11:48     ` Coly Li
  1 sibling, 1 reply; 34+ messages in thread
From: Christoph Hellwig @ 2020-08-17 10:28 UTC (permalink / raw)
  To: Coly Li; +Cc: linux-bcache, linux-block

On Sat, Aug 15, 2020 at 12:10:43PM +0800, Coly Li wrote:
> struct cache_sb does not exactly map to cache_sb_disk; it is only for
> the in-memory super block and doesn't belong in uapi bcache.h.
> 
> This patch moves the struct cache_sb definition and other depending
> macros and inline routines from include/uapi/linux/bcache.h to
> drivers/md/bcache/bcache.h, this is the proper location to have them.

Honestly, nothing in include/uapi/linux/bcache.h is a UAPI.  It seems
to be a random mix of the on-disk format and internals, but certainly
> nothing that declares a UAPI at all.

Which might not invalidate this patch, but while you touch it you
should probably add a patch that once only the on-disk format is
left the file gets renamed to say drivers/md/bcache/disk_format.h.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h
  2020-08-17 10:28   ` Christoph Hellwig
@ 2020-08-17 11:48     ` Coly Li
  0 siblings, 0 replies; 34+ messages in thread
From: Coly Li @ 2020-08-17 11:48 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-bcache, linux-block

On 2020/8/17 18:28, Christoph Hellwig wrote:
> On Sat, Aug 15, 2020 at 12:10:43PM +0800, Coly Li wrote:
>> struct cache_sb does not exactly map to cache_sb_disk; it is only for
>> the in-memory super block and doesn't belong in uapi bcache.h.
>>
>> This patch moves the struct cache_sb definition and other depending
>> macros and inline routines from include/uapi/linux/bcache.h to
>> drivers/md/bcache/bcache.h, this is the proper location to have them.
> 
> Honestly, nothing in include/uapi/linux/bcache.h is a UAPI.  It seems
> to be a random mix of the on-disk format and internals, but certainly
> nothing that declares a UAPI at all.
> 
> Which might not invalidate this patch, but while you touch it you
> should probably add a patch that once only the on-disk format is
> left the file gets renamed to say drivers/md/bcache/disk_format.h.
> 
OK, then I will keep include/uapi/linux/bcache.h as it is for the moment.

Thanks for the comments.

Coly Li

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/14] bcache: remove for_each_cache()
  2020-08-17  6:13   ` Hannes Reinecke
@ 2020-08-22 11:40     ` Coly Li
  0 siblings, 0 replies; 34+ messages in thread
From: Coly Li @ 2020-08-22 11:40 UTC (permalink / raw)
  To: Hannes Reinecke, linux-bcache; +Cc: linux-block

On 2020/8/17 14:13, Hannes Reinecke wrote:
> On 8/15/20 6:10 AM, Coly Li wrote:
>> Since each cache_set now explicitly has a single cache, for_each_cache()
>> is unnecessary. This patch removes the macro, updates all locations
>> where it is used, and makes sure all code logic remains consistent.
>>
>> Signed-off-by: Coly Li <colyli@suse.de>
>> ---
>>   drivers/md/bcache/alloc.c    |  17 ++-
>>   drivers/md/bcache/bcache.h   |   9 +-
>>   drivers/md/bcache/btree.c    | 103 +++++++---------
>>   drivers/md/bcache/journal.c  | 229 ++++++++++++++++-------------------
>>   drivers/md/bcache/movinggc.c |  58 +++++----
>>   drivers/md/bcache/super.c    | 114 +++++++----------
>>   6 files changed, 236 insertions(+), 294 deletions(-)
>>
>> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
>> index 3385f6add6df..1b8310992dd0 100644
>> --- a/drivers/md/bcache/alloc.c
>> +++ b/drivers/md/bcache/alloc.c
>> @@ -88,7 +88,6 @@ void bch_rescale_priorities(struct cache_set *c, int
>> sectors)
>>       struct cache *ca;
>>       struct bucket *b;
>>       unsigned long next = c->nbuckets * c->sb.bucket_size / 1024;
>> -    unsigned int i;
>>       int r;
>>         atomic_sub(sectors, &c->rescale);
>> @@ -104,14 +103,14 @@ void bch_rescale_priorities(struct cache_set *c,
>> int sectors)
>>         c->min_prio = USHRT_MAX;
>>   -    for_each_cache(ca, c, i)
>> -        for_each_bucket(b, ca)
>> -            if (b->prio &&
>> -                b->prio != BTREE_PRIO &&
>> -                !atomic_read(&b->pin)) {
>> -                b->prio--;
>> -                c->min_prio = min(c->min_prio, b->prio);
>> -            }
>> +    ca = c->cache;
>> +    for_each_bucket(b, ca)
>> +        if (b->prio &&
>> +            b->prio != BTREE_PRIO &&
>> +            !atomic_read(&b->pin)) {
>> +            b->prio--;
>> +            c->min_prio = min(c->min_prio, b->prio);
>> +        }
>>         mutex_unlock(&c->bucket_lock);

[snipped]

>>  
> I guess one could remove the 'ca' variables above, but that's just a
> minor detail.

I was thinking of using a macro BCH_CACHE_SB() to abbreviate c->cache->sb,
but the macro name is 12 characters, which does not make the code any
shorter. I haven't decided whether to add it or not; let me keep the code
in its current shape and decide later.

Thanks.

Coly Li

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2020-08-22 11:40 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-15  4:10 [PATCH 00/14] bcache: remove multiple caches code framework Coly Li
2020-08-15  4:10 ` [PATCH 01/14] bcache: remove 'int n' from parameter list of bch_bucket_alloc_set() Coly Li
2020-08-17  6:08   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 02/14] bcache: explicitly make cache_set only have single cache Coly Li
2020-08-17  6:11   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 03/14] bcache: remove for_each_cache() Coly Li
2020-08-17  6:13   ` Hannes Reinecke
2020-08-22 11:40     ` Coly Li
2020-08-15  4:10 ` [PATCH 04/14] bcache: add set_uuid in struct cache_set Coly Li
2020-08-17  6:15   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 05/14] bcache: only use block_bytes() on struct cache Coly Li
2020-08-17  6:16   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 06/14] bcache: remove useless alloc_bucket_pages() Coly Li
2020-08-17  6:17   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 07/14] bcache: remove useless bucket_pages() Coly Li
2020-08-17  6:18   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 08/14] bcache: only use bucket_bytes() on struct cache Coly Li
2020-08-17  6:19   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 09/14] bcache: avoid data copy between cache_set->sb and cache->sb Coly Li
2020-08-17  6:22   ` Hannes Reinecke
2020-08-17  6:30     ` Coly Li
2020-08-15  4:10 ` [PATCH 10/14] bcache: don't check seq numbers in register_cache_set() Coly Li
2020-08-17  6:23   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 11/14] bcache: remove can_attach_cache() Coly Li
2020-08-17  6:24   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 12/14] bcache: check and set sync status on cache's in-memory super block Coly Li
2020-08-17  6:25   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 13/14] bcache: remove embedded struct cache_sb from struct cache_set Coly Li
2020-08-17  6:26   ` Hannes Reinecke
2020-08-15  4:10 ` [PATCH 14/14] bcache: move struct cache_sb out of uapi bcache.h Coly Li
2020-08-17  6:36   ` Hannes Reinecke
2020-08-17  7:10     ` Coly Li
2020-08-17 10:28   ` Christoph Hellwig
2020-08-17 11:48     ` Coly Li
