All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v3 00/13] bcache: device failure handling improvement
@ 2018-01-14 14:42 Coly Li
  2018-01-14 14:42 ` [PATCH v3 01/13] bcache: set writeback_rate_update_seconds in range [1, 60] seconds Coly Li
                   ` (13 more replies)
  0 siblings, 14 replies; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

Hi maintainers and folks,

This patch set tries to improve bcache device failure handling, covering
both cache device and backing device failures.

The basic idea to handle a failed cache device is,
- Unregister the cache set
- Detach all backing devices which are attached to this cache set
- Stop all the detached bcache devices
- Stop all flash only volumes on the cache set
I call the above process 'cache set retire'. The result of a cache set
retire is that the cache set and its bcache devices are all removed, and
following I/O requests fail immediately, notifying the upper layer or user
space code that the cache device has failed or is disconnected.
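The retire sequence above can be sketched as a minimal userspace model.
All names below (retire_cache_set, model_dev and friends) are hypothetical
and only illustrate the detach/stop ordering, not the real bcache API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace model of the 'cache set retire' steps described
 * above. All names here are illustrative, not the real bcache API. */
enum dev_kind { DEV_BACKING, DEV_FLASH_ONLY };

struct model_dev {
	enum dev_kind kind;
	int attached;	/* backing device attached to the cache set */
	int stopped;	/* bcache device stopped and removed */
};

/* Detach every attached backing device, then stop every bcache device
 * (including flash-only volumes) that belongs to the retired cache set. */
static void retire_cache_set(struct model_dev *devs, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (devs[i].kind == DEV_BACKING && devs[i].attached)
			devs[i].attached = 0;	/* detach */
		devs[i].stopped = 1;		/* stop the device */
	}
}
```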

For a failed backing device, there are two kinds of failures to handle,
- If the device is disconnected, and kernel thread dc->status_update_thread
  finds it has been offline for BACKING_DEV_OFFLINE_TIMEOUT (5) seconds, the
  kernel thread will set dc->io_disable and call bcache_device_stop() to
  stop and remove the bcache device from the system.
- If the device is alive but returns too many I/O errors, once the error
  count exceeds dc->error_limit, bch_cached_dev_error() is called to set
  dc->io_disable and stop the bcache device. Then the broken backing device
  and its bcache device will be removed from the system.
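The two failure triggers above reduce to a simple decision, sketched here
as a hypothetical userspace helper (should_disable_io and the error-limit
value are illustrative; only BACKING_DEV_OFFLINE_TIMEOUT comes from the
patch set):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the two backing-device failure triggers:
 * disable I/O when the device has been offline longer than the timeout,
 * or when accumulated I/O errors exceed the error limit. */
#define BACKING_DEV_OFFLINE_TIMEOUT 5	/* seconds, from the patch set */

static bool should_disable_io(int offline_seconds, int io_errors,
			      int error_limit)
{
	if (offline_seconds >= BACKING_DEV_OFFLINE_TIMEOUT)
		return true;	/* disconnected for too long */
	if (io_errors > error_limit)
		return true;	/* device alive but failing */
	return false;
}
```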

The v3 patch set adds one more patch to fix the detach issue found in the
v2 patch set.

Basic testing covered writethrough, writeback and writearound modes, with
read/write/readwrite workloads; the cache set or bcache device can be
removed either by injecting too many I/O errors or by deleting the device.
For unplugging physical disks, a kernel bug triggers an rcu oops in
__do_softirq() and locks up all following accesses to the disconnected
disk; this blocks my testing.

Open issues:
1, A kernel bug in __do_softirq() when unplugging a hard disk under heavy
   I/O blocks my physical disk disconnection test. This problem is not
   introduced by this patch set; if anyone knows this bug, please give
   me a hint.

Changelog:
v3: fix the detach issue found in the v2 patch set.
v2: fixes all problems found in v1 review.
    add patches to handle backing device failure.
    add one more patch to set writeback_rate_update_seconds range.
    include a patch from Junhui Tang.
v1: the initial version, only handles cache device failure.

Any comments, questions and reviews are warmly welcome. Thanks in advance.

Coly Li
---

Coly Li (12):
  bcache: set writeback_rate_update_seconds in range [1, 60] seconds
  bcache: properly set task state in bch_writeback_thread()
  bcache: set task properly in allocator_wait()
  bcache: fix cached_dev->count usage for bch_cache_set_error()
  bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set
  bcache: stop dc->writeback_rate_update properly
  bcache: set error_limit correctly
  bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags
  bcache: stop all attached bcache devices for a retired cache set
  bcache: add backing_request_endio() for bi_end_io of attached backing
    device I/O
  bcache: add io_disable to struct cached_dev
  bcache: stop bcache device when backing device is offline

Tang Junhui (1):
  bcache: fix inaccurate io state for detached bcache devices

 drivers/md/bcache/alloc.c     |   5 +-
 drivers/md/bcache/bcache.h    |  37 ++++++++-
 drivers/md/bcache/btree.c     |  10 ++-
 drivers/md/bcache/io.c        |  16 +++-
 drivers/md/bcache/journal.c   |   4 +-
 drivers/md/bcache/request.c   | 187 +++++++++++++++++++++++++++++++++++-------
 drivers/md/bcache/super.c     | 134 ++++++++++++++++++++++++++++--
 drivers/md/bcache/sysfs.c     |  45 +++++++++-
 drivers/md/bcache/util.h      |   6 --
 drivers/md/bcache/writeback.c |  99 ++++++++++++++++++----
 drivers/md/bcache/writeback.h |   5 +-
 11 files changed, 474 insertions(+), 74 deletions(-)

-- 
2.15.1

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v3 01/13] bcache: set writeback_rate_update_seconds in range [1, 60] seconds
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:03   ` Hannes Reinecke
  2018-01-14 14:42 ` [PATCH v3 02/13] bcache: properly set task state in bch_writeback_thread() Coly Li
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li

dc->writeback_rate_update_seconds can be set via sysfs and its value can
be set to anything in [1, ULONG_MAX].  It does not make sense to allow
such a large value; 60 seconds is long enough, considering that the
default of 5 seconds has worked well for a long time.

Because dc->writeback_rate_update is a special delayed work, it re-arms
itself inside the delayed work routine update_writeback_rate(). When
stopping it by cancel_delayed_work_sync(), there should be a timeout to
wait and make sure the re-armed delayed work is stopped too. A small
maximum value of dc->writeback_rate_update_seconds also helps to choose a
reasonably small timeout.

This patch limits the sysfs interface to set dc->writeback_rate_update_seconds
in the range of [1, 60] seconds, and replaces the hand-coded numbers with
macros.
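The clamp behavior can be illustrated with a minimal userspace sketch
(clamp_update_secs is a hypothetical helper mirroring what
sysfs_strtoul_clamp() does in the patch; only the macro name and the
[1, 60] range come from the patch itself):

```c
#include <assert.h>

#define WRITEBACK_RATE_UPDATE_SECS_MAX 60

/* Clamp a user-supplied update interval to [1, 60] seconds, as the
 * sysfs store path does after this patch. */
static unsigned long clamp_update_secs(unsigned long v)
{
	if (v < 1)
		return 1;
	if (v > WRITEBACK_RATE_UPDATE_SECS_MAX)
		return WRITEBACK_RATE_UPDATE_SECS_MAX;
	return v;
}
```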

Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/sysfs.c     | 3 +++
 drivers/md/bcache/writeback.c | 2 +-
 drivers/md/bcache/writeback.h | 3 +++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index b4184092c727..a74a752c9e0f 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -215,6 +215,9 @@ STORE(__cached_dev)
 	sysfs_strtoul_clamp(writeback_rate,
 			    dc->writeback_rate.rate, 1, INT_MAX);
 
+	sysfs_strtoul_clamp(writeback_rate_update_seconds,
+			    dc->writeback_rate_update_seconds,
+			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
 	d_strtoul_nonzero(writeback_rate_update_seconds);
 	d_strtoul(writeback_rate_i_term_inverse);
 	d_strtoul_nonzero(writeback_rate_p_term_inverse);
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 51306a19ab03..0ade883b6316 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -652,7 +652,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
 	dc->writeback_rate.rate		= 1024;
 	dc->writeback_rate_minimum	= 8;
 
-	dc->writeback_rate_update_seconds = 5;
+	dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
 	dc->writeback_rate_p_term_inverse = 40;
 	dc->writeback_rate_i_term_inverse = 10000;
 
diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index 66f1c527fa24..587b25599856 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -8,6 +8,9 @@
 #define MAX_WRITEBACKS_IN_PASS  5
 #define MAX_WRITESIZE_IN_PASS   5000	/* *512b */
 
+#define WRITEBACK_RATE_UPDATE_SECS_MAX		60
+#define WRITEBACK_RATE_UPDATE_SECS_DEFAULT	5
+
 /*
  * 14 (16384ths) is chosen here as something that each backing device
  * should be a reasonable fraction of the share, and not to blow up
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 02/13] bcache: properly set task state in bch_writeback_thread()
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
  2018-01-14 14:42 ` [PATCH v3 01/13] bcache: set writeback_rate_update_seconds in range [1, 60] seconds Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:02   ` Hannes Reinecke
  2018-01-14 14:42 ` [PATCH v3 03/13] bcache: set task properly in allocator_wait() Coly Li
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache
  Cc: linux-block, Coly Li, Michael Lyle, Hannes Reinecke, Junhui Tang

Kernel thread routine bch_writeback_thread() has the following code block,

447         down_write(&dc->writeback_lock);
448~450     if (check conditions) {
451                 up_write(&dc->writeback_lock);
452                 set_current_state(TASK_INTERRUPTIBLE);
453
454                 if (kthread_should_stop())
455                         return 0;
456
457                 schedule();
458                 continue;
459         }

If the condition check is true, the task state is set to
TASK_INTERRUPTIBLE and schedule() is called to wait for others to wake it
up.

There are two issues in the current code,
1, The task state is set to TASK_INTERRUPTIBLE after the condition checks.
   If another process changes the condition and calls
   wake_up_process(dc->writeback_thread) in between, at line 452 the task
   state is set back to TASK_INTERRUPTIBLE and the writeback kernel thread
   loses a chance to be woken up.
2, At line 454, if kthread_should_stop() is true, the writeback kernel
   thread returns to kernel/kthread.c:kthread() with state
   TASK_INTERRUPTIBLE and calls do_exit(). It is not good to enter
   do_exit() with task state TASK_INTERRUPTIBLE; in the following code path
   might_sleep() is called and a warning message is reported by
   __might_sleep(): "WARNING: do not call blocking ops when !TASK_RUNNING;
   state=1 set at [xxxx]".

For the first issue, the task state should be set before the condition
checks. Indeed, because dc->writeback_lock is required when modifying all
of the conditions, calling set_current_state() inside the code block where
dc->writeback_lock is held is safe. But this is quite implicit, so I still
move set_current_state() before all the condition checks.

For the second issue, frankly speaking it does not hurt when a kernel
thread exits in TASK_INTERRUPTIBLE state, but this warning message scares
users, making them feel there might be something risky with bcache that
could hurt their data.  Setting the task state to TASK_RUNNING before
returning fixes this problem.
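The lost-wakeup ordering in issue 1 can be shown with a deterministic
userspace model. This is only an illustration of the interleaving; the
real kernel code also involves memory barriers and the scheduler, which
the model omits:

```c
#include <assert.h>

enum { TASK_RUNNING, TASK_INTERRUPTIBLE };

struct task { int state; };

static void wake_up_process(struct task *t)
{
	t->state = TASK_RUNNING;	/* waker makes the task runnable */
}

/* schedule() blocks only if the task is still INTERRUPTIBLE */
static int schedule_would_sleep(const struct task *t)
{
	return t->state == TASK_INTERRUPTIBLE;
}

/* Buggy order: the wakeup arrives between the condition check and
 * set_current_state(), then gets overwritten -> lost wakeup. */
static int buggy_order_sleeps(struct task *t)
{
	t->state = TASK_RUNNING;
	wake_up_process(t);		/* waker fires first */
	t->state = TASK_INTERRUPTIBLE;	/* overwrites RUNNING */
	return schedule_would_sleep(t);	/* sleeps: wakeup was lost */
}

/* Fixed order: the state is set before the check, so a later wakeup
 * leaves the task RUNNING and schedule() returns immediately. */
static int fixed_order_sleeps(struct task *t)
{
	t->state = TASK_INTERRUPTIBLE;	/* set state first */
	wake_up_process(t);		/* waker fires afterwards */
	return schedule_would_sleep(t);	/* does not sleep */
}
```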

Changelog:
v2: fix the race issue in v1 patch.
v1: initial buggy fix.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/writeback.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 0ade883b6316..f1d2fc15abcc 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -564,18 +564,21 @@ static int bch_writeback_thread(void *arg)
 
 	while (!kthread_should_stop()) {
 		down_write(&dc->writeback_lock);
+		set_current_state(TASK_INTERRUPTIBLE);
 		if (!atomic_read(&dc->has_dirty) ||
 		    (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
 		     !dc->writeback_running)) {
 			up_write(&dc->writeback_lock);
-			set_current_state(TASK_INTERRUPTIBLE);
 
-			if (kthread_should_stop())
+			if (kthread_should_stop()) {
+				set_current_state(TASK_RUNNING);
 				return 0;
+			}
 
 			schedule();
 			continue;
 		}
+		set_current_state(TASK_RUNNING);
 
 		searched_full_index = refill_dirty(dc);
 
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 03/13] bcache: set task properly in allocator_wait()
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
  2018-01-14 14:42 ` [PATCH v3 01/13] bcache: set writeback_rate_update_seconds in range [1, 60] seconds Coly Li
  2018-01-14 14:42 ` [PATCH v3 02/13] bcache: properly set task state in bch_writeback_thread() Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:05   ` Hannes Reinecke
  2018-01-14 14:42 ` [PATCH v3 04/13] bcache: fix cached_dev->count usage for bch_cache_set_error() Coly Li
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache
  Cc: linux-block, Coly Li, Michael Lyle, Hannes Reinecke, Junhui Tang

Kernel thread routine bch_allocator_thread() uses the macro
allocator_wait() to wait for a condition, or to quit via do_exit()
when kthread_should_stop() is true. Here is the code block,

284         while (1) {                                                   \
285                 set_current_state(TASK_INTERRUPTIBLE);                \
286                 if (cond)                                             \
287                         break;                                        \
288                                                                       \
289                 mutex_unlock(&(ca)->set->bucket_lock);                \
290                 if (kthread_should_stop())                            \
291                         return 0;                                     \
292                                                                       \
293                 schedule();                                           \
294                 mutex_lock(&(ca)->set->bucket_lock);                  \
295         }                                                             \
296         __set_current_state(TASK_RUNNING);                            \

At line 285, the task state is set to TASK_INTERRUPTIBLE. If at line 290
kthread_should_stop() is true, the kernel thread will terminate, return
to kernel/kthread.c:kthread(), and then call do_exit() in
TASK_INTERRUPTIBLE state. This is not a suggested behavior and a warning
message will be reported by might_sleep() in the do_exit() code path:
"WARNING: do not call blocking ops when !TASK_RUNNING; state=1 set at
[xxxx]".

This patch fixes the problem by setting the task state to TASK_RUNNING
when kthread_should_stop() is true, before the kernel thread returns to
kernel/kthread.c:kthread().

Changelog:
v2: fix the race issue in v1 patch.
v1: initial buggy fix.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/alloc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 6cc6c0f9c3a9..458e1d38577d 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -287,8 +287,10 @@ do {									\
 			break;						\
 									\
 		mutex_unlock(&(ca)->set->bucket_lock);			\
-		if (kthread_should_stop())				\
+		if (kthread_should_stop()) {				\
+			set_current_state(TASK_RUNNING);		\
 			return 0;					\
+		}							\
 									\
 		schedule();						\
 		mutex_lock(&(ca)->set->bucket_lock);			\
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 04/13] bcache: fix cached_dev->count usage for bch_cache_set_error()
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (2 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 03/13] bcache: set task properly in allocator_wait() Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-14 14:42 ` [PATCH v3 05/13] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set Coly Li
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li, Michael Lyle, Junhui Tang

When bcache metadata I/O fails, bcache will call bch_cache_set_error()
to retire the whole cache set. The expected behavior of retiring a cache
set is to unregister the cache set and all backing devices attached to it,
then remove the sysfs entries of the cache set and all attached backing
devices, and finally release the memory of structs cache_set, cache,
cached_dev and bcache_device.

In my testing, when a journal I/O failure is triggered by a disconnected
cache device, sometimes the cache set cannot be retired: its sysfs
entry /sys/fs/bcache/<uuid> still exists and the backing device also
references it. This is not the expected behavior.

When metadata I/O fails, the call sequence to retire the whole cache set
is,
        bch_cache_set_error()
        bch_cache_set_unregister()
        bch_cache_set_stop()
        __cache_set_unregister()     <- called as a callback by calling
                                        closure_queue(&c->caching)
        cache_set_flush()            <- called as a callback when the
                                        refcount of cache_set->caching is 0
        cache_set_free()             <- called as a callback when the
                                        refcount of cache_set->cl is 0
        bch_cache_set_release()      <- called as a callback when the
                                        refcount of cache_set->kobj is 0

I find that if the kernel thread bch_writeback_thread() quits its
while-loop when kthread_should_stop() is true and searched_full_index is
false, the closure callback cache_set_flush() set by continue_at() will
never be called. The result is that bcache fails to retire the whole
cache set.

cache_set_flush() is called when the refcount of closure c->caching
reaches 0, and in function bcache_device_detach() the refcount of closure
c->caching is dropped to 0 by closure_put(). In the metadata error code
path, function bcache_device_detach() is called by
cached_dev_detach_finish(), a callback routine invoked when
cached_dev->count is 0. This refcount is decreased by cached_dev_put().

The above dependency indicates that cache_set_flush() will be called when
the refcount of cache_set->cl is 0, and the refcount of cache_set->cl
reaches 0 when the refcount of cached_dev->count is 0.

The reason why sometimes cached_dev->count is not 0 (when metadata I/O
fails and bch_cache_set_error() is called) is that in
bch_writeback_thread() the refcount of cached_dev is not decreased
properly.

In bch_writeback_thread(), cached_dev_put() is called only when
searched_full_index is true and cached_dev->writeback_keys is empty, i.e.
there is no dirty data on the cache. Most of the time this is correct, but
when bch_writeback_thread() quits the while-loop while the cache is still
dirty, the current code forgets to call cached_dev_put() before the kernel
thread exits. This is why sometimes cache_set_flush() is not executed and
the cache set fails to be retired.

The reason cached_dev_get() is called when the cache device changes from
clean to dirty (balanced by the cached_dev_put() above) is to make sure
that during writeback operations neither the backing device nor the cache
device is released.

Adding the following code in bch_writeback_thread() does not work,
   static int bch_writeback_thread(void *arg)
        }

+       if (atomic_read(&dc->has_dirty))
+               cached_dev_put(dc);
+
        return 0;
 }
because the writeback kernel thread can be woken up and started via the
sysfs entry:
        echo 1 > /sys/block/bcache<N>/bcache/writeback_running
It is difficult to check whether the backing device is dirty without races
and extra locks, so the above modification would introduce potential
refcount underflow in some conditions.

The correct fix is to take the cached dev refcount when creating the
kernel thread, and to put it before the kernel thread exits. Then bcache
does not need to take a cached dev refcount when the cache turns from
clean to dirty, or to put one when the cache turns from dirty to clean.
The writeback kernel thread is always safe to reference data structures
from the cache set, cache and cached device (because a refcount of the
cached device is already taken for it), and no matter whether the kernel
thread is stopped by I/O errors or system reboot, cached_dev->count can
always be used correctly.
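The fixed refcount lifecycle can be sketched with a tiny userspace model
(model_dc and the helper names are hypothetical; they only mirror the
get-at-creation / put-at-exit pairing the patch introduces):

```c
#include <assert.h>

/* Hypothetical model: take one cached_dev reference when the writeback
 * thread is created and drop it when the thread exits, instead of on
 * every clean<->dirty transition. */
struct model_dc {
	int count;			/* models cached_dev->count */
	int writeback_thread_alive;
};

static void writeback_thread_start(struct model_dc *dc)
{
	dc->count++;			/* cached_dev_get() at creation */
	dc->writeback_thread_alive = 1;
}

static void writeback_thread_exit(struct model_dc *dc)
{
	dc->writeback_thread_alive = 0;
	dc->count--;			/* cached_dev_put() before exit */
}
```

With this pairing the refcount is balanced no matter how the thread
stops, which is exactly why the get/put on dirty transitions can be
removed.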

The patch is simple, but understanding how it works is quite complicated.

Changelog:
v2: set dc->writeback_thread to NULL in this patch, as suggested by Hannes.
v1: initial version for review.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/super.c     |  1 -
 drivers/md/bcache/writeback.c | 11 ++++++++---
 drivers/md/bcache/writeback.h |  2 --
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 133b81225ea9..d14e09cce2f6 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1052,7 +1052,6 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
 	if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {
 		bch_sectors_dirty_init(&dc->disk);
 		atomic_set(&dc->has_dirty, 1);
-		refcount_inc(&dc->count);
 		bch_writeback_queue(dc);
 	}
 
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index f1d2fc15abcc..b280c134dd4d 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -572,7 +572,7 @@ static int bch_writeback_thread(void *arg)
 
 			if (kthread_should_stop()) {
 				set_current_state(TASK_RUNNING);
-				return 0;
+				break;
 			}
 
 			schedule();
@@ -585,7 +585,6 @@ static int bch_writeback_thread(void *arg)
 		if (searched_full_index &&
 		    RB_EMPTY_ROOT(&dc->writeback_keys.keys)) {
 			atomic_set(&dc->has_dirty, 0);
-			cached_dev_put(dc);
 			SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN);
 			bch_write_bdev_super(dc, NULL);
 		}
@@ -606,6 +605,9 @@ static int bch_writeback_thread(void *arg)
 		}
 	}
 
+	dc->writeback_thread = NULL;
+	cached_dev_put(dc);
+
 	return 0;
 }
 
@@ -669,10 +671,13 @@ int bch_cached_dev_writeback_start(struct cached_dev *dc)
 	if (!dc->writeback_write_wq)
 		return -ENOMEM;
 
+	cached_dev_get(dc);
 	dc->writeback_thread = kthread_create(bch_writeback_thread, dc,
 					      "bcache_writeback");
-	if (IS_ERR(dc->writeback_thread))
+	if (IS_ERR(dc->writeback_thread)) {
+		cached_dev_put(dc);
 		return PTR_ERR(dc->writeback_thread);
+	}
 
 	schedule_delayed_work(&dc->writeback_rate_update,
 			      dc->writeback_rate_update_seconds * HZ);
diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index 587b25599856..0bba8f1c6cdf 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -105,8 +105,6 @@ static inline void bch_writeback_add(struct cached_dev *dc)
 {
 	if (!atomic_read(&dc->has_dirty) &&
 	    !atomic_xchg(&dc->has_dirty, 1)) {
-		refcount_inc(&dc->count);
-
 		if (BDEV_STATE(&dc->sb) != BDEV_STATE_DIRTY) {
 			SET_BDEV_STATE(&dc->sb, BDEV_STATE_DIRTY);
 			/* XXX: should do this synchronously */
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 05/13] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (3 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 04/13] bcache: fix cached_dev->count usage for bch_cache_set_error() Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:11   ` Hannes Reinecke
  2018-01-14 14:42 ` [PATCH v3 06/13] bcache: stop dc->writeback_rate_update properly Coly Li
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache
  Cc: linux-block, Coly Li, Michael Lyle, Hannes Reinecke, Junhui Tang

In patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
cached_dev_get() is called when creating dc->writeback_thread, and
cached_dev_put() is called when exiting dc->writeback_thread. This
modification works well unless people detach the bcache device manually by
    'echo 1 > /sys/block/bcache<N>/bcache/detach'
because this sysfs interface only calls bch_cached_dev_detach(), which
wakes up dc->writeback_thread but does not stop it. The reason is, before
the patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
inside bch_writeback_thread(), if the cache was not dirty after writeback,
cached_dev_put() was called there. And in cached_dev_make_request(), when
a new write request turned the cache from clean to dirty, cached_dev_get()
was called. Since we no longer operate on dc->count in these locations,
the refcount dc->count cannot be dropped after the cache becomes clean,
and cached_dev_detach_finish() won't be called to detach the bcache
device.

This patch fixes the issue by checking whether BCACHE_DEV_DETACHING is
set inside bch_writeback_thread(). If this bit is set and the cache is
clean (no existing writeback_keys), break the while-loop, call
cached_dev_put() and quit the writeback thread.

Please note that if the cache is still dirty, the writeback thread should
continue to perform writeback even when BCACHE_DEV_DETACHING is set; this
is the original design of manual detach.
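The resulting loop behavior reduces to a three-way decision, sketched here
as a hypothetical userspace helper (next_action and the enum are
illustrative; the flag and field names come from the patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Decision sketch for the writeback thread loop after this patch:
 * while detaching, keep writing back until the cache is clean, then
 * exit; otherwise sleep when there is nothing to do. */
enum wb_action { WB_SLEEP, WB_WRITEBACK, WB_EXIT };

static enum wb_action next_action(bool detaching, bool has_dirty,
				  bool writeback_running)
{
	if (!detaching && (!has_dirty || !writeback_running))
		return WB_SLEEP;	/* nothing to do, wait for wakeup */
	if (detaching && !has_dirty)
		return WB_EXIT;		/* clean while detaching: quit */
	return WB_WRITEBACK;		/* flush dirty data */
}
```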

I compose a separate patch because the patch "bcache: fix cached_dev->count
usage for bch_cache_set_error()" already has a "Reviewed-by:" from Hannes
Reinecke. Also this fix is not trivial, which justifies a separate patch.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/writeback.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index b280c134dd4d..4dbeaaa575bf 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -565,9 +565,15 @@ static int bch_writeback_thread(void *arg)
 	while (!kthread_should_stop()) {
 		down_write(&dc->writeback_lock);
 		set_current_state(TASK_INTERRUPTIBLE);
-		if (!atomic_read(&dc->has_dirty) ||
-		    (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
-		     !dc->writeback_running)) {
+		/*
 +		 * If the bcache device is detaching, skip here and continue
+		 * to perform writeback. Otherwise, if no dirty data on cache,
+		 * or there is dirty data on cache but writeback is disabled,
+		 * the writeback thread should sleep here and wait for others
+		 * to wake up it.
+		 */
+		if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
+		    (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
 			up_write(&dc->writeback_lock);
 
 			if (kthread_should_stop()) {
@@ -587,6 +593,14 @@ static int bch_writeback_thread(void *arg)
 			atomic_set(&dc->has_dirty, 0);
 			SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN);
 			bch_write_bdev_super(dc, NULL);
+			/*
+			 * If bcache device is detaching via sysfs interface,
+			 * writeback thread should stop after there is no dirty
+			 * data on cache. BCACHE_DEV_DETACHING flag is set in
+			 * bch_cached_dev_detach().
+			 */
+			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
+				break;
 		}
 
 		up_write(&dc->writeback_lock);
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 06/13] bcache: stop dc->writeback_rate_update properly
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (4 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 05/13] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-14 14:42 ` [PATCH v3 07/13] bcache: set error_limit correctly Coly Li
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache
  Cc: linux-block, Coly Li, Michael Lyle, Hannes Reinecke, Junhui Tang

struct delayed_work writeback_rate_update in struct cached_dev is a
delayed work which periodically calls update_writeback_rate() (the
interval is defined by dc->writeback_rate_update_seconds).

When a metadata I/O error happens on the cache device, the bcache error
handling routine bch_cache_set_error() will call bch_cache_set_unregister()
to retire the whole cache set. On the unregister code path, this delayed
work is stopped by calling cancel_delayed_work_sync(&dc->writeback_rate_update).

dc->writeback_rate_update is a delayed work unlike any other in bcache:
in its routine update_writeback_rate(), the delayed work re-arms itself.
That means that even after cancel_delayed_work_sync() returns, this
delayed work can still be executed several seconds later, as defined by
dc->writeback_rate_update_seconds.

The problem is, after cancel_delayed_work_sync() returns, the cache set
unregister code path will continue and release the memory of struct
cache_set. When the delayed work is then scheduled to run,
__update_writeback_rate() will reference the already released cache_set
memory and trigger a NULL pointer dereference fault.

This patch introduces two more bcache device flags,
- BCACHE_DEV_WB_RUNNING
  bit set:   the bcache device is in writeback mode and running; it is OK
             for dc->writeback_rate_update to re-arm itself.
  bit clear: the bcache device is trying to stop dc->writeback_rate_update;
             this delayed work should not re-arm itself and should quit.
- BCACHE_DEV_RATE_DW_RUNNING
  bit set:   routine update_writeback_rate() is executing.
  bit clear: routine update_writeback_rate() has quit.

This patch also adds a function cancel_writeback_rate_update_dwork() to
wait for dc->writeback_rate_update to quit before cancelling it by calling
cancel_delayed_work_sync(). In order to avoid a deadlock if
dc->writeback_rate_update quits unexpectedly, after time_out seconds this
function gives up waiting and continues to call cancel_delayed_work_sync()
anyway.

Here is how this patch stops the self re-arming delayed work properly
with the above flags.

update_writeback_rate() sets BCACHE_DEV_RATE_DW_RUNNING at its beginning
and clears BCACHE_DEV_RATE_DW_RUNNING at its end. Before calling
cancel_writeback_rate_update_dwork() clear flag BCACHE_DEV_WB_RUNNING.

Before calling cancel_delayed_work_sync(), wait until flag
BCACHE_DEV_RATE_DW_RUNNING is clear. So when cancel_delayed_work_sync()
is called, dc->writeback_rate_update must either already be re-armed, or
have quit after seeing BCACHE_DEV_WB_RUNNING cleared. In both cases the
delayed work routine update_writeback_rate() won't be executed after
cancel_delayed_work_sync() returns.

Inside update_writeback_rate(), flag BCACHE_DEV_WB_RUNNING is checked
before calling schedule_delayed_work(). If this flag is cleared, it means
someone is about to stop the delayed work. Because flag
BCACHE_DEV_RATE_DW_RUNNING is already set and cancel_delayed_work_sync()
has to wait for this flag to be cleared, we don't need to worry about a
race condition here.

If update_writeback_rate() is scheduled to run after
BCACHE_DEV_RATE_DW_RUNNING is checked and before
cancel_delayed_work_sync() is called in
cancel_writeback_rate_update_dwork(), it is also safe, because at this
moment BCACHE_DEV_WB_RUNNING is cleared with a memory barrier. As
mentioned previously, update_writeback_rate() will see that
BCACHE_DEV_WB_RUNNING is clear and quit immediately.

Because update_writeback_rate() has more dependencies on struct cache_set
memory, dc->writeback_rate_update is not a simple self re-arming delayed
work. After trying many different methods (e.g. holding dc->count, or
using locks), this is the only way I found that works to properly stop the
dc->writeback_rate_update delayed work.
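The two-flag handshake can be modeled deterministically in userspace.
This sketch is illustrative only: it omits the memory barriers, the
sleeping wait with timeout, and the real cancel_delayed_work_sync()
semantics that the kernel version relies on:

```c
#include <assert.h>

/* Hypothetical model of the two-flag handshake described above. */
struct model {
	int wb_running;		/* models BCACHE_DEV_WB_RUNNING */
	int dw_running;		/* models BCACHE_DEV_RATE_DW_RUNNING */
	int rearmed;		/* delayed work is scheduled again */
};

/* One execution of the delayed-work routine. */
static void update_writeback_rate(struct model *m)
{
	m->dw_running = 1;
	/* ... recompute the writeback rate ... */
	if (m->wb_running)
		m->rearmed = 1;	/* schedule_delayed_work() again */
	else
		m->rearmed = 0;	/* someone is stopping us: quit */
	m->dw_running = 0;
}

/* Models cancel_writeback_rate_update_dwork(). */
static void cancel_dwork(struct model *m)
{
	m->wb_running = 0;	/* forbid further re-arming */
	while (m->dw_running)	/* wait for a running instance to quit */
		;		/* (the kernel sleeps, with a timeout) */
	m->rearmed = 0;		/* cancel_delayed_work_sync() */
}
```

After cancel_dwork() returns, any later invocation of the routine sees
wb_running cleared and refuses to re-arm, which is the property the
patch's explanation establishes.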

Changelog:
v2: Try to fix the race issue which is pointed out by Junhui.
v1: The initial version for review

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/bcache.h    |  9 +++++----
 drivers/md/bcache/super.c     | 39 +++++++++++++++++++++++++++++++++++----
 drivers/md/bcache/sysfs.c     |  3 ++-
 drivers/md/bcache/writeback.c | 29 ++++++++++++++++++++++++++++-
 4 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 5e2d4e80198e..88d938c8d027 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -258,10 +258,11 @@ struct bcache_device {
 	struct gendisk		*disk;
 
 	unsigned long		flags;
-#define BCACHE_DEV_CLOSING	0
-#define BCACHE_DEV_DETACHING	1
-#define BCACHE_DEV_UNLINK_DONE	2
-
+#define BCACHE_DEV_CLOSING		0
+#define BCACHE_DEV_DETACHING		1
+#define BCACHE_DEV_UNLINK_DONE		2
+#define BCACHE_DEV_WB_RUNNING		4
+#define BCACHE_DEV_RATE_DW_RUNNING	8
 	unsigned		nr_stripes;
 	unsigned		stripe_size;
 	atomic_t		*stripe_sectors_dirty;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index d14e09cce2f6..6d888e8fea8c 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -899,6 +899,32 @@ void bch_cached_dev_run(struct cached_dev *dc)
 		pr_debug("error creating sysfs link");
 }
 
+/*
+ * If BCACHE_DEV_RATE_DW_RUNNING is set, it means routine of the delayed
+ * work dc->writeback_rate_update is running. Wait until the routine
+ * quits (BCACHE_DEV_RATE_DW_RUNNING is clear), then continue to
+ * cancel it. If BCACHE_DEV_RATE_DW_RUNNING is not clear after time_out
+ * seconds, give up waiting here and continue to cancel it too.
+ */
+static void cancel_writeback_rate_update_dwork(struct cached_dev *dc)
+{
+	int time_out = WRITEBACK_RATE_UPDATE_SECS_MAX * HZ;
+
+	do {
+		if (!test_bit(BCACHE_DEV_RATE_DW_RUNNING,
+			      &dc->disk.flags))
+			break;
+		time_out--;
+		schedule_timeout_interruptible(1);
+	} while (time_out > 0);
+
+	if (time_out == 0)
+		pr_warn("bcache: give up waiting for "
+			"dc->writeback_write_update to quit");
+
+	cancel_delayed_work_sync(&dc->writeback_rate_update);
+}
+
 static void cached_dev_detach_finish(struct work_struct *w)
 {
 	struct cached_dev *dc = container_of(w, struct cached_dev, detach);
@@ -911,7 +937,9 @@ static void cached_dev_detach_finish(struct work_struct *w)
 
 	mutex_lock(&bch_register_lock);
 
-	cancel_delayed_work_sync(&dc->writeback_rate_update);
+	if (test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
+		cancel_writeback_rate_update_dwork(dc);
+
 	if (!IS_ERR_OR_NULL(dc->writeback_thread)) {
 		kthread_stop(dc->writeback_thread);
 		dc->writeback_thread = NULL;
@@ -954,6 +982,7 @@ void bch_cached_dev_detach(struct cached_dev *dc)
 	closure_get(&dc->disk.cl);
 
 	bch_writeback_queue(dc);
+
 	cached_dev_put(dc);
 }
 
@@ -1079,14 +1108,16 @@ static void cached_dev_free(struct closure *cl)
 {
 	struct cached_dev *dc = container_of(cl, struct cached_dev, disk.cl);
 
-	cancel_delayed_work_sync(&dc->writeback_rate_update);
+	mutex_lock(&bch_register_lock);
+
+	if (test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
+		cancel_writeback_rate_update_dwork(dc);
+
 	if (!IS_ERR_OR_NULL(dc->writeback_thread))
 		kthread_stop(dc->writeback_thread);
 	if (dc->writeback_write_wq)
 		destroy_workqueue(dc->writeback_write_wq);
 
-	mutex_lock(&bch_register_lock);
-
 	if (atomic_read(&dc->running))
 		bd_unlink_disk_holder(dc->bdev, dc->disk.disk);
 	bcache_device_free(&dc->disk);
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index a74a752c9e0f..b7166c504cdb 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -304,7 +304,8 @@ STORE(bch_cached_dev)
 		bch_writeback_queue(dc);
 
 	if (attr == &sysfs_writeback_percent)
-		schedule_delayed_work(&dc->writeback_rate_update,
+		if (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
+			schedule_delayed_work(&dc->writeback_rate_update,
 				      dc->writeback_rate_update_seconds * HZ);
 
 	mutex_unlock(&bch_register_lock);
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 4dbeaaa575bf..8f98ef1038d3 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -115,6 +115,21 @@ static void update_writeback_rate(struct work_struct *work)
 					     struct cached_dev,
 					     writeback_rate_update);
 
+	/*
+	 * should check BCACHE_DEV_RATE_DW_RUNNING before calling
+	 * cancel_delayed_work_sync().
+	 */
+	set_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
+	/* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
+	smp_mb();
+
+	if (!test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) {
+		clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
+		/* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
+		smp_mb();
+		return;
+	}
+
 	down_read(&dc->writeback_lock);
 
 	if (atomic_read(&dc->has_dirty) &&
@@ -123,8 +138,18 @@ static void update_writeback_rate(struct work_struct *work)
 
 	up_read(&dc->writeback_lock);
 
-	schedule_delayed_work(&dc->writeback_rate_update,
+	if (test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) {
+		schedule_delayed_work(&dc->writeback_rate_update,
 			      dc->writeback_rate_update_seconds * HZ);
+	}
+
+	/*
+	 * should check BCACHE_DEV_RATE_DW_RUNNING before calling
+	 * cancel_delayed_work_sync().
+	 */
+	clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
+	/* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
+	smp_mb();
 }
 
 static unsigned writeback_delay(struct cached_dev *dc, unsigned sectors)
@@ -675,6 +700,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
 	dc->writeback_rate_p_term_inverse = 40;
 	dc->writeback_rate_i_term_inverse = 10000;
 
+	WARN_ON(test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags));
 	INIT_DELAYED_WORK(&dc->writeback_rate_update, update_writeback_rate);
 }
 
@@ -693,6 +719,7 @@ int bch_cached_dev_writeback_start(struct cached_dev *dc)
 		return PTR_ERR(dc->writeback_thread);
 	}
 
+	WARN_ON(test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags));
 	schedule_delayed_work(&dc->writeback_rate_update,
 			      dc->writeback_rate_update_seconds * HZ);
 
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 07/13] bcache: set error_limit correctly
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (5 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 06/13] bcache: stop dc->writeback_rate_update properly Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-14 14:42 ` [PATCH v3 08/13] bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags Coly Li
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li, Michael Lyle, Junhui Tang

Struct cache uses io_errors for two purposes,
- Error decay: when cache set error_decay is set, io_errors is used to
  generate a small delay when an I/O error happens.
- I/O error counter: in order to generate a big enough value for error
  decay, the I/O error counter value is stored left-shifted by 20 bits
  (a.k.a. IO_ERROR_SHIFT).

In function bch_count_io_errors(), if the I/O error counter reaches the
cache set error limit, bch_cache_set_error() will be called to retire the
whole cache set. But the current code is problematic when checking the
error limit; see the following code piece from bch_count_io_errors(),

 90     if (error) {
 91             char buf[BDEVNAME_SIZE];
 92             unsigned errors = atomic_add_return(1 << IO_ERROR_SHIFT,
 93                                                 &ca->io_errors);
 94             errors >>= IO_ERROR_SHIFT;
 95
 96             if (errors < ca->set->error_limit)
 97                     pr_err("%s: IO error on %s, recovering",
 98                            bdevname(ca->bdev, buf), m);
 99             else
100                     bch_cache_set_error(ca->set,
101                                         "%s: too many IO errors %s",
102                                         bdevname(ca->bdev, buf), m);
103     }

At line 94, errors is right-shifted by IO_ERROR_SHIFT bits, so it is now
the real error counter to compare at line 96. But ca->set->error_limit is
initialized with an amplified value in bch_cache_set_alloc(),
1545         c->error_limit  = 8 << IO_ERROR_SHIFT;

It means that by default, in bch_count_io_errors(), bch_cache_set_error()
won't be called to retire the problematic cache device until 8<<20 errors
have happened. If the average request size is 64KB, it means bcache won't
handle the failed device until 512GB of data has been requested. This is
too large to be an I/O error threshold, so the correct error limit should
be much smaller.

This patch sets the default cache set error limit to 8; then in
bch_count_io_errors(), when the error counter reaches 8 (if it is the
default value), bch_cache_set_error() will be called to retire the whole
cache set. This patch also removes the bit shifting when storing or
showing the io_error_limit value via the sysfs interface.

Nowadays most SSDs handle internal flash failures automatically by
remapping LBA addresses. If an I/O error can be observed by upper layer
code, it is a notable error, because the SSD could not remap the
problematic LBA address to an available flash block. This situation
indicates the whole SSD will fail very soon. Therefore setting 8 as the
default I/O error limit makes sense; it is enough for most cache devices.

Changelog:
v2: add reviewed-by from Hannes.
v1: initial version for review.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/bcache.h | 1 +
 drivers/md/bcache/super.c  | 2 +-
 drivers/md/bcache/sysfs.c  | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 88d938c8d027..7d7512fa4f09 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -663,6 +663,7 @@ struct cache_set {
 		ON_ERROR_UNREGISTER,
 		ON_ERROR_PANIC,
 	}			on_error;
+#define DEFAULT_IO_ERROR_LIMIT 8
 	unsigned		error_limit;
 	unsigned		error_decay;
 
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 6d888e8fea8c..a373648b5d4b 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1583,7 +1583,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 
 	c->congested_read_threshold_us	= 2000;
 	c->congested_write_threshold_us	= 20000;
-	c->error_limit	= 8 << IO_ERROR_SHIFT;
+	c->error_limit	= DEFAULT_IO_ERROR_LIMIT;
 
 	return c;
 err:
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index b7166c504cdb..ba62e987b503 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -560,7 +560,7 @@ SHOW(__bch_cache_set)
 
 	/* See count_io_errors for why 88 */
 	sysfs_print(io_error_halflife,	c->error_decay * 88);
-	sysfs_print(io_error_limit,	c->error_limit >> IO_ERROR_SHIFT);
+	sysfs_print(io_error_limit,	c->error_limit);
 
 	sysfs_hprint(congested,
 		     ((uint64_t) bch_get_congested(c)) << 9);
@@ -660,7 +660,7 @@ STORE(__bch_cache_set)
 	}
 
 	if (attr == &sysfs_io_error_limit)
-		c->error_limit = strtoul_or_return(buf) << IO_ERROR_SHIFT;
+		c->error_limit = strtoul_or_return(buf);
 
 	/* See count_io_errors() for why 88 */
 	if (attr == &sysfs_io_error_halflife)
-- 
2.15.1


* [PATCH v3 08/13] bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (6 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 07/13] bcache: set error_limit correctly Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-14 14:42 ` [PATCH v3 09/13] bcache: stop all attached bcache devices for a retired cache set Coly Li
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache
  Cc: linux-block, Coly Li, Junhui Tang, Michael Lyle, Pavel Vazharov

When too many I/Os fail on the cache device, bch_cache_set_error() is
called in the error handling code path to retire the whole problematic
cache set. If new I/O requests continue to come in and take the refcount
dc->count, the cache set won't be retired immediately, which is a problem.

Furthermore, several kernel threads and self-armed delayed works may
still be running after bch_cache_set_error() is called. It takes quite a
while to wait for them to stop, or they may not stop at all. They also
prevent the cache set from being retired.

The solution in this patch is to add a per-cache-set flag to disable I/O
requests on the cache and all attached backing devices. Then newly coming
I/O requests can be rejected in *_make_request() before taking a refcount,
and kernel threads and self-armed kernel workers can stop very fast when
the flag bit CACHE_SET_IO_DISABLE is set.

Because bcache also does internal I/Os for writeback, garbage collection,
bucket allocation and journaling, this kind of I/O should be disabled too
after bch_cache_set_error() is called. So closure_bio_submit() is modified
to check whether CACHE_SET_IO_DISABLE is set in cache_set->flags. If it is
set, closure_bio_submit() will set bio->bi_status to BLK_STS_IOERR and
return; generic_make_request() won't be called.

A sysfs interface is also added to set or clear CACHE_SET_IO_DISABLE bit
from cache_set->flags, to disable or enable cache set I/O for debugging. It
is helpful to trigger more corner case issues for failed cache device.

Changelog
v2, more changes based on previous review,
- Use CACHE_SET_IO_DISABLE of cache_set->flags, suggested by Junhui.
- Check CACHE_SET_IO_DISABLE in bch_btree_gc() to stop a while-loop, this
  is reported and inspired from the original patch of Pavel Vazharov.
v1, initial version.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Pavel Vazharov <freakpv@gmail.com>
---
 drivers/md/bcache/alloc.c     |  3 ++-
 drivers/md/bcache/bcache.h    | 18 ++++++++++++++++++
 drivers/md/bcache/btree.c     | 10 +++++++---
 drivers/md/bcache/io.c        |  2 +-
 drivers/md/bcache/journal.c   |  4 ++--
 drivers/md/bcache/request.c   | 26 +++++++++++++++++++-------
 drivers/md/bcache/super.c     |  6 +++++-
 drivers/md/bcache/sysfs.c     | 20 ++++++++++++++++++++
 drivers/md/bcache/util.h      |  6 ------
 drivers/md/bcache/writeback.c | 35 +++++++++++++++++++++++++++--------
 10 files changed, 101 insertions(+), 29 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 458e1d38577d..004cc3cc6123 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -287,7 +287,8 @@ do {									\
 			break;						\
 									\
 		mutex_unlock(&(ca)->set->bucket_lock);			\
-		if (kthread_should_stop()) {				\
+		if (kthread_should_stop() ||				\
+		    test_bit(CACHE_SET_IO_DISABLE, &ca->set->flags)) {	\
 			set_current_state(TASK_RUNNING);		\
 			return 0;					\
 		}							\
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 7d7512fa4f09..c41736960045 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -475,10 +475,15 @@ struct gc_stat {
  *
  * CACHE_SET_RUNNING means all cache devices have been registered and journal
  * replay is complete.
+ *
+ * CACHE_SET_IO_DISABLE is set when bcache is stopping the whold cache set, all
+ * external and internal I/O should be denied when this flag is set.
+ *
  */
 #define CACHE_SET_UNREGISTERING		0
 #define	CACHE_SET_STOPPING		1
 #define	CACHE_SET_RUNNING		2
+#define CACHE_SET_IO_DISABLE		4
 
 struct cache_set {
 	struct closure		cl;
@@ -862,6 +867,19 @@ static inline void wake_up_allocators(struct cache_set *c)
 		wake_up_process(ca->alloc_thread);
 }
 
+static inline void closure_bio_submit(struct cache_set *c,
+				      struct bio *bio,
+				      struct closure *cl)
+{
+	closure_get(cl);
+	if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags))) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return;
+	}
+	generic_make_request(bio);
+}
+
 /* Forward declarations */
 
 void bch_count_io_errors(struct cache *, blk_status_t, int, const char *);
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index bf3a48aa9a9a..0a0bc63011b4 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1744,6 +1744,7 @@ static void bch_btree_gc(struct cache_set *c)
 
 	btree_gc_start(c);
 
+	/* if CACHE_SET_IO_DISABLE set, gc thread should stop too */
 	do {
 		ret = btree_root(gc_root, c, &op, &writes, &stats);
 		closure_sync(&writes);
@@ -1751,7 +1752,7 @@ static void bch_btree_gc(struct cache_set *c)
 
 		if (ret && ret != -EAGAIN)
 			pr_warn("gc failed!");
-	} while (ret);
+	} while (ret && !test_bit(CACHE_SET_IO_DISABLE, &c->flags));
 
 	bch_btree_gc_finish(c);
 	wake_up_allocators(c);
@@ -1789,9 +1790,12 @@ static int bch_gc_thread(void *arg)
 
 	while (1) {
 		wait_event_interruptible(c->gc_wait,
-			   kthread_should_stop() || gc_should_run(c));
+			   kthread_should_stop() ||
+			   test_bit(CACHE_SET_IO_DISABLE, &c->flags) ||
+			   gc_should_run(c));
 
-		if (kthread_should_stop())
+		if (kthread_should_stop() ||
+		    test_bit(CACHE_SET_IO_DISABLE, &c->flags))
 			break;
 
 		set_gc_sectors(c);
diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
index a783c5a41ff1..8013ecbcdbda 100644
--- a/drivers/md/bcache/io.c
+++ b/drivers/md/bcache/io.c
@@ -38,7 +38,7 @@ void __bch_submit_bbio(struct bio *bio, struct cache_set *c)
 	bio_set_dev(bio, PTR_CACHE(c, &b->key, 0)->bdev);
 
 	b->submit_time_us = local_clock_us();
-	closure_bio_submit(bio, bio->bi_private);
+	closure_bio_submit(c, bio, bio->bi_private);
 }
 
 void bch_submit_bbio(struct bio *bio, struct cache_set *c,
diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
index a87165c1d8e5..979873641030 100644
--- a/drivers/md/bcache/journal.c
+++ b/drivers/md/bcache/journal.c
@@ -62,7 +62,7 @@ reread:		left = ca->sb.bucket_size - offset;
 		bio_set_op_attrs(bio, REQ_OP_READ, 0);
 		bch_bio_map(bio, data);
 
-		closure_bio_submit(bio, &cl);
+		closure_bio_submit(ca->set, bio, &cl);
 		closure_sync(&cl);
 
 		/* This function could be simpler now since we no longer write
@@ -653,7 +653,7 @@ static void journal_write_unlocked(struct closure *cl)
 	spin_unlock(&c->journal.lock);
 
 	while ((bio = bio_list_pop(&list)))
-		closure_bio_submit(bio, cl);
+		closure_bio_submit(c, bio, cl);
 
 	continue_at(cl, journal_write_done, NULL);
 }
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 1a46b41dac70..02296bda6384 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -747,7 +747,7 @@ static void cached_dev_read_error(struct closure *cl)
 
 		/* XXX: invalidate cache */
 
-		closure_bio_submit(bio, cl);
+		closure_bio_submit(s->iop.c, bio, cl);
 	}
 
 	continue_at(cl, cached_dev_cache_miss_done, NULL);
@@ -872,7 +872,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 	s->cache_miss	= miss;
 	s->iop.bio	= cache_bio;
 	bio_get(cache_bio);
-	closure_bio_submit(cache_bio, &s->cl);
+	closure_bio_submit(s->iop.c, cache_bio, &s->cl);
 
 	return ret;
 out_put:
@@ -880,7 +880,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 out_submit:
 	miss->bi_end_io		= request_endio;
 	miss->bi_private	= &s->cl;
-	closure_bio_submit(miss, &s->cl);
+	closure_bio_submit(s->iop.c, miss, &s->cl);
 	return ret;
 }
 
@@ -945,7 +945,7 @@ static void cached_dev_write(struct cached_dev *dc, struct search *s)
 
 		if ((bio_op(bio) != REQ_OP_DISCARD) ||
 		    blk_queue_discard(bdev_get_queue(dc->bdev)))
-			closure_bio_submit(bio, cl);
+			closure_bio_submit(s->iop.c, bio, cl);
 	} else if (s->iop.writeback) {
 		bch_writeback_add(dc);
 		s->iop.bio = bio;
@@ -960,12 +960,12 @@ static void cached_dev_write(struct cached_dev *dc, struct search *s)
 			flush->bi_private = cl;
 			flush->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 
-			closure_bio_submit(flush, cl);
+			closure_bio_submit(s->iop.c, flush, cl);
 		}
 	} else {
 		s->iop.bio = bio_clone_fast(bio, GFP_NOIO, dc->disk.bio_split);
 
-		closure_bio_submit(bio, cl);
+		closure_bio_submit(s->iop.c, bio, cl);
 	}
 
 	closure_call(&s->iop.cl, bch_data_insert, NULL, cl);
@@ -981,7 +981,7 @@ static void cached_dev_nodata(struct closure *cl)
 		bch_journal_meta(s->iop.c, cl);
 
 	/* If it's a flush, we send the flush to the backing device too */
-	closure_bio_submit(bio, cl);
+	closure_bio_submit(s->iop.c, bio, cl);
 
 	continue_at(cl, cached_dev_bio_complete, NULL);
 }
@@ -996,6 +996,12 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
 	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
 	int rw = bio_data_dir(bio);
 
+	if (unlikely(d->c && test_bit(CACHE_SET_IO_DISABLE, &d->c->flags))) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return BLK_QC_T_NONE;
+	}
+
 	atomic_set(&dc->backing_idle, 0);
 	generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
 
@@ -1112,6 +1118,12 @@ static blk_qc_t flash_dev_make_request(struct request_queue *q,
 	struct bcache_device *d = bio->bi_disk->private_data;
 	int rw = bio_data_dir(bio);
 
+	if (unlikely(d->c && test_bit(CACHE_SET_IO_DISABLE, &d->c->flags))) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return BLK_QC_T_NONE;
+	}
+
 	generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
 
 	s = search_alloc(bio, d);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a373648b5d4b..4204d75aee7b 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -521,7 +521,7 @@ static void prio_io(struct cache *ca, uint64_t bucket, int op,
 	bio_set_op_attrs(bio, op, REQ_SYNC|REQ_META|op_flags);
 	bch_bio_map(bio, ca->disk_buckets);
 
-	closure_bio_submit(bio, &ca->prio);
+	closure_bio_submit(ca->set, bio, &ca->prio);
 	closure_sync(cl);
 }
 
@@ -1349,6 +1349,9 @@ bool bch_cache_set_error(struct cache_set *c, const char *fmt, ...)
 	    test_bit(CACHE_SET_STOPPING, &c->flags))
 		return false;
 
+	if (test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags))
+		pr_warn("bcache: CACHE_SET_IO_DISABLE already set");
+
 	/* XXX: we can be called from atomic context
 	acquire_console_sem();
 	*/
@@ -1584,6 +1587,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 	c->congested_read_threshold_us	= 2000;
 	c->congested_write_threshold_us	= 20000;
 	c->error_limit	= DEFAULT_IO_ERROR_LIMIT;
+	WARN_ON(test_and_clear_bit(CACHE_SET_IO_DISABLE, &c->flags));
 
 	return c;
 err:
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index ba62e987b503..afb051bcfca1 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -92,6 +92,7 @@ read_attribute(partial_stripes_expensive);
 
 rw_attribute(synchronous);
 rw_attribute(journal_delay_ms);
+rw_attribute(io_disable);
 rw_attribute(discard);
 rw_attribute(running);
 rw_attribute(label);
@@ -577,6 +578,8 @@ SHOW(__bch_cache_set)
 	sysfs_printf(gc_always_rewrite,		"%i", c->gc_always_rewrite);
 	sysfs_printf(btree_shrinker_disabled,	"%i", c->shrinker_disabled);
 	sysfs_printf(copy_gc_enabled,		"%i", c->copy_gc_enabled);
+	sysfs_printf(io_disable,		"%i",
+		     test_bit(CACHE_SET_IO_DISABLE, &c->flags));
 
 	if (attr == &sysfs_bset_tree_stats)
 		return bch_bset_print_stats(c, buf);
@@ -666,6 +669,22 @@ STORE(__bch_cache_set)
 	if (attr == &sysfs_io_error_halflife)
 		c->error_decay = strtoul_or_return(buf) / 88;
 
+	if (attr == &sysfs_io_disable) {
+		int v = strtoul_or_return(buf);
+
+		if (v) {
+			if (test_and_set_bit(CACHE_SET_IO_DISABLE,
+					     &c->flags))
+				pr_warn("bcache: CACHE_SET_IO_DISABLE"
+					" already set");
+		} else {
+			if (!test_and_clear_bit(CACHE_SET_IO_DISABLE,
+						&c->flags))
+				pr_warn("bcache: CACHE_SET_IO_DISABLE"
+					" already cleared");
+		}
+	}
+
 	sysfs_strtoul(journal_delay_ms,		c->journal_delay_ms);
 	sysfs_strtoul(verify,			c->verify);
 	sysfs_strtoul(key_merging_disabled,	c->key_merging_disabled);
@@ -748,6 +767,7 @@ static struct attribute *bch_cache_set_internal_files[] = {
 	&sysfs_gc_always_rewrite,
 	&sysfs_btree_shrinker_disabled,
 	&sysfs_copy_gc_enabled,
+	&sysfs_io_disable,
 	NULL
 };
 KTYPE(bch_cache_set_internal);
diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
index 4df4c5c1cab2..7944eea54fa9 100644
--- a/drivers/md/bcache/util.h
+++ b/drivers/md/bcache/util.h
@@ -565,12 +565,6 @@ static inline sector_t bdev_sectors(struct block_device *bdev)
 	return bdev->bd_inode->i_size >> 9;
 }
 
-#define closure_bio_submit(bio, cl)					\
-do {									\
-	closure_get(cl);						\
-	generic_make_request(bio);					\
-} while (0)
-
 uint64_t bch_crc64_update(uint64_t, const void *, size_t);
 uint64_t bch_crc64(const void *, size_t);
 
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 8f98ef1038d3..3d7d8452e0de 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -114,6 +114,7 @@ static void update_writeback_rate(struct work_struct *work)
 	struct cached_dev *dc = container_of(to_delayed_work(work),
 					     struct cached_dev,
 					     writeback_rate_update);
+	struct cache_set *c = dc->disk.c;
 
 	/*
 	 * should check BCACHE_DEV_RATE_DW_RUNNING before calling
@@ -123,7 +124,12 @@ static void update_writeback_rate(struct work_struct *work)
 	/* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
 	smp_mb();
 
-	if (!test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) {
+	/*
+	 * CACHE_SET_IO_DISABLE might be set via sysfs interface,
+	 * check it here too.
+	 */
+	if (!test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags) ||
+	    test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
 		clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
 		/* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
 		smp_mb();
@@ -138,7 +144,12 @@ static void update_writeback_rate(struct work_struct *work)
 
 	up_read(&dc->writeback_lock);
 
-	if (test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) {
+	/*
+	 * CACHE_SET_IO_DISABLE might be set via sysfs interface,
+	 * check it here too.
+	 */
+	if (test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags) &&
+	    !test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
 		schedule_delayed_work(&dc->writeback_rate_update,
 			      dc->writeback_rate_update_seconds * HZ);
 	}
@@ -278,7 +289,7 @@ static void write_dirty(struct closure *cl)
 		bio_set_dev(&io->bio, io->dc->bdev);
 		io->bio.bi_end_io	= dirty_endio;
 
-		closure_bio_submit(&io->bio, cl);
+		closure_bio_submit(io->dc->disk.c, &io->bio, cl);
 	}
 
 	atomic_set(&dc->writeback_sequence_next, next_sequence);
@@ -304,7 +315,7 @@ static void read_dirty_submit(struct closure *cl)
 {
 	struct dirty_io *io = container_of(cl, struct dirty_io, cl);
 
-	closure_bio_submit(&io->bio, cl);
+	closure_bio_submit(io->dc->disk.c, &io->bio, cl);
 
 	continue_at(cl, write_dirty, io->dc->writeback_write_wq);
 }
@@ -330,7 +341,9 @@ static void read_dirty(struct cached_dev *dc)
 
 	next = bch_keybuf_next(&dc->writeback_keys);
 
-	while (!kthread_should_stop() && next) {
+	while (!kthread_should_stop() &&
+	       !test_bit(CACHE_SET_IO_DISABLE, &dc->disk.c->flags) &&
+	       next) {
 		size = 0;
 		nk = 0;
 
@@ -427,7 +440,9 @@ static void read_dirty(struct cached_dev *dc)
 			}
 		}
 
-		while (!kthread_should_stop() && delay) {
+		while (!kthread_should_stop() &&
+		       !test_bit(CACHE_SET_IO_DISABLE, &dc->disk.c->flags) &&
+		       delay) {
 			schedule_timeout_interruptible(delay);
 			delay = writeback_delay(dc, 0);
 		}
@@ -583,11 +598,13 @@ static bool refill_dirty(struct cached_dev *dc)
 static int bch_writeback_thread(void *arg)
 {
 	struct cached_dev *dc = arg;
+	struct cache_set *c = dc->disk.c;
 	bool searched_full_index;
 
 	bch_ratelimit_reset(&dc->writeback_rate);
 
-	while (!kthread_should_stop()) {
+	while (!kthread_should_stop() &&
+	       !test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
 		down_write(&dc->writeback_lock);
 		set_current_state(TASK_INTERRUPTIBLE);
 		/*
@@ -601,7 +618,8 @@ static int bch_writeback_thread(void *arg)
 		    (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
 			up_write(&dc->writeback_lock);
 
-			if (kthread_should_stop()) {
+			if (kthread_should_stop() ||
+			    test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
 				set_current_state(TASK_RUNNING);
 				break;
 			}
@@ -637,6 +655,7 @@ static int bch_writeback_thread(void *arg)
 
 			while (delay &&
 			       !kthread_should_stop() &&
+			       !test_bit(CACHE_SET_IO_DISABLE, &c->flags) &&
 			       !test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
 				delay = schedule_timeout_interruptible(delay);
 
-- 
2.15.1


* [PATCH v3 09/13] bcache: stop all attached bcache devices for a retired cache set
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (7 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 08/13] bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-14 14:42 ` [PATCH v3 10/13] bcache: fix inaccurate io state for detached bcache devices Coly Li
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li, Junhui Tang, Michael Lyle

When there are too many I/O errors on cache device, current bcache code
will retire the whole cache set, and detach all bcache devices. But the
detached bcache devices are not stopped, which is problematic when bcache
is in writeback mode.

If the retired cache set has dirty data for the backing devices, continued
writes to the bcache device will go to the backing device directly. If the
LBA of a write request has a dirty version cached on the cache device, then
next time when the cache device is re-registered and the backing device
re-attached, the stale dirty data on the cache device will be written to
the backing device, overwriting the data that was written directly. This
situation causes serious data corruption.

This patch checks whether the CACHE_SET_IO_DISABLE bit is set in
cache_set->flags in __cache_set_unregister(). If it is set, the cache set
is being unregistered because of too many I/O errors, so all attached
bcache devices will be stopped as well. If it is not set,
__cache_set_unregister() was triggered by writing 1 to the sysfs file
/sys/fs/bcache/<UUID>/bcache/stop. This is an exception because users do
it explicitly; this patch keeps the existing behavior and does not stop
any bcache device.

Even if the failed cache device has no dirty data, stopping the bcache
device is still desired behavior for many Ceph and database users. Their
applications will then report I/O errors due to the disappeared bcache
device, and operations people will know the cache device is broken or
disconnected.

Changelog:
v2: add Reviewed-by from Hannes.
v1: initial version for review.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Cc: Michael Lyle <mlyle@lyle.org>
---
 drivers/md/bcache/super.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 4204d75aee7b..97e3bb8e1aee 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1478,6 +1478,14 @@ static void __cache_set_unregister(struct closure *cl)
 				dc = container_of(c->devices[i],
 						  struct cached_dev, disk);
 				bch_cached_dev_detach(dc);
+				/*
+				 * If we come here by too many I/O errors,
+				 * bcache device should be stopped too, to
+				 * keep data consistency on cache and
+				 * backing devices.
+				 */
+				if (test_bit(CACHE_SET_IO_DISABLE, &c->flags))
+					bcache_device_stop(c->devices[i]);
 			} else {
 				bcache_device_stop(c->devices[i]);
 			}
-- 
2.15.1


* [PATCH v3 10/13] bcache: fix inaccurate io state for detached bcache devices
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (8 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 09/13] bcache: stop all attached bcache devices for a retired cache set Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:27   ` Hannes Reinecke
  2018-01-14 14:42 ` [PATCH v3 11/13] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O Coly Li
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Tang Junhui

From: Tang Junhui <tang.junhui@zte.com.cn>

When we run I/O on a detached device and run iostat to show the I/O status,
it normally looks like the following (some fields omitted):
Device: ... avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd        ... 15.89     0.53    1.82    0.20    2.23   1.81  52.30
bcache0    ... 15.89   115.42    0.00    0.00    0.00   2.40  69.60
But after the I/O has stopped, there are still very large avgqu-sz and
%util values, as below:
Device: ... avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
bcache0   ...      0   5326.32    0.00    0.00    0.00   0.00 100.10

The reason for this issue is that only generic_start_io_acct() is called,
and generic_end_io_acct() is never called, for a detached device in
cached_dev_make_request(). See the code:
//start generic_start_io_acct()
generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
if (cached_dev_get(dc)) {
	//will callback generic_end_io_acct()
}
else {
	//will not call generic_end_io_acct()
}

This patch calls generic_end_io_acct() at the end of I/O for detached
devices, so the I/O state is reported correctly.

(Modified to use GFP_NOIO in kzalloc() by Coly Li)

Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Reviewed-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/request.c | 58 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 51 insertions(+), 7 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 02296bda6384..e09c5ae745be 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -986,6 +986,55 @@ static void cached_dev_nodata(struct closure *cl)
 	continue_at(cl, cached_dev_bio_complete, NULL);
 }
 
+struct detached_dev_io_private {
+	struct bcache_device	*d;
+	unsigned long		start_time;
+	bio_end_io_t		*bi_end_io;
+	void			*bi_private;
+};
+
+static void detatched_dev_end_io(struct bio *bio)
+{
+	struct detached_dev_io_private *ddip;
+
+	ddip = bio->bi_private;
+	bio->bi_end_io = ddip->bi_end_io;
+	bio->bi_private = ddip->bi_private;
+
+	generic_end_io_acct(ddip->d->disk->queue,
+			    bio_data_dir(bio),
+			    &ddip->d->disk->part0, ddip->start_time);
+
+	kfree(ddip);
+
+	bio->bi_end_io(bio);
+}
+
+static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
+{
+	struct detached_dev_io_private *ddip;
+	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
+
+	/*
+	 * no need to call closure_get(&dc->disk.cl),
+	 * because upper layer had already opened bcache device,
+	 * which would call closure_get(&dc->disk.cl)
+	 */
+	ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
+	ddip->d = d;
+	ddip->start_time = jiffies;
+	ddip->bi_end_io = bio->bi_end_io;
+	ddip->bi_private = bio->bi_private;
+	bio->bi_end_io = detatched_dev_end_io;
+	bio->bi_private = ddip;
+
+	if ((bio_op(bio) == REQ_OP_DISCARD) &&
+	    !blk_queue_discard(bdev_get_queue(dc->bdev)))
+		bio->bi_end_io(bio);
+	else
+		generic_make_request(bio);
+}
+
 /* Cached devices - read & write stuff */
 
 static blk_qc_t cached_dev_make_request(struct request_queue *q,
@@ -1028,13 +1077,8 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
 			else
 				cached_dev_read(dc, s);
 		}
-	} else {
-		if ((bio_op(bio) == REQ_OP_DISCARD) &&
-		    !blk_queue_discard(bdev_get_queue(dc->bdev)))
-			bio_endio(bio);
-		else
-			generic_make_request(bio);
-	}
+	} else
+		detached_dev_do_request(d, bio);
 
 	return BLK_QC_T_NONE;
 }
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 11/13] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (9 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 10/13] bcache: fix inaccurate io state for detached bcache devices Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:28   ` Hannes Reinecke
  2018-01-14 14:42 ` [PATCH v3 12/13] bcache: add io_disable to struct cached_dev Coly Li
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache; +Cc: linux-block, Coly Li, Junhui Tang, Michael Lyle

In order to catch I/O errors of the backing device, a separate bi_end_io
callback is required. Then a per-backing-device counter can record the
number of I/O errors and retire the backing device when the counter reaches
a per-backing-device I/O error limit.

This patch adds backing_request_endio() to the bcache backing device I/O
code path as a preparation for more complicated backing device failure
handling. So far there is no real logic change; I make this change a
separate patch to make sure it is stable and reliable for further work.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Cc: Michael Lyle <mlyle@lyle.org>
---
 drivers/md/bcache/request.c   | 95 +++++++++++++++++++++++++++++++++++--------
 drivers/md/bcache/super.c     |  1 +
 drivers/md/bcache/writeback.c |  1 +
 3 files changed, 81 insertions(+), 16 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index e09c5ae745be..ad4cf71f7eab 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -139,6 +139,7 @@ static void bch_data_invalidate(struct closure *cl)
 	}
 
 	op->insert_data_done = true;
+	/* get in bch_data_insert() */
 	bio_put(bio);
 out:
 	continue_at(cl, bch_data_insert_keys, op->wq);
@@ -630,6 +631,38 @@ static void request_endio(struct bio *bio)
 	closure_put(cl);
 }
 
+static void backing_request_endio(struct bio *bio)
+{
+	struct closure *cl = bio->bi_private;
+
+	if (bio->bi_status) {
+		struct search *s = container_of(cl, struct search, cl);
+		/*
+		 * If a bio has REQ_PREFLUSH for writeback mode, it is
+		 * specially assembled in cached_dev_write() for a non-zero
+		 * write request which has REQ_PREFLUSH. We don't set
+		 * s->iop.status on this failure; the status will be decided
+		 * by the result of the bch_data_insert() operation.
+		 */
+		if (unlikely(s->iop.writeback &&
+			     bio->bi_opf & REQ_PREFLUSH)) {
+			char buf[BDEVNAME_SIZE];
+
+			bio_devname(bio, buf);
+			pr_err("Can't flush %s: returned bi_status %i",
+				buf, bio->bi_status);
+		} else {
+			/* set to orig_bio->bi_status in bio_complete() */
+			s->iop.status = bio->bi_status;
+		}
+		s->recoverable = false;
+		/* should count I/O error for backing device here */
+	}
+
+	bio_put(bio);
+	closure_put(cl);
+}
+
 static void bio_complete(struct search *s)
 {
 	if (s->orig_bio) {
@@ -644,13 +677,21 @@ static void bio_complete(struct search *s)
 	}
 }
 
-static void do_bio_hook(struct search *s, struct bio *orig_bio)
+static void do_bio_hook(struct search *s,
+			struct bio *orig_bio,
+			bio_end_io_t *end_io_fn)
 {
 	struct bio *bio = &s->bio.bio;
 
 	bio_init(bio, NULL, 0);
 	__bio_clone_fast(bio, orig_bio);
-	bio->bi_end_io		= request_endio;
+	/*
+	 * bi_end_io can be set separately somewhere else, e.g. the
+	 * variants in,
+	 * - cache_bio->bi_end_io from cached_dev_cache_miss()
+	 * - n->bi_end_io from cache_lookup_fn()
+	 */
+	bio->bi_end_io		= end_io_fn;
 	bio->bi_private		= &s->cl;
 
 	bio_cnt_set(bio, 3);
@@ -676,7 +717,7 @@ static inline struct search *search_alloc(struct bio *bio,
 	s = mempool_alloc(d->c->search, GFP_NOIO);
 
 	closure_init(&s->cl, NULL);
-	do_bio_hook(s, bio);
+	do_bio_hook(s, bio, request_endio);
 
 	s->orig_bio		= bio;
 	s->cache_miss		= NULL;
@@ -743,10 +784,11 @@ static void cached_dev_read_error(struct closure *cl)
 		trace_bcache_read_retry(s->orig_bio);
 
 		s->iop.status = 0;
-		do_bio_hook(s, s->orig_bio);
+		do_bio_hook(s, s->orig_bio, backing_request_endio);
 
 		/* XXX: invalidate cache */
 
+		/* I/O request sent to backing device */
 		closure_bio_submit(s->iop.c, bio, cl);
 	}
 
@@ -859,7 +901,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 	bio_copy_dev(cache_bio, miss);
 	cache_bio->bi_iter.bi_size	= s->insert_bio_sectors << 9;
 
-	cache_bio->bi_end_io	= request_endio;
+	cache_bio->bi_end_io	= backing_request_endio;
 	cache_bio->bi_private	= &s->cl;
 
 	bch_bio_map(cache_bio, NULL);
@@ -872,14 +914,16 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 	s->cache_miss	= miss;
 	s->iop.bio	= cache_bio;
 	bio_get(cache_bio);
+	/* I/O request sent to backing device */
 	closure_bio_submit(s->iop.c, cache_bio, &s->cl);
 
 	return ret;
 out_put:
 	bio_put(cache_bio);
 out_submit:
-	miss->bi_end_io		= request_endio;
+	miss->bi_end_io		= backing_request_endio;
 	miss->bi_private	= &s->cl;
+	/* I/O request sent to backing device */
 	closure_bio_submit(s->iop.c, miss, &s->cl);
 	return ret;
 }
@@ -943,31 +987,48 @@ static void cached_dev_write(struct cached_dev *dc, struct search *s)
 		s->iop.bio = s->orig_bio;
 		bio_get(s->iop.bio);
 
-		if ((bio_op(bio) != REQ_OP_DISCARD) ||
-		    blk_queue_discard(bdev_get_queue(dc->bdev)))
-			closure_bio_submit(s->iop.c, bio, cl);
+		if (bio_op(bio) == REQ_OP_DISCARD &&
+		    !blk_queue_discard(bdev_get_queue(dc->bdev)))
+			goto insert_data;
+
+		/* I/O request sent to backing device */
+		bio->bi_end_io = backing_request_endio;
+		closure_bio_submit(s->iop.c, bio, cl);
+
 	} else if (s->iop.writeback) {
 		bch_writeback_add(dc);
 		s->iop.bio = bio;
 
 		if (bio->bi_opf & REQ_PREFLUSH) {
-			/* Also need to send a flush to the backing device */
-			struct bio *flush = bio_alloc_bioset(GFP_NOIO, 0,
-							     dc->disk.bio_split);
-
+			/*
+			 * Also need to send a flush to the backing
+			 * device, so a failure there is caught too.
+			 */
+			struct bio *flush;
+
+			flush = bio_alloc_bioset(GFP_NOIO, 0,
+						 dc->disk.bio_split);
+			if (!flush) {
+				s->iop.status = BLK_STS_RESOURCE;
+				goto insert_data;
+			}
 			bio_copy_dev(flush, bio);
-			flush->bi_end_io = request_endio;
+			flush->bi_end_io = backing_request_endio;
 			flush->bi_private = cl;
 			flush->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
-
+			/* I/O request sent to backing device */
 			closure_bio_submit(s->iop.c, flush, cl);
 		}
+		bch_writeback_add(dc);
+
 	} else {
 		s->iop.bio = bio_clone_fast(bio, GFP_NOIO, dc->disk.bio_split);
-
+		/* I/O request sent to backing device */
+		bio->bi_end_io = backing_request_endio;
 		closure_bio_submit(s->iop.c, bio, cl);
 	}
 
+insert_data:
 	closure_call(&s->iop.cl, bch_data_insert, NULL, cl);
 	continue_at(cl, cached_dev_write_complete, NULL);
 }
@@ -981,6 +1042,7 @@ static void cached_dev_nodata(struct closure *cl)
 		bch_journal_meta(s->iop.c, cl);
 
 	/* If it's a flush, we send the flush to the backing device too */
+	bio->bi_end_io = backing_request_endio;
 	closure_bio_submit(s->iop.c, bio, cl);
 
 	continue_at(cl, cached_dev_bio_complete, NULL);
@@ -1078,6 +1140,7 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
 				cached_dev_read(dc, s);
 		}
 	} else
+		/* I/O request sent to backing device */
 		detached_dev_do_request(d, bio);
 
 	return BLK_QC_T_NONE;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 97e3bb8e1aee..08a0b541a4da 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -265,6 +265,7 @@ void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent)
 	bio->bi_private = dc;
 
 	closure_get(cl);
+	/* I/O request sent to backing device */
 	__write_super(&dc->sb, bio);
 
 	closure_return_with_destructor(cl, bch_write_bdev_super_unlock);
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 3d7d8452e0de..4ebe0119ea7e 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -289,6 +289,7 @@ static void write_dirty(struct closure *cl)
 		bio_set_dev(&io->bio, io->dc->bdev);
 		io->bio.bi_end_io	= dirty_endio;
 
+		/* I/O request sent to backing device */
 		closure_bio_submit(io->dc->disk.c, &io->bio, cl);
 	}
 
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 12/13] bcache: add io_disable to struct cached_dev
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (10 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 11/13] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:32   ` Hannes Reinecke
  2018-01-14 14:42 ` [PATCH v3 13/13] bcache: stop bcache device when backing device is offline Coly Li
  2018-01-24 22:23 ` [PATCH v3 00/13] bcache: device failure handling improvement Nix
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache
  Cc: linux-block, Coly Li, Michael Lyle, Hannes Reinecke, Junhui Tang

If a bcache device is configured in writeback mode, the current code does
not handle write I/O errors on the backing device properly.

In writeback mode, a write request is written to the cache device and later
flushed to the backing device. If the I/O fails while writing from the
cache device to the backing device, bcache just ignores the error and the
upper layer is NOT notified that the backing device is broken.

This patch tries to handle backing device failure similarly to how cache
device failure is handled,
- Add an error counter 'io_errors' and an error limit 'error_limit' in
  struct cached_dev, plus an io_disable flag in struct cached_dev to
  disable I/O on the problematic backing device.
- When an I/O error happens on the backing device, increase the io_errors
  counter, and if io_errors reaches error_limit, set dc->io_disable to
  true and stop the bcache device.

The result is, if the backing device is broken or disconnected and its I/O
errors reach the error limit, the backing device will be disabled and the
associated bcache device will be removed from the system.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/bcache.h  |  7 +++++++
 drivers/md/bcache/io.c      | 14 ++++++++++++++
 drivers/md/bcache/request.c | 14 ++++++++++++--
 drivers/md/bcache/super.c   | 22 ++++++++++++++++++++++
 drivers/md/bcache/sysfs.c   | 15 ++++++++++++++-
 5 files changed, 69 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index c41736960045..5a811959392d 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -360,6 +360,7 @@ struct cached_dev {
 	unsigned		sequential_cutoff;
 	unsigned		readahead;
 
+	unsigned		io_disable:1;
 	unsigned		verify:1;
 	unsigned		bypass_torture_test:1;
 
@@ -379,6 +380,10 @@ struct cached_dev {
 	unsigned		writeback_rate_i_term_inverse;
 	unsigned		writeback_rate_p_term_inverse;
 	unsigned		writeback_rate_minimum;
+
+#define DEFAULT_CACHED_DEV_ERROR_LIMIT 64
+	atomic_t		io_errors;
+	unsigned		error_limit;
 };
 
 enum alloc_reserve {
@@ -882,6 +887,7 @@ static inline void closure_bio_submit(struct cache_set *c,
 
 /* Forward declarations */
 
+void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio);
 void bch_count_io_errors(struct cache *, blk_status_t, int, const char *);
 void bch_bbio_count_io_errors(struct cache_set *, struct bio *,
 			      blk_status_t, const char *);
@@ -909,6 +915,7 @@ int bch_bucket_alloc_set(struct cache_set *, unsigned,
 			 struct bkey *, int, bool);
 bool bch_alloc_sectors(struct cache_set *, struct bkey *, unsigned,
 		       unsigned, unsigned, bool);
+bool bch_cached_dev_error(struct cached_dev *dc);
 
 __printf(2, 3)
 bool bch_cache_set_error(struct cache_set *, const char *, ...);
diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
index 8013ecbcdbda..7fac97ae036e 100644
--- a/drivers/md/bcache/io.c
+++ b/drivers/md/bcache/io.c
@@ -50,6 +50,20 @@ void bch_submit_bbio(struct bio *bio, struct cache_set *c,
 }
 
 /* IO errors */
+void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio)
+{
+	char buf[BDEVNAME_SIZE];
+	unsigned errors;
+
+	WARN_ONCE(!dc, "NULL pointer of struct cached_dev");
+
+	errors = atomic_add_return(1, &dc->io_errors);
+	if (errors < dc->error_limit)
+		pr_err("%s: IO error on backing device, unrecoverable",
+			bio_devname(bio, buf));
+	else
+		bch_cached_dev_error(dc);
+}
 
 void bch_count_io_errors(struct cache *ca,
 			 blk_status_t error,
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index ad4cf71f7eab..386b388ce296 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -637,6 +637,8 @@ static void backing_request_endio(struct bio *bio)
 
 	if (bio->bi_status) {
 		struct search *s = container_of(cl, struct search, cl);
+		struct cached_dev *dc = container_of(s->d,
+						     struct cached_dev, disk);
 		/*
 		 * If a bio has REQ_PREFLUSH for writeback mode, it is
 		 * speically assembled in cached_dev_write() for a non-zero
@@ -657,6 +659,7 @@ static void backing_request_endio(struct bio *bio)
 		}
 		s->recoverable = false;
 		/* should count I/O error for backing device here */
+		bch_count_backing_io_errors(dc, bio);
 	}
 
 	bio_put(bio);
@@ -1067,8 +1070,14 @@ static void detatched_dev_end_io(struct bio *bio)
 			    bio_data_dir(bio),
 			    &ddip->d->disk->part0, ddip->start_time);
 
-	kfree(ddip);
+	if (bio->bi_status) {
+		struct cached_dev *dc = container_of(ddip->d,
+						     struct cached_dev, disk);
+		/* should count I/O error for backing device here */
+		bch_count_backing_io_errors(dc, bio);
+	}
 
+	kfree(ddip);
 	bio->bi_end_io(bio);
 }
 
@@ -1107,7 +1116,8 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
 	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
 	int rw = bio_data_dir(bio);
 
-	if (unlikely(d->c && test_bit(CACHE_SET_IO_DISABLE, &d->c->flags))) {
+	if (unlikely((d->c && test_bit(CACHE_SET_IO_DISABLE, &d->c->flags)) ||
+		     dc->io_disable)) {
 		bio->bi_status = BLK_STS_IOERR;
 		bio_endio(bio);
 		return BLK_QC_T_NONE;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 08a0b541a4da..14fce3623770 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1188,6 +1188,10 @@ static int cached_dev_init(struct cached_dev *dc, unsigned block_size)
 		max(dc->disk.disk->queue->backing_dev_info->ra_pages,
 		    q->backing_dev_info->ra_pages);
 
+	atomic_set(&dc->io_errors, 0);
+	dc->io_disable = false;
+	dc->error_limit = DEFAULT_CACHED_DEV_ERROR_LIMIT;
+
 	bch_cached_dev_request_init(dc);
 	bch_cached_dev_writeback_init(dc);
 	return 0;
@@ -1339,6 +1343,24 @@ int bch_flash_dev_create(struct cache_set *c, uint64_t size)
 	return flash_dev_run(c, u);
 }
 
+bool bch_cached_dev_error(struct cached_dev *dc)
+{
+	char name[BDEVNAME_SIZE];
+
+	if (!dc || test_bit(BCACHE_DEV_CLOSING, &dc->disk.flags))
+		return false;
+
+	dc->io_disable = true;
+	/* make others know io_disable is true earlier */
+	smp_mb();
+
+	pr_err("bcache: stop %s: too many IO errors on backing device %s\n",
+		dc->disk.name, bdevname(dc->bdev, name));
+
+	bcache_device_stop(&dc->disk);
+	return true;
+}
+
 /* Cache set */
 
 __printf(2, 3)
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index afb051bcfca1..7288927f2a47 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -131,7 +131,9 @@ SHOW(__bch_cached_dev)
 	var_print(writeback_delay);
 	var_print(writeback_percent);
 	sysfs_hprint(writeback_rate,	dc->writeback_rate.rate << 9);
-
+	sysfs_hprint(io_errors,		atomic_read(&dc->io_errors));
+	sysfs_printf(io_error_limit,	"%i", dc->error_limit);
+	sysfs_printf(io_disable,	"%i", dc->io_disable);
 	var_print(writeback_rate_update_seconds);
 	var_print(writeback_rate_i_term_inverse);
 	var_print(writeback_rate_p_term_inverse);
@@ -223,6 +225,14 @@ STORE(__cached_dev)
 	d_strtoul(writeback_rate_i_term_inverse);
 	d_strtoul_nonzero(writeback_rate_p_term_inverse);
 
+	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
+
+	if (attr == &sysfs_io_disable) {
+		int v = strtoul_or_return(buf);
+
+		dc->io_disable = v ? 1 : 0;
+	}
+
 	d_strtoi_h(sequential_cutoff);
 	d_strtoi_h(readahead);
 
@@ -330,6 +340,9 @@ static struct attribute *bch_cached_dev_files[] = {
 	&sysfs_writeback_rate_i_term_inverse,
 	&sysfs_writeback_rate_p_term_inverse,
 	&sysfs_writeback_rate_debug,
+	&sysfs_errors,
+	&sysfs_io_error_limit,
+	&sysfs_io_disable,
 	&sysfs_dirty_data,
 	&sysfs_stripe_size,
 	&sysfs_partial_stripes_expensive,
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v3 13/13] bcache: stop bcache device when backing device is offline
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (11 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 12/13] bcache: add io_disable to struct cached_dev Coly Li
@ 2018-01-14 14:42 ` Coly Li
  2018-01-16  9:33   ` Hannes Reinecke
  2018-01-24 22:23 ` [PATCH v3 00/13] bcache: device failure handling improvement Nix
  13 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-01-14 14:42 UTC (permalink / raw)
  To: linux-bcache
  Cc: linux-block, Coly Li, Michael Lyle, Hannes Reinecke, Junhui Tang

Currently bcache does not handle backing device failure: if the backing
device is offline and disconnected from the system, its bcache device can
still be accessed. If the bcache device is in writeback mode, I/O requests
can even succeed if they hit the cache device. That is to say, when and how
bcache handles an offline backing device is undefined.

This patch tries to handle backing device offline in a rather simple way,
- Add a cached_dev->status_update_thread kernel thread to update the
  backing device status every second.
- Add cached_dev->offline_seconds to record how many seconds the backing
  device has been observed to be offline. If the backing device is offline
  for BACKING_DEV_OFFLINE_TIMEOUT (5) seconds, set dc->io_disable to 1 and
  call bcache_device_stop() to stop the bcache device linked to the
  offline backing device.

Now if a backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT seconds,
its bcache device will be removed; user space applications writing to it
will then get errors immediately and can handle the device failure in time.

This patch is quite simple and does not handle more complicated situations.
Once the bcache device is stopped, users need to recover the backing
device, then register and attach it manually.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
---
 drivers/md/bcache/bcache.h |  2 ++
 drivers/md/bcache/super.c  | 55 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 5a811959392d..9eedb35d01bc 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -338,6 +338,7 @@ struct cached_dev {
 
 	struct keybuf		writeback_keys;
 
+	struct task_struct	*status_update_thread;
 	/*
 	 * Order the write-half of writeback operations strongly in dispatch
 	 * order.  (Maintain LBA order; don't allow reads completing out of
@@ -384,6 +385,7 @@ struct cached_dev {
 #define DEFAULT_CACHED_DEV_ERROR_LIMIT 64
 	atomic_t		io_errors;
 	unsigned		error_limit;
+	unsigned		offline_seconds;
 };
 
 enum alloc_reserve {
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 14fce3623770..85adf1e29d11 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -646,6 +646,11 @@ static int ioctl_dev(struct block_device *b, fmode_t mode,
 		     unsigned int cmd, unsigned long arg)
 {
 	struct bcache_device *d = b->bd_disk->private_data;
+	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
+
+	if (dc->io_disable)
+		return -EIO;
+
 	return d->ioctl(d, mode, cmd, arg);
 }
 
@@ -856,6 +861,45 @@ static void calc_cached_dev_sectors(struct cache_set *c)
 	c->cached_dev_sectors = sectors;
 }
 
+#define BACKING_DEV_OFFLINE_TIMEOUT 5
+static int cached_dev_status_update(void *arg)
+{
+	struct cached_dev *dc = arg;
+	struct request_queue *q;
+	char buf[BDEVNAME_SIZE];
+
+	/*
+	 * If this kernel thread is stopped from outside, quit directly.
+	 * dc->io_disable might be set via the sysfs interface, so check it
+	 * here too.
+	 */
+	while (!kthread_should_stop() && !dc->io_disable) {
+		q = bdev_get_queue(dc->bdev);
+		if (blk_queue_dying(q))
+			dc->offline_seconds++;
+		else
+			dc->offline_seconds = 0;
+
+		if (dc->offline_seconds >= BACKING_DEV_OFFLINE_TIMEOUT) {
+			pr_err("%s: device offline for %d seconds",
+				bdevname(dc->bdev, buf),
+				BACKING_DEV_OFFLINE_TIMEOUT);
+			pr_err("%s: disable I/O request due to backing "
+				"device offline", dc->disk.name);
+			dc->io_disable = true;
+			/* let others know earlier that io_disable is true */
+			smp_mb();
+			bcache_device_stop(&dc->disk);
+			break;
+		}
+
+		schedule_timeout_interruptible(HZ);
+	}
+
+	dc->status_update_thread = NULL;
+	return 0;
+}
+
 void bch_cached_dev_run(struct cached_dev *dc)
 {
 	struct bcache_device *d = &dc->disk;
@@ -898,6 +942,15 @@ void bch_cached_dev_run(struct cached_dev *dc)
 	if (sysfs_create_link(&d->kobj, &disk_to_dev(d->disk)->kobj, "dev") ||
 	    sysfs_create_link(&disk_to_dev(d->disk)->kobj, &d->kobj, "bcache"))
 		pr_debug("error creating sysfs link");
+
+	dc->status_update_thread = kthread_run(cached_dev_status_update,
+					       dc,
+					      "bcache_status_update");
+	if (IS_ERR(dc->status_update_thread)) {
+		pr_warn("bcache: failed to create bcache_status_update "
+			"kthread, continue to run without monitoring backing "
+			"device status");
+	}
 }
 
 /*
@@ -1118,6 +1171,8 @@ static void cached_dev_free(struct closure *cl)
 		kthread_stop(dc->writeback_thread);
 	if (dc->writeback_write_wq)
 		destroy_workqueue(dc->writeback_write_wq);
+	if (!IS_ERR_OR_NULL(dc->status_update_thread))
+		kthread_stop(dc->status_update_thread);
 
 	if (atomic_read(&dc->running))
 		bd_unlink_disk_holder(dc->bdev, dc->disk.disk);
-- 
2.15.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 02/13] bcache: properly set task state in bch_writeback_thread()
  2018-01-14 14:42 ` [PATCH v3 02/13] bcache: properly set task state in bch_writeback_thread() Coly Li
@ 2018-01-16  9:02   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:02 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block, Michael Lyle, Junhui Tang

On 01/14/2018 03:42 PM, Coly Li wrote:
> Kernel thread routine bch_writeback_thread() has the following code block,
> 
> 447         down_write(&dc->writeback_lock);
> 448~450     if (check conditions) {
> 451                 up_write(&dc->writeback_lock);
> 452                 set_current_state(TASK_INTERRUPTIBLE);
> 453
> 454                 if (kthread_should_stop())
> 455                         return 0;
> 456
> 457                 schedule();
> 458                 continue;
> 459         }
> 
> If the condition check is true, the task state is set to TASK_INTERRUPTIBLE
> and schedule() is called to wait for others to wake it up.
> 
> There are 2 issues in the current code,
> 1, Task state is set to TASK_INTERRUPTIBLE after the condition checks. If
>    another process changes the condition and calls wake_up_process(dc->
>    writeback_thread), then at line 452 the task state is set back to
>    TASK_INTERRUPTIBLE and the writeback kernel thread will lose a chance
>    to be woken up.
> 2, At line 454 if kthread_should_stop() is true, the writeback kernel
>    thread will return to kernel/kthread.c:kthread() with TASK_INTERRUPTIBLE
>    state and call do_exit(). It is not good to enter do_exit() with task
>    state TASK_INTERRUPTIBLE; in the following code path might_sleep() is
>    called and a warning message is reported by __might_sleep(): "WARNING:
>    do not call blocking ops when !TASK_RUNNING; state=1 set at [xxxx]".
> 
> For the first issue, the task state should be set before the condition
> checks. Indeed, because dc->writeback_lock is required when modifying all
> the conditions, calling set_current_state() inside the code block where
> dc->writeback_lock is held is safe. But this is quite implicit, so I still
> move set_current_state() before all the condition checks.
> 
> For the second issue, frankly speaking it does not hurt when a kernel
> thread exits in TASK_INTERRUPTIBLE state, but this warning message scares
> users, making them feel there might be something risky with bcache that
> could hurt their data. Setting the task state to TASK_RUNNING before
> returning fixes this problem.
> 
> Changelog:
> v2: fix the race issue in v1 patch.
> v1: initial buggy fix.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Michael Lyle <mlyle@lyle.org>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Junhui Tang <tang.junhui@zte.com.cn>
> ---
>  drivers/md/bcache/writeback.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 01/13] bcache: set writeback_rate_update_seconds in range [1, 60] seconds
  2018-01-14 14:42 ` [PATCH v3 01/13] bcache: set writeback_rate_update_seconds in range [1, 60] seconds Coly Li
@ 2018-01-16  9:03   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:03 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block

On 01/14/2018 03:42 PM, Coly Li wrote:
> dc->writeback_rate_update_seconds can be set via sysfs and its value can
> be set to [1, ULONG_MAX].  It does not make sense to set such a large
> value; 60 seconds is long enough, considering the default 5 seconds has
> worked well for a long time.
> 
> Because dc->writeback_rate_update is a special delayed work, it re-arms
> itself inside the delayed work routine update_writeback_rate(). When
> stopping it by cancel_delayed_work_sync(), there should be a timeout to
> wait and make sure the re-armed delayed work is stopped too. A small max
> value of dc->writeback_rate_update_seconds is also helpful to decide a
> reasonable small timeout.
> 
> This patch limits sysfs interface to set dc->writeback_rate_update_seconds
> in range of [1, 60] seconds, and replaces the hand-coded number by macros.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>  drivers/md/bcache/sysfs.c     | 3 +++
>  drivers/md/bcache/writeback.c | 2 +-
>  drivers/md/bcache/writeback.h | 3 +++
>  3 files changed, 7 insertions(+), 1 deletion(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 03/13] bcache: set task properly in allocator_wait()
  2018-01-14 14:42 ` [PATCH v3 03/13] bcache: set task properly in allocator_wait() Coly Li
@ 2018-01-16  9:05   ` Hannes Reinecke
  2018-01-16  9:29     ` Coly Li
  0 siblings, 1 reply; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:05 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block, Michael Lyle, Junhui Tang

On 01/14/2018 03:42 PM, Coly Li wrote:
> Kernel thread routine bch_allocator_thread() references macro
> allocator_wait() to wait for a condition or quit to do_exit()
> when kthread_should_stop() is true. Here is the code block,
> 
> 284         while (1) {                                                   \
> 285                 set_current_state(TASK_INTERRUPTIBLE);                \
> 286                 if (cond)                                             \
> 287                         break;                                        \
> 288                                                                       \
> 289                 mutex_unlock(&(ca)->set->bucket_lock);                \
> 290                 if (kthread_should_stop())                            \
> 291                         return 0;                                     \
> 292                                                                       \
> 293                 schedule();                                           \
> 294                 mutex_lock(&(ca)->set->bucket_lock);                  \
> 295         }                                                             \
> 296         __set_current_state(TASK_RUNNING);                            \
> 
> At line 285, the task state is set to TASK_INTERRUPTIBLE. If at line 290
> kthread_should_stop() is true, the kernel thread will terminate and return
> to kernel/kthread.c:kthread(), which then calls do_exit() with the task
> still in TASK_INTERRUPTIBLE state. This is not a suggested behavior and a
> warning message will be reported by might_sleep() in the do_exit() code
> path: "WARNING: do not call blocking ops when !TASK_RUNNING; state=1 set
> at [xxxx]".
> 
> This patch fixes this problem by setting the task state to TASK_RUNNING
> when kthread_should_stop() is true, before the kernel thread returns to
> kernel/kthread.c:kthread().
> 
> Changelog:
> v2: fix the race issue in v1 patch.
> v1: initial buggy fix.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Michael Lyle <mlyle@lyle.org>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Junhui Tang <tang.junhui@zte.com.cn>
> ---
>  drivers/md/bcache/alloc.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
> index 6cc6c0f9c3a9..458e1d38577d 100644
> --- a/drivers/md/bcache/alloc.c
> +++ b/drivers/md/bcache/alloc.c
> @@ -287,8 +287,10 @@ do {									\
>  			break;						\
>  									\
>  		mutex_unlock(&(ca)->set->bucket_lock);			\
> -		if (kthread_should_stop())				\
> +		if (kthread_should_stop()) {				\
> +			set_current_state(TASK_RUNNING);		\
>  			return 0;					\
> +		}							\
>  									\
>  		schedule();						\
>  		mutex_lock(&(ca)->set->bucket_lock);			\
> 
Might be an idea to merge it with the previous patch.

Other than that:

Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 05/13] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set
  2018-01-14 14:42 ` [PATCH v3 05/13] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set Coly Li
@ 2018-01-16  9:11   ` Hannes Reinecke
  2018-01-26  6:21     ` Coly Li
  0 siblings, 1 reply; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:11 UTC (permalink / raw)
  To: Coly Li, linux-bcache
  Cc: linux-block, Michael Lyle, Hannes Reinecke, Huijun Tang

On 01/14/2018 03:42 PM, Coly Li wrote:
> In patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
> cached_dev_get() is called when creating dc->writeback_thread, and
> cached_dev_put() is called when exiting dc->writeback_thread. This
> modification works well unless people detach the bcache device manually by
>     'echo 1 > /sys/block/bcache<N>/bcache/detach'
> because this sysfs interface only calls bch_cached_dev_detach(), which
> wakes up dc->writeback_thread but does not stop it. Before patch
> "bcache: fix cached_dev->count usage for bch_cache_set_error()",
> cached_dev_put() was called inside bch_writeback_thread() when the cache
> was no longer dirty after writeback, and cached_dev_get() was called in
> cached_dev_make_request() when a new write request turned the cache from
> clean to dirty. Since we no longer operate on dc->count in these
> locations, the refcount cannot be dropped after the cache becomes clean,
> and cached_dev_detach_finish() is never called to detach the bcache
> device.
> 
> This patch fixes the issue by checking whether BCACHE_DEV_DETACHING is
> set inside bch_writeback_thread(). If this bit is set and cache is clean
> (no existing writeback_keys), break the while-loop, call cached_dev_put()
> and quit the writeback thread.
> 
> Please note that if the cache is still dirty, the writeback thread should
> continue to perform writeback even when BCACHE_DEV_DETACHING is set; this
> is the original design of manual detach.
> 
> I compose a separate patch because patch "bcache: fix cached_dev->count
> usage for bch_cache_set_error()" already has a "Reviewed-by:" from Hannes
> Reinecke. Also this fix is not trivial, which is good for a separate patch.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Michael Lyle <mlyle@lyle.org>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Huijun Tang <tang.junhui@zte.com.cn>
> ---
>  drivers/md/bcache/writeback.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
> index b280c134dd4d..4dbeaaa575bf 100644
> --- a/drivers/md/bcache/writeback.c
> +++ b/drivers/md/bcache/writeback.c
> @@ -565,9 +565,15 @@ static int bch_writeback_thread(void *arg)
>  	while (!kthread_should_stop()) {
>  		down_write(&dc->writeback_lock);
>  		set_current_state(TASK_INTERRUPTIBLE);
> -		if (!atomic_read(&dc->has_dirty) ||
> -		    (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
> -		     !dc->writeback_running)) {
> +		/*
> +		 * If the bcache device is detaching, skip here and continue
> +		 * to perform writeback. Otherwise, if no dirty data on cache,
> +		 * or there is dirty data on cache but writeback is disabled,
> +		 * the writeback thread should sleep here and wait for others
> +		 * to wake it up.
> +		 */
> +		if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
> +		    (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
>  			up_write(&dc->writeback_lock);
>  
>  			if (kthread_should_stop()) {
> @@ -587,6 +593,14 @@ static int bch_writeback_thread(void *arg)
>  			atomic_set(&dc->has_dirty, 0);
>  			SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN);
>  			bch_write_bdev_super(dc, NULL);
> +			/*
> +			 * If bcache device is detaching via sysfs interface,
> +			 * writeback thread should stop after there is no dirty
> +			 * data on cache. BCACHE_DEV_DETACHING flag is set in
> +			 * bch_cached_dev_detach().
> +			 */
> +			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
> +				break;
>  		}
>  
>  		up_write(&dc->writeback_lock);
> 
Checking several atomic flags in one statement renders the atomicity
pretty much pointless; you need to protect the 'if' clause with some lock,
or check just _one_ atomic value.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 10/13] bcache: fix inaccurate io state for detached bcache devices
  2018-01-14 14:42 ` [PATCH v3 10/13] bcache: fix inaccurate io state for detached bcache devices Coly Li
@ 2018-01-16  9:27   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:27 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block, Tang Junhui

On 01/14/2018 03:42 PM, Coly Li wrote:
> From: Tang Junhui <tang.junhui@zte.com.cn>
> 
> When we run I/O on a detached device and run iostat to show the I/O
> status, normally it will show like below (some fields omitted):
> Device: ... avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdd        ... 15.89     0.53    1.82    0.20    2.23   1.81  52.30
> bcache0    ... 15.89   115.42    0.00    0.00    0.00   2.40  69.60
> but after I/O stops, there are still very big avgqu-sz and %util
> values, as below:
> Device: ... avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> bcache0   ...      0   5326.32    0.00    0.00    0.00   0.00 100.10
> 
> The reason for this issue is that only generic_start_io_acct() is called,
> and generic_end_io_acct() is never called, for a detached device in
> cached_dev_make_request(). See the code:
> //start generic_start_io_acct()
> generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
> if (cached_dev_get(dc)) {
> 	//will callback generic_end_io_acct()
> }
> else {
> 	//will not call generic_end_io_acct()
> }
> 
> This patch calls generic_end_io_acct() at the end of I/O for detached
> devices, so we can show the I/O state correctly.
> 
> (Modified to use GFP_NOIO in kzalloc() by Coly Li)
> 
> Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
> Reviewed-by: Coly Li <colyli@suse.de>
> ---
>  drivers/md/bcache/request.c | 58 +++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 51 insertions(+), 7 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 11/13] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O
  2018-01-14 14:42 ` [PATCH v3 11/13] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O Coly Li
@ 2018-01-16  9:28   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:28 UTC (permalink / raw)
  To: Coly Li, linux-bcache; +Cc: linux-block, Junhui Tang, Michael Lyle

On 01/14/2018 03:42 PM, Coly Li wrote:
> In order to catch I/O errors of the backing device, a separate bi_end_io
> callback is required. Then a per-backing-device counter can record the
> number of I/O errors and retire the backing device if the counter reaches
> a per-backing-device I/O error limit.
> 
> This patch adds backing_request_endio() to the bcache backing device I/O
> code path as a preparation for the more complicated backing device failure
> handling to come. So far there is no real logic change; I make this a
> separate patch to make sure it is stable and reliable for further work.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Junhui Tang <tang.junhui@zte.com.cn>
> Cc: Michael Lyle <mlyle@lyle.org>
> ---
>  drivers/md/bcache/request.c   | 95 +++++++++++++++++++++++++++++++++++--------
>  drivers/md/bcache/super.c     |  1 +
>  drivers/md/bcache/writeback.c |  1 +
>  3 files changed, 81 insertions(+), 16 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 03/13] bcache: set task properly in allocator_wait()
  2018-01-16  9:05   ` Hannes Reinecke
@ 2018-01-16  9:29     ` Coly Li
  0 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-16  9:29 UTC (permalink / raw)
  To: Hannes Reinecke, linux-bcache; +Cc: linux-block, Michael Lyle, Junhui Tang

On 16/01/2018 5:05 PM, Hannes Reinecke wrote:
> On 01/14/2018 03:42 PM, Coly Li wrote:
>> Kernel thread routine bch_allocator_thread() references macro
>> allocator_wait() to wait for a condition or quit to do_exit()
>> when kthread_should_stop() is true. Here is the code block,
>>
>> 284         while (1) {                                                   \
>> 285                 set_current_state(TASK_INTERRUPTIBLE);                \
>> 286                 if (cond)                                             \
>> 287                         break;                                        \
>> 288                                                                       \
>> 289                 mutex_unlock(&(ca)->set->bucket_lock);                \
>> 290                 if (kthread_should_stop())                            \
>> 291                         return 0;                                     \
>> 292                                                                       \
>> 293                 schedule();                                           \
>> 294                 mutex_lock(&(ca)->set->bucket_lock);                  \
>> 295         }                                                             \
>> 296         __set_current_state(TASK_RUNNING);                            \
>>
>> At line 285, the task state is set to TASK_INTERRUPTIBLE. If at line 290
>> kthread_should_stop() is true, the kernel thread will terminate and return
>> to kernel/kthread.c:kthread(), which then calls do_exit() with the task
>> still in TASK_INTERRUPTIBLE state. This is not a suggested behavior and a
>> warning message will be reported by might_sleep() in the do_exit() code
>> path: "WARNING: do not call blocking ops when !TASK_RUNNING; state=1 set
>> at [xxxx]".
>>
>> This patch fixes this problem by setting the task state to TASK_RUNNING
>> when kthread_should_stop() is true, before the kernel thread returns to
>> kernel/kthread.c:kthread().
>>
>> Changelog:
>> v2: fix the race issue in v1 patch.
>> v1: initial buggy fix.
>>
>> Signed-off-by: Coly Li <colyli@suse.de>
>> Cc: Michael Lyle <mlyle@lyle.org>
>> Cc: Hannes Reinecke <hare@suse.de>
>> Cc: Junhui Tang <tang.junhui@zte.com.cn>
>> ---
>>  drivers/md/bcache/alloc.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
>> index 6cc6c0f9c3a9..458e1d38577d 100644
>> --- a/drivers/md/bcache/alloc.c
>> +++ b/drivers/md/bcache/alloc.c
>> @@ -287,8 +287,10 @@ do {									\
>>  			break;						\
>>  									\
>>  		mutex_unlock(&(ca)->set->bucket_lock);			\
>> -		if (kthread_should_stop())				\
>> +		if (kthread_should_stop()) {				\
>> +			set_current_state(TASK_RUNNING);		\
>>  			return 0;					\
>> +		}							\
>>  									\
>>  		schedule();						\
>>  		mutex_lock(&(ca)->set->bucket_lock);			\
>>
> Might be an idea to merge it with the previous patch.
> 
> Other than that:
> 
> Reviewed-by: Hannes Reinecke <hare@suse.com>

Hi Hannes,

Sure, I will do that in the v4 patch set. Thanks for the review.

Coly Li

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 12/13] bcache: add io_disable to struct cached_dev
  2018-01-14 14:42 ` [PATCH v3 12/13] bcache: add io_disable to struct cached_dev Coly Li
@ 2018-01-16  9:32   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:32 UTC (permalink / raw)
  To: Coly Li, linux-bcache
  Cc: linux-block, Michael Lyle, Hannes Reinecke, Junhui Tang

On 01/14/2018 03:42 PM, Coly Li wrote:
> If a bcache device is configured in writeback mode, the current code does
> not handle write I/O errors on the backing device properly.
> 
> In writeback mode, a write request is written to the cache device and
> later flushed to the backing device. If I/O fails when writing from the
> cache device to the backing device, bcache just ignores the error and the
> upper layer code is NOT notified that the backing device is broken.
> 
> This patch tries to handle backing device failure the same way cache
> device failure is handled:
> - Add an error counter 'io_errors' and an error limit 'error_limit' to
>   struct cached_dev. Add an io_disable member to struct cached_dev to
>   disable I/O on the problematic backing device.
> - When an I/O error happens on the backing device, increase the io_errors
>   counter. If io_errors reaches error_limit, set dc->io_disable to true
>   and stop the bcache device.
> 
> The result is, if the backing device is broken or disconnected and I/O
> errors reach its error limit, the backing device will be disabled and the
> associated bcache device will be removed from the system.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Michael Lyle <mlyle@lyle.org>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Junhui Tang <tang.junhui@zte.com.cn>
> ---
>  drivers/md/bcache/bcache.h  |  7 +++++++
>  drivers/md/bcache/io.c      | 14 ++++++++++++++
>  drivers/md/bcache/request.c | 14 ++++++++++++--
>  drivers/md/bcache/super.c   | 22 ++++++++++++++++++++++
>  drivers/md/bcache/sysfs.c   | 15 ++++++++++++++-
>  5 files changed, 69 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index c41736960045..5a811959392d 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -360,6 +360,7 @@ struct cached_dev {
>  	unsigned		sequential_cutoff;
>  	unsigned		readahead;
>  
> +	unsigned		io_disable:1;
>  	unsigned		verify:1;
>  	unsigned		bypass_torture_test:1;
>  
> @@ -379,6 +380,10 @@ struct cached_dev {
>  	unsigned		writeback_rate_i_term_inverse;
>  	unsigned		writeback_rate_p_term_inverse;
>  	unsigned		writeback_rate_minimum;
> +
> +#define DEFAULT_CACHED_DEV_ERROR_LIMIT 64
> +	atomic_t		io_errors;
> +	unsigned		error_limit;
>  };
>  
>  enum alloc_reserve {
> @@ -882,6 +887,7 @@ static inline void closure_bio_submit(struct cache_set *c,
>  
>  /* Forward declarations */
>  
> +void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio);
>  void bch_count_io_errors(struct cache *, blk_status_t, int, const char *);
>  void bch_bbio_count_io_errors(struct cache_set *, struct bio *,
>  			      blk_status_t, const char *);
> @@ -909,6 +915,7 @@ int bch_bucket_alloc_set(struct cache_set *, unsigned,
>  			 struct bkey *, int, bool);
>  bool bch_alloc_sectors(struct cache_set *, struct bkey *, unsigned,
>  		       unsigned, unsigned, bool);
> +bool bch_cached_dev_error(struct cached_dev *dc);
>  
>  __printf(2, 3)
>  bool bch_cache_set_error(struct cache_set *, const char *, ...);
> diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
> index 8013ecbcdbda..7fac97ae036e 100644
> --- a/drivers/md/bcache/io.c
> +++ b/drivers/md/bcache/io.c
> @@ -50,6 +50,20 @@ void bch_submit_bbio(struct bio *bio, struct cache_set *c,
>  }
>  
>  /* IO errors */
> +void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio)
> +{
> +	char buf[BDEVNAME_SIZE];
> +	unsigned errors;
> +
> +	WARN_ONCE(!dc, "NULL pointer of struct cached_dev");
> +
> +	errors = atomic_add_return(1, &dc->io_errors);
> +	if (errors < dc->error_limit)
> +		pr_err("%s: IO error on backing device, unrecoverable",
> +			bio_devname(bio, buf));
> +	else
> +		bch_cached_dev_error(dc);
> +}
>  
>  void bch_count_io_errors(struct cache *ca,
>  			 blk_status_t error,
> diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
> index ad4cf71f7eab..386b388ce296 100644
> --- a/drivers/md/bcache/request.c
> +++ b/drivers/md/bcache/request.c
> @@ -637,6 +637,8 @@ static void backing_request_endio(struct bio *bio)
>  
>  	if (bio->bi_status) {
>  		struct search *s = container_of(cl, struct search, cl);
> +		struct cached_dev *dc = container_of(s->d,
> +						     struct cached_dev, disk);
>  		/*
>  		 * If a bio has REQ_PREFLUSH for writeback mode, it is
>  		 * speically assembled in cached_dev_write() for a non-zero
> @@ -657,6 +659,7 @@ static void backing_request_endio(struct bio *bio)
>  		}
>  		s->recoverable = false;
>  		/* should count I/O error for backing device here */
> +		bch_count_backing_io_errors(dc, bio);
>  	}
>  
>  	bio_put(bio);
> @@ -1067,8 +1070,14 @@ static void detatched_dev_end_io(struct bio *bio)
>  			    bio_data_dir(bio),
>  			    &ddip->d->disk->part0, ddip->start_time);
>  
> -	kfree(ddip);
> +	if (bio->bi_status) {
> +		struct cached_dev *dc = container_of(ddip->d,
> +						     struct cached_dev, disk);
> +		/* should count I/O error for backing device here */
> +		bch_count_backing_io_errors(dc, bio);
> +	}
>  
> +	kfree(ddip);
>  	bio->bi_end_io(bio);
>  }
>  
> @@ -1107,7 +1116,8 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
>  	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
>  	int rw = bio_data_dir(bio);
>  
> -	if (unlikely(d->c && test_bit(CACHE_SET_IO_DISABLE, &d->c->flags))) {
> +	if (unlikely((d->c && test_bit(CACHE_SET_IO_DISABLE, &d->c->flags)) ||
> +		     dc->io_disable)) {
>  		bio->bi_status = BLK_STS_IOERR;
>  		bio_endio(bio);
>  		return BLK_QC_T_NONE;
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 08a0b541a4da..14fce3623770 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1188,6 +1188,10 @@ static int cached_dev_init(struct cached_dev *dc, unsigned block_size)
>  		max(dc->disk.disk->queue->backing_dev_info->ra_pages,
>  		    q->backing_dev_info->ra_pages);
>  
> +	atomic_set(&dc->io_errors, 0);
> +	dc->io_disable = false;
> +	dc->error_limit = DEFAULT_CACHED_DEV_ERROR_LIMIT;
> +
>  	bch_cached_dev_request_init(dc);
>  	bch_cached_dev_writeback_init(dc);
>  	return 0;
> @@ -1339,6 +1343,24 @@ int bch_flash_dev_create(struct cache_set *c, uint64_t size)
>  	return flash_dev_run(c, u);
>  }
>  
> +bool bch_cached_dev_error(struct cached_dev *dc)
> +{
> +	char name[BDEVNAME_SIZE];
> +
> +	if (!dc || test_bit(BCACHE_DEV_CLOSING, &dc->disk.flags))
> +		return false;
> +
> +	dc->io_disable = true;
> +	/* make others know io_disable is true earlier */
> +	smp_mb();
> +
> +	pr_err("bcache: stop %s: too many IO errors on backing device %s\n",
> +		dc->disk.name, bdevname(dc->bdev, name));
> +
> +	bcache_device_stop(&dc->disk);
> +	return true;
> +}
> +
>  /* Cache set */
>  
>  __printf(2, 3)
> diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
> index afb051bcfca1..7288927f2a47 100644
> --- a/drivers/md/bcache/sysfs.c
> +++ b/drivers/md/bcache/sysfs.c
> @@ -131,7 +131,9 @@ SHOW(__bch_cached_dev)
>  	var_print(writeback_delay);
>  	var_print(writeback_percent);
>  	sysfs_hprint(writeback_rate,	dc->writeback_rate.rate << 9);
> -
> +	sysfs_hprint(io_errors,		atomic_read(&dc->io_errors));
> +	sysfs_printf(io_error_limit,	"%i", dc->error_limit);
> +	sysfs_printf(io_disable,	"%i", dc->io_disable);
>  	var_print(writeback_rate_update_seconds);
>  	var_print(writeback_rate_i_term_inverse);
>  	var_print(writeback_rate_p_term_inverse);
> @@ -223,6 +225,14 @@ STORE(__cached_dev)
>  	d_strtoul(writeback_rate_i_term_inverse);
>  	d_strtoul_nonzero(writeback_rate_p_term_inverse);
>  
> +	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
> +
> +	if (attr == &sysfs_io_disable) {
> +		int v = strtoul_or_return(buf);
> +
> +		dc->io_disable = v ? 1 : 0;
> +	}
> +
>  	d_strtoi_h(sequential_cutoff);
>  	d_strtoi_h(readahead);
>  
> @@ -330,6 +340,9 @@ static struct attribute *bch_cached_dev_files[] = {
>  	&sysfs_writeback_rate_i_term_inverse,
>  	&sysfs_writeback_rate_p_term_inverse,
>  	&sysfs_writeback_rate_debug,
> +	&sysfs_errors,
> +	&sysfs_io_error_limit,
> +	&sysfs_io_disable,
>  	&sysfs_dirty_data,
>  	&sysfs_stripe_size,
>  	&sysfs_partial_stripes_expensive,
> 
Personally, I'm not a big fan of using smp_mb() instead of proper locking,
but in this case it should be okay.

Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 13/13] bcache: stop bcache device when backing device is offline
  2018-01-14 14:42 ` [PATCH v3 13/13] bcache: stop bcache device when backing device is offline Coly Li
@ 2018-01-16  9:33   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2018-01-16  9:33 UTC (permalink / raw)
  To: Coly Li, linux-bcache
  Cc: linux-block, Michael Lyle, Hannes Reinecke, Junhui Tang

On 01/14/2018 03:42 PM, Coly Li wrote:
> Currently bcache does not handle backing device failure: if the backing
> device is offline and disconnected from the system, its bcache device can
> still be accessible. If the bcache device is in writeback mode, I/O
> requests can even succeed if they hit the cache device. That is to say,
> when and how bcache handles an offline backing device is undefined.
> 
> This patch tries to handle backing device offline in a rather simple way:
> - Add a cached_dev->status_update_thread kernel thread to update the
>   backing device status every second.
> - Add cached_dev->offline_seconds to record how many seconds the backing
>   device is observed to be offline. If the backing device is offline for
>   BACKING_DEV_OFFLINE_TIMEOUT (5) seconds, set dc->io_disable to 1 and
>   call bcache_device_stop() to stop the bcache device linked to the
>   offline backing device.
> 
> Now if a backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT
> seconds, its bcache device will be removed; user space applications
> writing to it will then get errors immediately and can handle the device
> failure in time.
> 
> This patch is quite simple and does not handle more complicated
> situations. Once the bcache device is stopped, users need to recover the
> backing device, then register and attach it manually.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Michael Lyle <mlyle@lyle.org>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Junhui Tang <tang.junhui@zte.com.cn>
> ---
>  drivers/md/bcache/bcache.h |  2 ++
>  drivers/md/bcache/super.c  | 55 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 57 insertions(+)
> 
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index 5a811959392d..9eedb35d01bc 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -338,6 +338,7 @@ struct cached_dev {
>  
>  	struct keybuf		writeback_keys;
>  
> +	struct task_struct	*status_update_thread;
>  	/*
>  	 * Order the write-half of writeback operations strongly in dispatch
>  	 * order.  (Maintain LBA order; don't allow reads completing out of
> @@ -384,6 +385,7 @@ struct cached_dev {
>  #define DEFAULT_CACHED_DEV_ERROR_LIMIT 64
>  	atomic_t		io_errors;
>  	unsigned		error_limit;
> +	unsigned		offline_seconds;
>  };
>  
>  enum alloc_reserve {
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 14fce3623770..85adf1e29d11 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -646,6 +646,11 @@ static int ioctl_dev(struct block_device *b, fmode_t mode,
>  		     unsigned int cmd, unsigned long arg)
>  {
>  	struct bcache_device *d = b->bd_disk->private_data;
> +	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
> +
> +	if (dc->io_disable)
> +		return -EIO;
> +
>  	return d->ioctl(d, mode, cmd, arg);
>  }
>  
> @@ -856,6 +861,45 @@ static void calc_cached_dev_sectors(struct cache_set *c)
>  	c->cached_dev_sectors = sectors;
>  }
>  
> +#define BACKING_DEV_OFFLINE_TIMEOUT 5
> +static int cached_dev_status_update(void *arg)
> +{
> +	struct cached_dev *dc = arg;
> +	struct request_queue *q;
> +	char buf[BDEVNAME_SIZE];
> +
> +	/*
> +	 * If this delayed worker is stopping outside, directly quit here.
> +	 * dc->io_disable might be set via sysfs interface, so check it
> +	 * here too.
> +	 */
> +	while (!kthread_should_stop() && !dc->io_disable) {
> +		q = bdev_get_queue(dc->bdev);
> +		if (blk_queue_dying(q))
> +			dc->offline_seconds++;
> +		else
> +			dc->offline_seconds = 0;
> +
> +		if (dc->offline_seconds >= BACKING_DEV_OFFLINE_TIMEOUT) {
> +			pr_err("%s: device offline for %d seconds",
> +				bdevname(dc->bdev, buf),
> +				BACKING_DEV_OFFLINE_TIMEOUT);
> +			pr_err("%s: disable I/O request due to backing "
> +				"device offline", dc->disk.name);
> +			dc->io_disable = true;
> +			/* let others know earlier that io_disable is true */
> +			smp_mb();
> +			bcache_device_stop(&dc->disk);
> +			break;
> +		}
> +
> +		schedule_timeout_interruptible(HZ);
> +	}
> +
> +	dc->status_update_thread = NULL;
> +	return 0;
> +}
> +
>  void bch_cached_dev_run(struct cached_dev *dc)
>  {
>  	struct bcache_device *d = &dc->disk;
> @@ -898,6 +942,15 @@ void bch_cached_dev_run(struct cached_dev *dc)
>  	if (sysfs_create_link(&d->kobj, &disk_to_dev(d->disk)->kobj, "dev") ||
>  	    sysfs_create_link(&disk_to_dev(d->disk)->kobj, &d->kobj, "bcache"))
>  		pr_debug("error creating sysfs link");
> +
> +	dc->status_update_thread = kthread_run(cached_dev_status_update,
> +					       dc,
> +					      "bcache_status_update");
> +	if (IS_ERR(dc->status_update_thread)) {
> +		pr_warn("bcache: failed to create bcache_status_update "
> +			"kthread, continue to run without monitoring backing "
> +			"device status");
> +	}
>  }
>  
>  /*
> @@ -1118,6 +1171,8 @@ static void cached_dev_free(struct closure *cl)
>  		kthread_stop(dc->writeback_thread);
>  	if (dc->writeback_write_wq)
>  		destroy_workqueue(dc->writeback_write_wq);
> +	if (!IS_ERR_OR_NULL(dc->status_update_thread))
> +		kthread_stop(dc->status_update_thread);
>  
>  	if (atomic_read(&dc->running))
>  		bd_unlink_disk_holder(dc->bdev, dc->disk.disk);
> 
Hmm. Not exactly thrilled with this solution; maybe worth discussing it
at LSF.
But I can't see how it could be done better currently.

Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
                   ` (12 preceding siblings ...)
  2018-01-14 14:42 ` [PATCH v3 13/13] bcache: stop bcache device when backing device is offline Coly Li
@ 2018-01-24 22:23 ` Nix
  2018-01-25  3:35   ` Re[2]: " Pavel Goran
  13 siblings, 1 reply; 32+ messages in thread
From: Nix @ 2018-01-24 22:23 UTC (permalink / raw)
  To: Coly Li; +Cc: linux-bcache, linux-block

On 14 Jan 2018, Coly Li said:

> Hi maintainers and folks,
>
> This patch set tries to improve bcache device failure handling, including
> cache device and backing device failures.
>
> The basic idea to handle failed cache device is,
> - Unregister cache set
> - Detach all backing devices which are attached to this cache set
> - Stop all the detached bcache devices
> - Stop all flash only volume on the cache set
> The above process is named 'cache set retire' by me. The result of cache
> set retire is that the cache set and bcache devices are all removed, and
> following I/O requests will fail immediately to notify the upper layer or
> user space code that the cache device has failed or is disconnected.

This feels wrong to me. If a cache device is writethrough, the cache is
a pure optimization: having such a device fail should not lead to I/O
failures of any sort, but should only flip the cache device to 'none' so
that writes to the backing store simply don't get cached any more.

Anything else leads to a reliability reduction, since in the end cache
devices *will* fail.

-- 
NULL && (void)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re[2]: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-24 22:23 ` [PATCH v3 00/13] bcache: device failure handling improvement Nix
@ 2018-01-25  3:35   ` Pavel Goran
  2018-01-25 18:57     ` Nix
  0 siblings, 1 reply; 32+ messages in thread
From: Pavel Goran @ 2018-01-25  3:35 UTC (permalink / raw)
  To: Nix; +Cc: Coly Li, linux-bcache, linux-block

Hello Nix,

Thursday, January 25, 2018, 1:23:19 AM, you wrote:

> On 14 Jan 2018, Coly Li said:

>> Hi maintainers and folks,
>>
>> This patch set tries to improve bcache device failure handling,
>> including cache device and backing device failures.
>>
>> The basic idea to handle failed cache device is,
>> - Unregister cache set
>> - Detach all backing devices which are attached to this cache set
>> - Stop all the detached bcache devices
>> - Stop all flash only volume on the cache set
>> The above process is named 'cache set retire' by me. The result of cache
>> set retire is that the cache set and bcache devices are all removed;
>> subsequent I/O requests will fail immediately to notify the upper layer
>> or user space code that the cache device has failed or is disconnected.

> This feels wrong to me. If a cache device is writethrough, the cache is
> a pure optimization: having such a device fail should not lead to I/O
> failures of any sort, but should only flip the cache device to 'none' so
> that writes to the backing store simply don't get cached any more.

> Anything else leads to a reliability reduction, since in the end cache
> devices *will* fail.

It's one of those choices: "if something can't work as intended, should it be
allowed to work at all?" I believe different use cases will have different
answers to this question. So ideally this should be configurable by some kind
of option stored in the superblock, much like the "errors behavior" option of
ext* filesystems.

Of course, this only applies to "writethrough" and "writearound" modes with
zero dirty data; "writeback" bcache devices (or devices switched from
writeback and still having some dirty data) should probably be disabled if the
cache device fails.

Pavel Goran
  

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-25  3:35   ` Re[2]: " Pavel Goran
@ 2018-01-25 18:57     ` Nix
  2018-01-26  4:15       ` Re[2]: " Pavel Goran
  0 siblings, 1 reply; 32+ messages in thread
From: Nix @ 2018-01-25 18:57 UTC (permalink / raw)
  To: Pavel Goran; +Cc: Coly Li, linux-bcache, linux-block

On 25 Jan 2018, Pavel Goran told this:

> Hello Nix,
>
> Thursday, January 25, 2018, 1:23:19 AM, you wrote:
>
>> This feels wrong to me. If a cache device is writethrough, the cache is
>> a pure optimization: having such a device fail should not lead to I/O
>> failures of any sort, but should only flip the cache device to 'none' so
>> that writes to the backing store simply don't get cached any more.
>
>> Anything else leads to a reliability reduction, since in the end cache
>> devices *will* fail.
>
> It's one of those choices: "if something can't work as intended, should it be
> allowed to work at all?"

Given that the only difference between a bcache with a writearound cache
and a bcache with no cache is performance... is it really ever going to
be beneficial to users to have a working system suddenly start throwing
write errors and probably become instantly nonfunctional because a
cache device has worn out, when it is perfectly possible to just
automatically dissociate the failed cache and slow down a bit?

I would suggest that no user would ever want the former behaviour, since
it amounts to behaviour that worsens a slight slowdown into a complete
cessation of service (in effect, an infinite "slowdown"). Is it better
to have a system working correctly but more slowly than before, or one
that without warning stops working entirely? Is this really even in
question?!

> Of course, this only applies to "writethrough" and "writearound" modes with
> zero dirty data; "writeback" bcache devices (or devices switched from
> writeback and still having some dirty data) should probably be disabled if the
> cache device fails.

Oh yes, definitely. That's simple correctness. The filesystem is no
longer valid if you make the cache device disappear in this case: at the
very least it needs a thorough fscking, i.e. sysadmin attention.

-- 
NULL && (void)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re[2]: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-25 18:57     ` Nix
@ 2018-01-26  4:15       ` Pavel Goran
  2018-01-26  4:56         ` Coly Li
  0 siblings, 1 reply; 32+ messages in thread
From: Pavel Goran @ 2018-01-26  4:15 UTC (permalink / raw)
  To: Nix; +Cc: Coly Li, linux-bcache, linux-block

Hello Nix,

Thursday, January 25, 2018, 9:57:25 PM, you wrote:

> On 25 Jan 2018, Pavel Goran told this:

>> Hello Nix,
>>
>> Thursday, January 25, 2018, 1:23:19 AM, you wrote:
>>
>>> This feels wrong to me. If a cache device is writethrough, the cache is
>>> a pure optimization: having such a device fail should not lead to I/O
>>> failures of any sort, but should only flip the cache device to 'none' so
>>> that writes to the backing store simply don't get cached any more.
>>
>>> Anything else leads to a reliability reduction, since in the end cache
>>> devices *will* fail.
>>
>> It's one of those choices: "if something can't work as intended, should it be
>> allowed to work at all?"

> Given that the only difference between a bcache with a writearound cache
> and a bcache with no cache is performance... is it really ever going to
> be beneficial to users to have a working system suddenly start throwing
> write errors and probably become instantly nonfunctional because a
> cache device has worn out, when it is perfectly possible to just
> automatically dissociate the failed cache and slow down a bit?

> I would suggest that no user would ever want the former behaviour, since
> it amounts to behaviour that worsens a slight slowdown into a complete
> cessation of service (in effect, an infinite "slowdown"). Is it better
> to have a system working correctly but more slowly than before, or one
> that without warning stops working entirely? Is this really even in
> question?!

Well, there is the "Fail-Fast" principle [1] and all that. For a home user
(which is my case, for example), this approach doesn't make much sense.
However, large-scale users, like cloud providers, can have a different point
of view.

It's just a speculation on my part, but consider a bunch of bcache devices
that serve as parts of a RAID6 array. It may be desirable to deactivate the
bcache device that lost its caching capabilities, so that (1) the array would
not slow down, (2) the array would report its degraded state to
administrators.

Anyway, probably the author of this patch could explain it better. Maybe I
completely misunderstand the intention.

Pavel Goran

[1] https://en.wikipedia.org/wiki/Fail-fast

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-26  4:15       ` Re[2]: " Pavel Goran
@ 2018-01-26  4:56         ` Coly Li
  2018-01-26  5:51           ` Michael Lyle
  2018-02-16 12:11           ` Nix
  0 siblings, 2 replies; 32+ messages in thread
From: Coly Li @ 2018-01-26  4:56 UTC (permalink / raw)
  To: Pavel Goran, Nix; +Cc: linux-bcache, linux-block

On 26/01/2018 12:15 PM, Pavel Goran wrote:
> Hello Nix,
> 
> Thursday, January 25, 2018, 9:57:25 PM, you wrote:
> 
>> On 25 Jan 2018, Pavel Goran told this:
> 
>>> Hello Nix,
>>>
>>> Thursday, January 25, 2018, 1:23:19 AM, you wrote:
>>>
>>>> This feels wrong to me. If a cache device is writethrough, the cache is
>>>> a pure optimization: having such a device fail should not lead to I/O
>>>> failures of any sort, but should only flip the cache device to 'none' so
>>>> that writes to the backing store simply don't get cached any more.
>>>
>>>> Anything else leads to a reliability reduction, since in the end cache
>>>> devices *will* fail.
>>>
>>> It's one of those choices: "if something can't work as intended, should it be
>>> allowed to work at all?"
> 
>> Given that the only difference between a bcache with a writearound cache
>> and a bcache with no cache is performance... is it really ever going to
>> be beneficial to users to have a working system suddenly start throwing
>> write errors and probably become instantly nonfunctional because a
>> cache device has worn out, when it is perfectly possible to just
>> automatically dissociate the failed cache and slow down a bit?
> 
>> I would suggest that no user would ever want the former behaviour, since
>> it amounts to behaviour that worsens a slight slowdown into a complete
>> cessation of service (in effect, an infinite "slowdown"). Is it better
>> to have a system working correctly but more slowly than before, or one
>> that without warning stops working entirely? Is this really even in
>> question?!
> 
> Well, there is the "Fail-Fast" principle [1] and all that. For a home user
> (which is my case, for example), this approach doesn't make much sense.
> However, large-scale users, like cloud providers, can have a different point
> of view.
> 
> It's just a speculation on my part, but consider a bunch of bcache devices
> that serve as parts of a RAID6 array. It may be desirable to deactivate the
> bcache device that lost its caching capabilities, so that (1) the array would
> not slow down, (2) the array would report its degraded state to
> administrators.
> 
> Anyway, probably the author of this patch could explain it better. Maybe I
> completely misunderstand the intention.

Hi Pavel and Nix,

These days I am working on backporting, so I am responding to emails a
little bit slowly.

Most of the intention comes from our customers and partners in database,
cloud service, Ceph storage, and so on. So yes, it is mostly focused on
enterprise use cases.

I take Nix's suggestion seriously, and I will try to see whether it is
possible to add a default-disabled option. When it is enabled, cache set
retiring won't stop bcache devices if the cache set is clean.

In order to make the failure handling simple and fast, I will not check
each bcache device individually for dirty data on the cache set. Once
the cache set is dirty, all attached bcache devices will be stopped.

It seems that when this option is enabled by users, it should work for
writeback and writethrough, with no side effect on writearound and none.

Nix, what do you think of the above idea?

Thanks for all your constructive discussion :-)

Coly Li

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-26  4:56         ` Coly Li
@ 2018-01-26  5:51           ` Michael Lyle
  2018-01-26  6:23             ` Coly Li
  2018-02-16 12:11           ` Nix
  1 sibling, 1 reply; 32+ messages in thread
From: Michael Lyle @ 2018-01-26  5:51 UTC (permalink / raw)
  To: Coly Li; +Cc: Pavel Goran, Nix, linux-bcache, linux-block

Hey Coly,

On Thu, Jan 25, 2018 at 8:56 PM, Coly Li <colyli@suse.de> wrote:
> In order to make the failure handling simple and fast, I will not check
> each bcache device individually for dirty data on the cache set. Once
> the cache set is dirty, all attached bcache devices will be stopped.

Surely it shouldn't stop any writethrough/writearound ones though, right?

> It seems that when this option is enabled by users, it should work for
> writeback and writethrough, with no side effect on writearound and none.

Writethrough should be the same as writearound, shouldn't it?

Thanks,

Mike

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 05/13] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set
  2018-01-16  9:11   ` Hannes Reinecke
@ 2018-01-26  6:21     ` Coly Li
  0 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-26  6:21 UTC (permalink / raw)
  To: Hannes Reinecke, linux-bcache
  Cc: linux-block, Michael Lyle, Hannes Reinecke, Huijun Tang

On 16/01/2018 5:11 PM, Hannes Reinecke wrote:
> On 01/14/2018 03:42 PM, Coly Li wrote:
>> In patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
>> cached_dev_get() is called when creating dc->writeback_thread, and
>> cached_dev_put() is called when exiting dc->writeback_thread. This
>> modification works well unless people detach the bcache device manually by
>>     'echo 1 > /sys/block/bcache<N>/bcache/detach'
>> Because this sysfs interface only calls bch_cached_dev_detach() which wakes
>> up dc->writeback_thread but does not stop it. The reason is, before patch
>> "bcache: fix cached_dev->count usage for bch_cache_set_error()", inside
>> bch_writeback_thread(), if cache is not dirty after writeback,
>> cached_dev_put() will be called here. And in cached_dev_make_request() when
>> a new write request makes cache from clean to dirty, cached_dev_get() will
>> be called there. Since we don't operate dc->count in these locations,
>> refcount dc->count cannot be dropped after cache becomes clean, and
>> cached_dev_detach_finish() won't be called to detach bcache device.
>>
>> This patch fixes the issue by checking whether BCACHE_DEV_DETACHING is
>> set inside bch_writeback_thread(). If this bit is set and cache is clean
>> (no existing writeback_keys), break the while-loop, call cached_dev_put()
>> and quit the writeback thread.
>>
>> Please note if cache is still dirty, even BCACHE_DEV_DETACHING is set the
>> writeback thread should continue to perform writeback, this is the original
>> design of manually detach.
>>
>> I composed a separate patch because the patch "bcache: fix cached_dev->count
>> usage for bch_cache_set_error()" already has a "Reviewed-by:" from Hannes
>> Reinecke. Also this fix is not trivial, which justifies a separate patch.
>>
>> Signed-off-by: Coly Li <colyli@suse.de>
>> Cc: Michael Lyle <mlyle@lyle.org>
>> Cc: Hannes Reinecke <hare@suse.com>
>> Cc: Huijun Tang <tang.junhui@zte.com.cn>
>> ---
>>  drivers/md/bcache/writeback.c | 20 +++++++++++++++++---
>>  1 file changed, 17 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
>> index b280c134dd4d..4dbeaaa575bf 100644
>> --- a/drivers/md/bcache/writeback.c
>> +++ b/drivers/md/bcache/writeback.c
>> @@ -565,9 +565,15 @@ static int bch_writeback_thread(void *arg)
>>  	while (!kthread_should_stop()) {
>>  		down_write(&dc->writeback_lock);
>>  		set_current_state(TASK_INTERRUPTIBLE);
>> -		if (!atomic_read(&dc->has_dirty) ||
>> -		    (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
>> -		     !dc->writeback_running)) {
>> +		/*
>> +		 * If the bcache device is detaching, skip here and continue
>> +		 * to perform writeback. Otherwise, if no dirty data on cache,
>> +		 * or there is dirty data on cache but writeback is disabled,
>> +		 * the writeback thread should sleep here and wait for others
>> +		 * to wake it up.
>> +		 */
>> +		if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
>> +		    (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
>>  			up_write(&dc->writeback_lock);
>>  
>>  			if (kthread_should_stop()) {
>> @@ -587,6 +593,14 @@ static int bch_writeback_thread(void *arg)
>>  			atomic_set(&dc->has_dirty, 0);
>>  			SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN);
>>  			bch_write_bdev_super(dc, NULL);
>> +			/*
>> +			 * If bcache device is detaching via sysfs interface,
>> +			 * writeback thread should stop after there is no dirty
>> +			 * data on cache. BCACHE_DEV_DETACHING flag is set in
>> +			 * bch_cached_dev_detach().
>> +			 */
>> +			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
>> +				break;
>>  		}
>>  
>>  		up_write(&dc->writeback_lock);
>>
> Checking several atomic flags in one statement renders the atomic pretty
> much pointless; you need to protect the 'if' clause with some lock or
> just check _one_ atomic statement.

Hi Hannes,

This is a special condition; let me explain why I feel it is safe here.
1, test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) only changes state
once, when the bcache device is about to stop. It can be regarded as a
constant value.
2, dc->writeback_running defaults to true; there are 2 conditions,
2.1 if dc->writeback_running is set to false but the condition check is
not updated: the writeback thread will run one more loop and then go
to sleep.
2.2 if dc->writeback_running was previously set to false and is now set
to true again, and the condition check is not updated: this value can
only be set via sysfs, and bch_writeback_queue() will be called. Then
the kthread will call schedule() with TASK_RUNNING, and will be woken
up soon by the task scheduler.

Indeed, it would not hurt even if dc->has_dirty were not atomic_t. The
only reason I can see for it being atomic_t is atomic_xchg() in
bch_writeback_add(), and I feel even that is not mandatory...

Thanks.

Coly Li

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-26  5:51           ` Michael Lyle
@ 2018-01-26  6:23             ` Coly Li
  0 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-01-26  6:23 UTC (permalink / raw)
  To: Michael Lyle; +Cc: Pavel Goran, Nix, linux-bcache, linux-block

On 26/01/2018 1:51 PM, Michael Lyle wrote:
> Hey Coly,
> 
> On Thu, Jan 25, 2018 at 8:56 PM, Coly Li <colyli@suse.de> wrote:
>> In order to make the failure handling simple and fast, I will not
>> distinct each bcache device whether it has dirty data on cache set. Once
>> the cache set is dirty, all attached bcache devices will be stopped.
> 
> Surely it shouldn't stop any writethrough/writearound ones though, right?
> 
>> It seems when this option is enabled by users, it should work for
>> writeback and writethrough, and no side effect to writearound and non.
> 
> Writethrough should be the same as writearound, shouldn't it?

Yes, there is no difference between them with such an option.

Coly Li

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v3 00/13] bcache: device failure handling improvement
  2018-01-26  4:56         ` Coly Li
  2018-01-26  5:51           ` Michael Lyle
@ 2018-02-16 12:11           ` Nix
  1 sibling, 0 replies; 32+ messages in thread
From: Nix @ 2018-02-16 12:11 UTC (permalink / raw)
  To: Coly Li; +Cc: Pavel Goran, linux-bcache, linux-block

On 26 Jan 2018, Coly Li uttered the following:
> Nix, what do you think of the above idea?

This is weeks too late because you already sent the patch through, but I
think it's an excellent idea :)

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2018-02-16 12:11 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-14 14:42 [PATCH v3 00/13] bcache: device failure handling improvement Coly Li
2018-01-14 14:42 ` [PATCH v3 01/13] bcache: set writeback_rate_update_seconds in range [1, 60] seconds Coly Li
2018-01-16  9:03   ` Hannes Reinecke
2018-01-14 14:42 ` [PATCH v3 02/13] bcache: properly set task state in bch_writeback_thread() Coly Li
2018-01-16  9:02   ` Hannes Reinecke
2018-01-14 14:42 ` [PATCH v3 03/13] bcache: set task properly in allocator_wait() Coly Li
2018-01-16  9:05   ` Hannes Reinecke
2018-01-16  9:29     ` Coly Li
2018-01-14 14:42 ` [PATCH v3 04/13] bcache: fix cached_dev->count usage for bch_cache_set_error() Coly Li
2018-01-14 14:42 ` [PATCH v3 05/13] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set Coly Li
2018-01-16  9:11   ` Hannes Reinecke
2018-01-26  6:21     ` Coly Li
2018-01-14 14:42 ` [PATCH v3 06/13] bcache: stop dc->writeback_rate_update properly Coly Li
2018-01-14 14:42 ` [PATCH v3 07/13] bcache: set error_limit correctly Coly Li
2018-01-14 14:42 ` [PATCH v3 08/13] bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags Coly Li
2018-01-14 14:42 ` [PATCH v3 09/13] bcache: stop all attached bcache devices for a retired cache set Coly Li
2018-01-14 14:42 ` [PATCH v3 10/13] bcache: fix inaccurate io state for detached bcache devices Coly Li
2018-01-16  9:27   ` Hannes Reinecke
2018-01-14 14:42 ` [PATCH v3 11/13] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O Coly Li
2018-01-16  9:28   ` Hannes Reinecke
2018-01-14 14:42 ` [PATCH v3 12/13] bcache: add io_disable to struct cached_dev Coly Li
2018-01-16  9:32   ` Hannes Reinecke
2018-01-14 14:42 ` [PATCH v3 13/13] bcache: stop bcache device when backing device is offline Coly Li
2018-01-16  9:33   ` Hannes Reinecke
2018-01-24 22:23 ` [PATCH v3 00/13] bcache: device failure handling improvement Nix
2018-01-25  3:35   ` Re[2]: " Pavel Goran
2018-01-25 18:57     ` Nix
2018-01-26  4:15       ` Re[2]: " Pavel Goran
2018-01-26  4:56         ` Coly Li
2018-01-26  5:51           ` Michael Lyle
2018-01-26  6:23             ` Coly Li
2018-02-16 12:11           ` Nix

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.