stable.vger.kernel.org archive mirror
* [PATCH 01/19] bcache: never writeback a discard operation
       [not found] <20190209045311.15677-1-colyli@suse.de>
@ 2019-02-09  4:52 ` Coly Li
  2019-02-09  4:52 ` [PATCH 06/19] bcache: treat stale && dirty keys as bad keys Coly Li
  2019-02-09  4:53 ` [PATCH 19/19] bcache: use (REQ_META|REQ_PRIO) to indicate bio for metadata Coly Li
  2 siblings, 0 replies; 5+ messages in thread
From: Coly Li @ 2019-02-09  4:52 UTC (permalink / raw)
  To: axboe
  Cc: linux-bcache, linux-block, Daniel Axtens, Kent Overstreet,
	stable, Coly Li

From: Daniel Axtens <dja@axtens.net>

Some users see panics like the following when performing fstrim on a
bcached volume:

[  529.803060] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
[  530.183928] #PF error: [normal kernel read fault]
[  530.412392] PGD 8000001f42163067 P4D 8000001f42163067 PUD 1f42168067 PMD 0
[  530.750887] Oops: 0000 [#1] SMP PTI
[  530.920869] CPU: 10 PID: 4167 Comm: fstrim Kdump: loaded Not tainted 5.0.0-rc1+ #3
[  531.290204] Hardware name: HP ProLiant DL360 Gen9/ProLiant DL360 Gen9, BIOS P89 12/27/2015
[  531.693137] RIP: 0010:blk_queue_split+0x148/0x620
[  531.922205] Code: 60 38 89 55 a0 45 31 db 45 31 f6 45 31 c9 31 ff 89 4d 98 85 db 0f 84 7f 04 00 00 44 8b 6d 98 4c 89 ee 48 c1 e6 04 49 03 70 78 <8b> 46 08 44 8b 56 0c 48
8b 16 44 29 e0 39 d8 48 89 55 a8 0f 47 c3
[  532.838634] RSP: 0018:ffffb9b708df39b0 EFLAGS: 00010246
[  533.093571] RAX: 00000000ffffffff RBX: 0000000000046000 RCX: 0000000000000000
[  533.441865] RDX: 0000000000000200 RSI: 0000000000000000 RDI: 0000000000000000
[  533.789922] RBP: ffffb9b708df3a48 R08: ffff940d3b3fdd20 R09: 0000000000000000
[  534.137512] R10: ffffb9b708df3958 R11: 0000000000000000 R12: 0000000000000000
[  534.485329] R13: 0000000000000000 R14: 0000000000000000 R15: ffff940d39212020
[  534.833319] FS:  00007efec26e3840(0000) GS:ffff940d1f480000(0000) knlGS:0000000000000000
[  535.224098] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  535.504318] CR2: 0000000000000008 CR3: 0000001f4e256004 CR4: 00000000001606e0
[  535.851759] Call Trace:
[  535.970308]  ? mempool_alloc_slab+0x15/0x20
[  536.174152]  ? bch_data_insert+0x42/0xd0 [bcache]
[  536.403399]  blk_mq_make_request+0x97/0x4f0
[  536.607036]  generic_make_request+0x1e2/0x410
[  536.819164]  submit_bio+0x73/0x150
[  536.980168]  ? submit_bio+0x73/0x150
[  537.149731]  ? bio_associate_blkg_from_css+0x3b/0x60
[  537.391595]  ? _cond_resched+0x1a/0x50
[  537.573774]  submit_bio_wait+0x59/0x90
[  537.756105]  blkdev_issue_discard+0x80/0xd0
[  537.959590]  ext4_trim_fs+0x4a9/0x9e0
[  538.137636]  ? ext4_trim_fs+0x4a9/0x9e0
[  538.324087]  ext4_ioctl+0xea4/0x1530
[  538.497712]  ? _copy_to_user+0x2a/0x40
[  538.679632]  do_vfs_ioctl+0xa6/0x600
[  538.853127]  ? __do_sys_newfstat+0x44/0x70
[  539.051951]  ksys_ioctl+0x6d/0x80
[  539.212785]  __x64_sys_ioctl+0x1a/0x20
[  539.394918]  do_syscall_64+0x5a/0x110
[  539.568674]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

We have observed it where both:
1) LVM/devmapper is involved (bcache backing device is LVM volume) and
2) writeback cache is involved (bcache cache_mode is writeback)

On one machine, we can reliably reproduce it with:

 # echo writeback > /sys/block/bcache0/bcache/cache_mode
   (not sure whether above line is required)
 # mount /dev/bcache0 /test
 # for i in {0..10}; do
       file="$(mktemp /test/zero.XXX)"
       dd if=/dev/zero of="$file" bs=1M count=256
       sync
       rm "$file"
   done
 # fstrim -v /test

Observing this with tracepoints on, we see the following writes:

fstrim-18019 [022] .... 91107.302026: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0  DS 4260112 + 196352 hit 0 bypass 1
fstrim-18019 [022] .... 91107.302050: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0  DS 4456464 + 262144 hit 0 bypass 1
fstrim-18019 [022] .... 91107.302075: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0  DS 4718608 + 81920 hit 0 bypass 1
fstrim-18019 [022] .... 91107.302094: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0  DS 5324816 + 180224 hit 0 bypass 1
fstrim-18019 [022] .... 91107.302121: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0  DS 5505040 + 262144 hit 0 bypass 1
fstrim-18019 [022] .... 91107.302145: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0  DS 5767184 + 81920 hit 0 bypass 1
fstrim-18019 [022] .... 91107.308777: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0  DS 6373392 + 180224 hit 1 bypass 0
<crash>

Note the final one has different hit/bypass flags.

This is because in should_writeback(), we were hitting a case where
the partial stripe condition was returning true and so
should_writeback() was returning true early.

If that hadn't been the case, it would have hit the would_skip test, and
as would_skip == s->iop.bypass == true, should_writeback() would have
returned false.

Looking at the git history from 'commit 72c270612bd3 ("bcache: Write out
full stripes")', it looks like the idea was to optimise for raid5/6:

       * If a stripe is already dirty, force writes to that stripe to
	 writeback mode - to help build up full stripes of dirty data

To fix this issue, make sure that should_writeback() on a discard op
never returns true.
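
The decision order the fix enforces can be sketched as follows. This is a minimal, self-contained model, not the real bcache code; the struct and field names are made up for illustration, and only the ordering of the checks mirrors should_writeback():

```c
/* Simplified model of should_writeback()'s decision order after the
 * fix: the discard check comes before the partial-stripe check, so a
 * discard that touches a partially dirty stripe can no longer be sent
 * down the writeback path.
 */
#include <stdbool.h>

enum op_model { OP_WRITE, OP_DISCARD };

struct bio_model {
	enum op_model op;
	bool would_skip;           /* stands in for s->iop.bypass */
	bool partial_stripe_dirty; /* stands in for bcache_dev_stripe_dirty() */
};

static bool should_writeback_model(const struct bio_model *bio)
{
	if (bio->op == OP_DISCARD)      /* the fix: never writeback a discard */
		return false;
	if (bio->partial_stripe_dirty)  /* full-stripe optimisation */
		return true;
	if (bio->would_skip)            /* bypassed IO is not written back */
		return false;
	return true;
}
```

Without the first check, a bypassed discard hitting a partially dirty stripe would take the `return true` branch, which is the crashing case in the trace above.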

More details of debugging:
https://www.spinics.net/lists/linux-bcache/msg06996.html

Previous reports:
 - https://bugzilla.kernel.org/show_bug.cgi?id=201051
 - https://bugzilla.kernel.org/show_bug.cgi?id=196103
 - https://www.spinics.net/lists/linux-bcache/msg06885.html

(Coly Li: minor modification to follow maximum 75 chars per line rule)

Cc: Kent Overstreet <koverstreet@google.com>
Cc: stable@vger.kernel.org
Fixes: 72c270612bd3 ("bcache: Write out full stripes")
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/writeback.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index 6a743d3bb338..4e4c6810dc3c 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -71,6 +71,9 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
 	    in_use > bch_cutoff_writeback_sync)
 		return false;
 
+	if (bio_op(bio) == REQ_OP_DISCARD)
+		return false;
+
 	if (dc->partial_stripes_expensive &&
 	    bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector,
 				    bio_sectors(bio)))
-- 
2.16.4



* [PATCH 06/19] bcache: treat stale && dirty keys as bad keys
       [not found] <20190209045311.15677-1-colyli@suse.de>
  2019-02-09  4:52 ` [PATCH 01/19] bcache: never writeback a discard operation Coly Li
@ 2019-02-09  4:52 ` Coly Li
       [not found]   ` <20190212132825.E476F217D9@mail.kernel.org>
  2019-02-09  4:53 ` [PATCH 19/19] bcache: use (REQ_META|REQ_PRIO) to indicate bio for metadata Coly Li
  2 siblings, 1 reply; 5+ messages in thread
From: Coly Li @ 2019-02-09  4:52 UTC (permalink / raw)
  To: axboe; +Cc: linux-bcache, linux-block, Tang Junhui, stable, Coly Li

From: Tang Junhui <tang.junhui.linux@gmail.com>

Stale && dirty keys can be produced in the following way:
After writeback in write_dirty_finish(), dirty key k1 will be
replaced by clean key k2:
==>ret = bch_btree_insert(dc->disk.c, &keys, NULL, &w->key);
==>btree_insert_fn(struct btree_op *b_op, struct btree *b)
==>static int bch_btree_insert_node(struct btree *b,
       struct btree_op *op,
       struct keylist *insert_keys,
       atomic_t *journal_ref,
Then two steps happen:
A) k1 is updated to k2 in the btree node in memory;
   bch_btree_insert_keys(b, op, insert_keys, replace_key)
B) The bset (containing k2) is written to the cache device by a 30s
   delayed work, bch_btree_leaf_dirty(b, journal_ref).
But before the 30s delayed work writes the bset to the cache device,
the following happens:
A) GC runs and reclaims the bucket k2 points to;
B) The allocator invalidates the bucket k2 points to, increases the
   gen of the bucket, and places it into the free_inc fifo;
C) The 30s delayed work has still not finished, so on disk the key
   is still k1; it is dirty and stale (its gen is smaller than the
   gen of the bucket). Then the machine suddenly powers off;
D) When the machine powers on again, after the btree is
   reconstructed, the stale dirty key appears.

In bch_extent_bad(), when expensive_debug_checks is off, a dirty key
is treated as good even if it is stale, which causes the problems
below:
A) In read_dirty() it crashes the machine:
   BUG_ON(ptr_stale(dc->disk.c, &w->key, 0));
B) Worse, when a read hits a stale dirty key, it reads old,
   incorrect data.

This patch tolerates the existence of these stale && dirty keys and
treats them as bad keys in bch_extent_bad().
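
The staleness rule can be sketched with a simplified model. This is not the real bcache code: the struct is invented for illustration, and real bcache compares generation numbers with wraparound handling rather than a plain inequality:

```c
/* Simplified model: a pointer is stale when the bucket's current gen
 * no longer matches the gen recorded in the key's pointer (the bucket
 * was invalidated and reused since the key was written). After this
 * patch, a stale pointer makes the key bad even if the key is dirty.
 */
#include <stdbool.h>
#include <stdint.h>

struct key_model {
	uint8_t ptr_gen; /* gen recorded in the key's pointer */
	bool dirty;      /* stands in for KEY_DIRTY() */
};

static bool ptr_stale_model(uint8_t bucket_gen, const struct key_model *k)
{
	/* simplified; the real code handles gen wraparound */
	return bucket_gen != k->ptr_gen;
}

static bool extent_bad_model(uint8_t bucket_gen, const struct key_model *k)
{
	/* the old behaviour returned false early for any dirty key;
	 * now staleness decides, dirty or not */
	return ptr_stale_model(bucket_gen, k);
}
```

In the power-off scenario above, k1's pointer gen lags the bucket gen, so the reconstructed dirty key is now reported bad instead of being trusted.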

(Coly Li: fix indent which was modified by sender's email client)

Signed-off-by: Tang Junhui <tang.junhui.linux@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Coly Li <colyli@suse.de>
---
 drivers/md/bcache/extents.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
index 956004366699..886710043025 100644
--- a/drivers/md/bcache/extents.c
+++ b/drivers/md/bcache/extents.c
@@ -538,6 +538,7 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
 {
 	struct btree *b = container_of(bk, struct btree, keys);
 	unsigned int i, stale;
+	char buf[80];
 
 	if (!KEY_PTRS(k) ||
 	    bch_extent_invalid(bk, k))
@@ -547,19 +548,19 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
 		if (!ptr_available(b->c, k, i))
 			return true;
 
-	if (!expensive_debug_checks(b->c) && KEY_DIRTY(k))
-		return false;
-
 	for (i = 0; i < KEY_PTRS(k); i++) {
 		stale = ptr_stale(b->c, k, i);
 
+		if (stale && KEY_DIRTY(k)) {
+			bch_extent_to_text(buf, sizeof(buf), k);
+			pr_info("stale dirty pointer, stale %u, key: %s",
+				stale, buf);
+		}
+
 		btree_bug_on(stale > BUCKET_GC_GEN_MAX, b,
 			     "key too stale: %i, need_gc %u",
 			     stale, b->c->need_gc);
 
-		btree_bug_on(stale && KEY_DIRTY(k) && KEY_SIZE(k),
-			     b, "stale dirty pointer");
-
 		if (stale)
 			return true;
 
-- 
2.16.4



* [PATCH 19/19] bcache: use (REQ_META|REQ_PRIO) to indicate bio for metadata
       [not found] <20190209045311.15677-1-colyli@suse.de>
  2019-02-09  4:52 ` [PATCH 01/19] bcache: never writeback a discard operation Coly Li
  2019-02-09  4:52 ` [PATCH 06/19] bcache: treat stale && dirty keys as bad keys Coly Li
@ 2019-02-09  4:53 ` Coly Li
       [not found]   ` <20190212132824.1D1502084E@mail.kernel.org>
  2 siblings, 1 reply; 5+ messages in thread
From: Coly Li @ 2019-02-09  4:53 UTC (permalink / raw)
  To: axboe
  Cc: linux-bcache, linux-block, Coly Li, stable, Dave Chinner,
	Christoph Hellwig

In 'commit 752f66a75aba ("bcache: use REQ_PRIO to indicate bio for
metadata")' REQ_META was replaced by REQ_PRIO to indicate a metadata
bio. This assumption is not always correct; e.g. XFS marks metadata
bios with REQ_META rather than REQ_PRIO. This is why Nix noticed that
bcache stopped caching metadata for XFS after the above commit.

Thanks to Dave Chinner for explaining the difference between REQ_META
and REQ_PRIO from a file system developer's point of view. Here I
quote part of his explanation from the mailing list:
   REQ_META is used for metadata. REQ_PRIO is used to communicate to
   the lower layers that the submitter considers this IO to be more
   important that non REQ_PRIO IO and so dispatch should be expedited.

   IOWs, if the filesystem considers metadata IO to be more important
   that user data IO, then it will use REQ_PRIO | REQ_META rather than
   just REQ_META.

It therefore seems that bios with either REQ_META or REQ_PRIO should
be cached for performance, because the upper layer (e.g. the file
system) probably demands low I/O latency for them.

So in this patch, both REQ_META and REQ_PRIO are checked when
deciding whether to bypass the cache. Then both metadata and
high priority I/O requests are handled properly.
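
The bypass condition being changed can be sketched in isolation. The flag values below are placeholders for illustration, not the real kernel REQ_* bit values:

```c
/* Simplified model of the read-ahead bypass check in
 * check_should_bypass(): a read-ahead/background bio is bypassed
 * unless it carries either metadata or priority hints. Before the
 * patch only REQ_PRIO was consulted, so XFS's REQ_META-only
 * metadata read-ahead was wrongly bypassed.
 */
#include <stdbool.h>

#define REQ_RAHEAD     (1u << 0) /* placeholder bit values */
#define REQ_BACKGROUND (1u << 1)
#define REQ_META       (1u << 2)
#define REQ_PRIO       (1u << 3)

static bool bypass_readahead(unsigned int bi_opf)
{
	return (bi_opf & (REQ_RAHEAD | REQ_BACKGROUND)) &&
	       !(bi_opf & (REQ_META | REQ_PRIO));
}
```

With the combined mask, a bio flagged REQ_RAHEAD|REQ_META (as XFS issues) is no longer bypassed and can be cached.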

Reported-by: Nix <nix@esperi.org.uk>
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Andre Noll <maan@tuebingen.mpg.de>
Tested-by: Nix <nix@esperi.org.uk>
Cc: stable@vger.kernel.org
Cc: Dave Chinner <david@fromorbit.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/request.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 15070412a32e..f101bfe8657a 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -392,10 +392,11 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
 
 	/*
 	 * Flag for bypass if the IO is for read-ahead or background,
-	 * unless the read-ahead request is for metadata (eg, for gfs2).
+	 * unless the read-ahead request is for metadata
+	 * (eg, for gfs2 or xfs).
 	 */
 	if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) &&
-	    !(bio->bi_opf & REQ_PRIO))
+	    !(bio->bi_opf & (REQ_META|REQ_PRIO)))
 		goto skip;
 
 	if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) ||
@@ -877,7 +878,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 	}
 
 	if (!(bio->bi_opf & REQ_RAHEAD) &&
-	    !(bio->bi_opf & REQ_PRIO) &&
+	    !(bio->bi_opf & (REQ_META|REQ_PRIO)) &&
 	    s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA)
 		reada = min_t(sector_t, dc->readahead >> 9,
 			      get_capacity(bio->bi_disk) - bio_end_sector(bio));
-- 
2.16.4



* Re: [PATCH 06/19] bcache: treat stale && dirty keys as bad keys
       [not found]   ` <20190212132825.E476F217D9@mail.kernel.org>
@ 2019-02-12 16:42     ` Coly Li
  0 siblings, 0 replies; 5+ messages in thread
From: Coly Li @ 2019-02-12 16:42 UTC (permalink / raw)
  To: Sasha Levin, Tang Junhui; +Cc: axboe, linux-bcache, linux-block, stable

On 2019/2/12 9:28 PM, Sasha Levin wrote:
> Hi,
> 
> [This is an automated email]
> 
> This commit has been processed because it contains a -stable tag.
> The stable tag indicates that it's relevant for the following trees: all
> 
> The bot has tested the following trees: v4.20.7, v4.19.20, v4.14.98, v4.9.155, v4.4.173, v3.18.134.
> 
> v4.20.7: Build OK!
> v4.19.20: Failed to apply! Possible dependencies:
>     149d0efada77 ("bcache: replace hard coded number with BUCKET_GC_GEN_MAX")
> 
> v4.14.98: Failed to apply! Possible dependencies:
>     1d316e658374 ("bcache: implement PI controller for writeback rate")
>     25d8be77e192 ("block: move bio_alloc_pages() to bcache")
>     27a40ab9269e ("bcache: add backing_request_endio() for bi_end_io")
>     2831231d4c3f ("bcache: reduce cache_set devices iteration by devices_max_used")
>     3b304d24a718 ("bcache: convert cached_dev.count from atomic_t to refcount_t")
>     3fd47bfe55b0 ("bcache: stop dc->writeback_rate_update properly")
>     5138ac6748e3 ("bcache: fix misleading error message in bch_count_io_errors()")
>     539d39eb2708 ("bcache: fix wrong return value in bch_debug_init()")
>     5fa89fb9a86b ("bcache: don't write back data if reading it failed")
>     6f10f7d1b02b ("bcache: style fix to replace 'unsigned' by 'unsigned int'")
>     771f393e8ffc ("bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags")
>     7ba0d830dc0e ("bcache: set error_limit correctly")
>     7e027ca4b534 ("bcache: add stop_when_cache_set_failed option to backing device")
>     804f3c6981f5 ("bcache: fix cached_dev->count usage for bch_cache_set_error()")
>     a8500fc816b1 ("bcache: rearrange writeback main thread ratelimit")
>     b1092c9af9ed ("bcache: allow quick writeback when backing idle")
>     bc082a55d25c ("bcache: fix inaccurate io state for detached bcache devices")
>     c7b7bd07404c ("bcache: add io_disable to struct cached_dev")
> 
> v4.9.155: Failed to apply! Possible dependencies:
>     1d316e658374 ("bcache: implement PI controller for writeback rate")
>     2831231d4c3f ("bcache: reduce cache_set devices iteration by devices_max_used")
>     297e3d854784 ("blk-throttle: make throtl_slice tunable")
>     3fd47bfe55b0 ("bcache: stop dc->writeback_rate_update properly")
>     4e4cbee93d56 ("block: switch bios to blk_status_t")
>     5138ac6748e3 ("bcache: fix misleading error message in bch_count_io_errors()")
>     6f10f7d1b02b ("bcache: style fix to replace 'unsigned' by 'unsigned int'")
>     7e027ca4b534 ("bcache: add stop_when_cache_set_failed option to backing device")
>     87760e5eef35 ("block: hook up writeback throttling")
>     9e234eeafbe1 ("blk-throttle: add a simple idle detection")
>     c7b7bd07404c ("bcache: add io_disable to struct cached_dev")
>     cf43e6be865a ("block: add scalable completion tracking of requests")
>     e806402130c9 ("block: split out request-only flags into a new namespace")
>     fbbaf700e7b1 ("block: trace completion of all bios.")
> 
> v4.4.173: Failed to apply! Possible dependencies:
>     005411ea7ee7 ("doc: update block/queue-sysfs.txt entries")
>     1d316e658374 ("bcache: implement PI controller for writeback rate")
>     27489a3c827b ("blk-mq: turn hctx->run_work into a regular work struct")
>     2831231d4c3f ("bcache: reduce cache_set devices iteration by devices_max_used")
>     297e3d854784 ("blk-throttle: make throtl_slice tunable")
>     38f8baae8905 ("block: factor out chained bio completion")
>     3fd47bfe55b0 ("bcache: stop dc->writeback_rate_update properly")
>     4e4cbee93d56 ("block: switch bios to blk_status_t")
>     511cbce2ff8b ("irq_poll: make blk-iopoll available outside the block layer")
>     5138ac6748e3 ("bcache: fix misleading error message in bch_count_io_errors()")
>     6f10f7d1b02b ("bcache: style fix to replace 'unsigned' by 'unsigned int'")
>     7e027ca4b534 ("bcache: add stop_when_cache_set_failed option to backing device")
>     87760e5eef35 ("block: hook up writeback throttling")
>     8d354f133e86 ("blk-mq: improve layout of blk_mq_hw_ctx")
>     9467f85960a3 ("blk-mq/cpu-notif: Convert to new hotplug state machine")
>     9e234eeafbe1 ("blk-throttle: add a simple idle detection")
>     af3e3a5259e3 ("block: don't unecessarily clobber bi_error for chained bios")
>     ba8c6967b739 ("block: cleanup bio_endio")
>     c7b7bd07404c ("bcache: add io_disable to struct cached_dev")
>     cf43e6be865a ("block: add scalable completion tracking of requests")
>     e57690fe009b ("blk-mq: don't overwrite rq->mq_ctx")
>     fbbaf700e7b1 ("block: trace completion of all bios.")
> 
> v3.18.134: Failed to apply! Possible dependencies:
>     0f8087ecdeac ("block: Consolidate static integrity profile properties")
>     1b94b5567e9c ("Btrfs, raid56: use a variant to record the operation type")
>     1d316e658374 ("bcache: implement PI controller for writeback rate")
>     2831231d4c3f ("bcache: reduce cache_set devices iteration by devices_max_used")
>     2c8cdd6ee4e7 ("Btrfs, replace: write dirty pages into the replace target device")
>     326e1dbb5736 ("block: remove management of bi_remaining when restoring original bi_end_io")
>     3fd47bfe55b0 ("bcache: stop dc->writeback_rate_update properly")
>     4246a0b63bd8 ("block: add a bi_error field to struct bio")
>     4e4cbee93d56 ("block: switch bios to blk_status_t")
>     5138ac6748e3 ("bcache: fix misleading error message in bch_count_io_errors()")
>     5a6ac9eacb49 ("Btrfs, raid56: support parity scrub on raid56")
>     6e9606d2a2dc ("Btrfs: add ref_count and free function for btrfs_bio")
>     6f10f7d1b02b ("bcache: style fix to replace 'unsigned' by 'unsigned int'")
>     7e027ca4b534 ("bcache: add stop_when_cache_set_failed option to backing device")
>     8e5cfb55d3f7 ("Btrfs: Make raid_map array be inlined in btrfs_bio structure")
>     af8e2d1df984 ("Btrfs, scrub: repair the common data on RAID5/6 if it is corrupted")
>     b89e1b012c7f ("Btrfs, raid56: don't change bbio and raid_map")
>     c4cf5261f8bf ("bio: skip atomic inc/dec of ->bi_remaining for non-chains")
>     c7b7bd07404c ("bcache: add io_disable to struct cached_dev")
>     f90523d1aa3c ("Btrfs: remove noused bbio_ret in __btrfs_map_block in condition")
> 
> 
> How should we proceed with this patch?

Can we rebase this patch for the stable kernels and send the rebased
versions to the stable kernel maintainers separately? If this
approach is acceptable for stable kernel maintenance, I would suggest
that Junhui Tang consider rebasing it for these stable kernels.

Thanks.

-- 

Coly Li


* Re: [PATCH 19/19] bcache: use (REQ_META|REQ_PRIO) to indicate bio for metadata
       [not found]   ` <20190212132824.1D1502084E@mail.kernel.org>
@ 2019-02-12 16:48     ` Coly Li
  0 siblings, 0 replies; 5+ messages in thread
From: Coly Li @ 2019-02-12 16:48 UTC (permalink / raw)
  To: Sasha Levin
  Cc: axboe, linux-bcache, linux-block, stable, Dave Chinner,
	Christoph Hellwig

On 2019/2/12 9:28 PM, Sasha Levin wrote:
> Hi,
> 
> [This is an automated email]
> 
> This commit has been processed because it contains a -stable tag.
> The stable tag indicates that it's relevant for the following trees: all
> 
> The bot has tested the following trees: v4.20.7, v4.19.20, v4.14.98, v4.9.155, v4.4.173, v3.18.134.
> 
> v4.20.7: Build OK!
> v4.19.20: Failed to apply! Possible dependencies:
>     752f66a75aba ("bcache: use REQ_PRIO to indicate bio for metadata")
> 
> v4.14.98: Failed to apply! Possible dependencies:
>     752f66a75aba ("bcache: use REQ_PRIO to indicate bio for metadata")
>     b41c9b0266e8 ("bcache: update bio->bi_opf bypass/writeback REQ_ flag hints")
> 
> v4.9.155: Failed to apply! Possible dependencies:
>     752f66a75aba ("bcache: use REQ_PRIO to indicate bio for metadata")
>     83b5df67c509 ("bcache: use op_is_sync to check for synchronous requests")
>     b41c9b0266e8 ("bcache: update bio->bi_opf bypass/writeback REQ_ flag hints")
> 
> v4.4.173: Failed to apply! Possible dependencies:
>     09cbfeaf1a5a ("mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros")
>     1c2e54e1ed6f ("dm thin: bump thin and thin-pool target versions")
>     1eff9d322a44 ("block: rename bio bi_rw to bi_opf")
>     202bae52934d ("dm thin: unroll issue_discard() to create longer discard bio chains")
>     38f252553300 ("block: add __blkdev_issue_discard")
>     3dba53a958a7 ("dm thin: use __blkdev_issue_discard for async discard support")
>     4e49ea4a3d27 ("block/fs/drivers: remove rw argument from submit_bio")
>     83b5df67c509 ("bcache: use op_is_sync to check for synchronous requests")
>     9082e87bfbf8 ("block: remove struct bio_batch")
>     a6111d11b8b5 ("btrfs: raid56: Use raid_write_end_io for scrub")
>     b41c9b0266e8 ("bcache: update bio->bi_opf bypass/writeback REQ_ flag hints")
>     bbd848e0fade ("block: reinstate early return of -EOPNOTSUPP from blkdev_issue_discard")
>     c3667cc61904 ("dm thin: consistently return -ENOSPC if pool has run out of data space")
>     c8d93247f1d0 ("bcache: use op_is_write instead of checking for REQ_WRITE")
>     d57d611505d9 ("kernel/fs: fix I/O wait not accounted for RW O_DSYNC")
> 
> v3.18.134: Failed to apply! Possible dependencies:
>     1b94b5567e9c ("Btrfs, raid56: use a variant to record the operation type")
>     1eff9d322a44 ("block: rename bio bi_rw to bi_opf")
>     2c8cdd6ee4e7 ("Btrfs, replace: write dirty pages into the replace target device")
>     326e1dbb5736 ("block: remove management of bi_remaining when restoring original bi_end_io")
>     4245215d6a8d ("Btrfs, raid56: fix use-after-free problem in the final device replace procedure on raid56")
>     5a6ac9eacb49 ("Btrfs, raid56: support parity scrub on raid56")
>     6e9606d2a2dc ("Btrfs: add ref_count and free function for btrfs_bio")
>     83b5df67c509 ("bcache: use op_is_sync to check for synchronous requests")
>     8e5cfb55d3f7 ("Btrfs: Make raid_map array be inlined in btrfs_bio structure")
>     af8e2d1df984 ("Btrfs, scrub: repair the common data on RAID5/6 if it is corrupted")
>     b41c9b0266e8 ("bcache: update bio->bi_opf bypass/writeback REQ_ flag hints")
>     b7c44ed9d2fc ("block: manipulate bio->bi_flags through helpers")
>     b89e1b012c7f ("Btrfs, raid56: don't change bbio and raid_map")
>     c4cf5261f8bf ("bio: skip atomic inc/dec of ->bi_remaining for non-chains")
>     c8d93247f1d0 ("bcache: use op_is_write instead of checking for REQ_WRITE")
>     f90523d1aa3c ("Btrfs: remove noused bbio_ret in __btrfs_map_block in condition")
> 
> 
> How should we proceed with this patch?

Can I rebase this patch for each stable kernel and send the rebased
patches to stable@vger.kernel.org?

Thanks.

-- 

Coly Li

