Linux-Block Archive on lore.kernel.org
* [RFC PATCH v2 0/4] block layer change necessary for bcache zoned device support
@ 2020-05-16  3:54 Coly Li
  2020-05-16  3:54 ` [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13 Coly Li
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: Coly Li @ 2020-05-16  3:54 UTC (permalink / raw)
  To: linux-block, damien.lemoal, hare, hch, axboe
  Cc: linux-bcache, kbusch, Coly Li

Hi folks,

Recently I have been working on supporting bcache created on zoned
devices, e.g. host managed SMR hard drives, so that frequent random READ
I/Os on these SMR drives can be accelerated.

To make the bcache code work correctly, some small but potentially
important changes to the block layer code are necessary.

Thanks to Keith Busch for the review comments on v1. I fixed the typo
and post this v2 series for your review and comments.

Thank you all in advance.

Coly Li 
---

Coly Li (4):
  block: change REQ_OP_ZONE_RESET from 6 to 13
  block: change REQ_OP_ZONE_RESET_ALL from 8 to 15
  block: remove queue_is_mq restriction from blk_revalidate_disk_zones()
  block: set bi_size to REQ_OP_ZONE_RESET bio

 block/blk-zoned.c         | 6 ++++--
 include/linux/blk_types.h | 8 ++++----
 2 files changed, 8 insertions(+), 6 deletions(-)

-- 
2.25.0


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16  3:54 [RFC PATCH v2 0/4] block layer change necessary for bcache zoned device support Coly Li
@ 2020-05-16  3:54 ` Coly Li
  2020-05-16  4:06   ` Chaitanya Kulkarni
                     ` (2 more replies)
  2020-05-16  3:54 ` [RFC PATCH v2 2/4] block: change REQ_OP_ZONE_RESET_ALL from 8 to 15 Coly Li
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 26+ messages in thread
From: Coly Li @ 2020-05-16  3:54 UTC (permalink / raw)
  To: linux-block, damien.lemoal, hare, hch, axboe
  Cc: linux-bcache, kbusch, Coly Li, Hannes Reinecke, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

For a zoned device, e.g. a host managed SMR hard drive, REQ_OP_ZONE_RESET
resets a zone's write pointer back to the start LBA of the zone. After
the write pointer is reset, all previously stored data in the zone is
invalid and no longer accessible. Therefore this op code changes on-disk
data and belongs with the WRITE request op codes.

Currently REQ_OP_ZONE_RESET is defined as 6, but the op code convention
is that READ requests are even numbers and WRITE requests are odd
numbers. See how op_is_write() is defined,

397 static inline bool op_is_write(unsigned int op)
398 {
399         return (op & 1);
400 }

When a bcache device is created on top of a zoned SMR drive, the bcache
driver has to handle the REQ_OP_ZONE_RESET bio by invalidating all
cached data belonging to the LBA range of the zone being reset. A wrong
op code value makes this zone management bio go into bcache's read
request code path and causes undefined results.

This patch changes the REQ_OP_ZONE_RESET value from 6 to 13; the new
odd number makes the bcache driver handle this zone management bio
properly in the write request code path.

Fixes: 87374179c535 ("block: add a proper block layer data direction encoding")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@fb.com>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Shaun Tancheff <shaun.tancheff@seagate.com>
---
 include/linux/blk_types.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 31eb92876be7..8f7bc15a6de3 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -282,8 +282,6 @@ enum req_opf {
 	REQ_OP_DISCARD		= 3,
 	/* securely erase sectors */
 	REQ_OP_SECURE_ERASE	= 5,
-	/* reset a zone write pointer */
-	REQ_OP_ZONE_RESET	= 6,
 	/* write the same sector many times */
 	REQ_OP_WRITE_SAME	= 7,
 	/* reset all the zone present on the device */
@@ -296,6 +294,8 @@ enum req_opf {
 	REQ_OP_ZONE_CLOSE	= 11,
 	/* Transition a zone to full */
 	REQ_OP_ZONE_FINISH	= 12,
+	/* reset a zone write pointer */
+	REQ_OP_ZONE_RESET	= 13,
 
 	/* SCSI passthrough using struct scsi_request */
 	REQ_OP_SCSI_IN		= 32,
-- 
2.25.0



* [RFC PATCH v2 2/4] block: change REQ_OP_ZONE_RESET_ALL from 8 to 15
  2020-05-16  3:54 [RFC PATCH v2 0/4] block layer change necessary for bcache zoned device support Coly Li
  2020-05-16  3:54 ` [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13 Coly Li
@ 2020-05-16  3:54 ` Coly Li
  2020-05-18  0:36   ` Damien Le Moal
  2020-05-16  3:54 ` [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones() Coly Li
  2020-05-16  3:54 ` [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio Coly Li
  3 siblings, 1 reply; 26+ messages in thread
From: Coly Li @ 2020-05-16  3:54 UTC (permalink / raw)
  To: linux-block, damien.lemoal, hare, hch, axboe
  Cc: linux-bcache, kbusch, Coly Li, Chaitanya Kulkarni, Johannes Thumshirn

For a zoned device, e.g. a host managed SMR hard drive, the request op
REQ_OP_ZONE_RESET_ALL resets the write pointers of all zones to the
start LBA of the zone they belong to. After all the write pointers are
reset, all previously stored data is invalid and no longer accessible.
Therefore this op code changes on-disk data and belongs with the WRITE
request op codes.

Similar to the REQ_OP_ZONE_RESET problem, REQ_OP_ZONE_RESET_ALL is the
even number 8, which means bios with REQ_OP_ZONE_RESET_ALL in
bio->bi_opf are treated as READ requests by op_is_write().

The same problem happens when a bcache device is created on top of a
zoned device such as a host managed SMR hard drive: the bcache driver
should invalidate all cached data for the REQ_OP_ZONE_RESET_ALL request,
but this zone management bio is mistakenly treated as a READ request and
goes into the bcache read code path, which causes undefined behavior.

This patch changes the REQ_OP_ZONE_RESET_ALL value from 8 to 15; the
new odd number makes the bcache driver handle this zone management bio
properly in the write request code path.

Fixes: e84e8f066395 ("block: add req op to reset all zones and flag")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Keith Busch <kbusch@kernel.org>
---
Changelog:
v2: fix typo for REQ_OP_ZONE_RESET_ALL.
v1: initial version.

 include/linux/blk_types.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 8f7bc15a6de3..618032fa1b29 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -284,8 +284,6 @@ enum req_opf {
 	REQ_OP_SECURE_ERASE	= 5,
 	/* write the same sector many times */
 	REQ_OP_WRITE_SAME	= 7,
-	/* reset all the zone present on the device */
-	REQ_OP_ZONE_RESET_ALL	= 8,
 	/* write the zero filled sector many times */
 	REQ_OP_WRITE_ZEROES	= 9,
 	/* Open a zone */
@@ -296,6 +294,8 @@ enum req_opf {
 	REQ_OP_ZONE_FINISH	= 12,
 	/* reset a zone write pointer */
 	REQ_OP_ZONE_RESET	= 13,
+	/* reset all the zone present on the device */
+	REQ_OP_ZONE_RESET_ALL	= 15,
 
 	/* SCSI passthrough using struct scsi_request */
 	REQ_OP_SCSI_IN		= 32,
-- 
2.25.0



* [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones()
  2020-05-16  3:54 [RFC PATCH v2 0/4] block layer change necessary for bcache zoned device support Coly Li
  2020-05-16  3:54 ` [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13 Coly Li
  2020-05-16  3:54 ` [RFC PATCH v2 2/4] block: change REQ_OP_ZONE_RESET_ALL from 8 to 15 Coly Li
@ 2020-05-16  3:54 ` Coly Li
  2020-05-16 12:40   ` Christoph Hellwig
  2020-05-18  0:39   ` Damien Le Moal
  2020-05-16  3:54 ` [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio Coly Li
  3 siblings, 2 replies; 26+ messages in thread
From: Coly Li @ 2020-05-16  3:54 UTC (permalink / raw)
  To: linux-block, damien.lemoal, hare, hch, axboe
  Cc: linux-bcache, kbusch, Coly Li

The bcache driver is bio based, NOT a request based multiqueue driver.
If a zoned SMR hard drive is used as the backing device of a bcache
device, calling blk_revalidate_disk_zones() for the bcache device fails
due to the following check in blk_revalidate_disk_zones(),
478       if (WARN_ON_ONCE(!queue_is_mq(q)))
479             return -EIO;

Now that bcache is able to export the zone information of the
underlying zoned SMR drives, and zonefs can be formatted on top of a
bcache device, the restriction that a zoned device must be multiqueue
is unnecessary.

Commit ae58954d8734c ("block: don't handle bio based drivers in
blk_revalidate_disk_zones") says that bio based drivers should not call
blk_revalidate_disk_zones() and should just manually update their own
q->nr_zones, but this is inaccurate. Bio based drivers also need to set
their zone size and initialize the bitmaps for conventional and
sequential zones, so it is necessary to call
blk_revalidate_disk_zones() for bio based drivers as well.

This patch removes the above queue_is_mq() restriction to permit the
bcache driver to call blk_revalidate_disk_zones() when initializing the
zone information of a bcache device.

Fixes: ae58954d8734c ("block: don't handle bio based drivers in blk_revalidate_disk_zones")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Keith Busch <kbusch@kernel.org>
---
 block/blk-zoned.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index f87956e0dcaf..1e0708c68267 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -475,8 +475,6 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
 
 	if (WARN_ON_ONCE(!blk_queue_is_zoned(q)))
 		return -EIO;
-	if (WARN_ON_ONCE(!queue_is_mq(q)))
-		return -EIO;
 
 	/*
 	 * Ensure that all memory allocations in this context are done as if
-- 
2.25.0



* [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio
  2020-05-16  3:54 [RFC PATCH v2 0/4] block layer change necessary for bcache zoned device support Coly Li
                   ` (2 preceding siblings ...)
  2020-05-16  3:54 ` [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones() Coly Li
@ 2020-05-16  3:54 ` Coly Li
  2020-05-16 12:53   ` Christoph Hellwig
  2020-05-18  0:59   ` Damien Le Moal
  3 siblings, 2 replies; 26+ messages in thread
From: Coly Li @ 2020-05-16  3:54 UTC (permalink / raw)
  To: linux-block, damien.lemoal, hare, hch, axboe
  Cc: linux-bcache, kbusch, Coly Li, Ajay Joshi, Chaitanya Kulkarni,
	Hannes Reinecke, Johannes Thumshirn

For a zoned device, zone management ioctl commands are converted into
zone management bios and handled by blkdev_zone_mgmt(). Four zone
management bios are handled; their op codes are,
- REQ_OP_ZONE_RESET
  Reset the zone's write pointer, emptying all previously stored data.
- REQ_OP_ZONE_OPEN
  Open the zones in the specified sector range; no influence on data.
- REQ_OP_ZONE_CLOSE
  Close the zones in the specified sector range; no influence on data.
- REQ_OP_ZONE_FINISH
  Mark the zone as full; no influence on data.
All these zone management bios have a 0 byte size, i.e. their bi_size
is 0.

A zero length bio works fine for the zone management bios other than
REQ_OP_ZONE_RESET, until a zoned device, e.g. a host managed SMR hard
drive, is used as the backing device of a bcache device.

When a bcache device (a virtual block device that forwards bios, like
the md raid drivers) is created on top of the zoned device and a fast
SSD is attached as a cache device, the bcache driver may cache
frequently and randomly read data on the fast SSD to accelerate hot
data READ performance.

When the bcache driver receives a REQ_OP_ZONE_RESET zone management
bio, besides forwarding the request to the underlying zoned device,
e.g. the host managed SMR hard drive, it should also invalidate all
cached data on the SSD for the zone being reset. Otherwise bcache will
continue to serve the outdated cached data to READ requests, causing
potential data inconsistency and corruption.

In order to invalidate the outdated data on the SSD for the reset zone,
bcache needs to know not only the start LBA but also the length of the
zone being reset. Otherwise bcache cannot accurately invalidate the
outdated cached data.

Is it possible to simply set bi_size inside the bcache driver? The
answer is NO. Although every REQ_OP_ZONE_RESET bio has exactly the zone
size, i.e. q->limits.chunk_sectors, as its length, some other stacking
block driver may (in the future) sit between the bcache driver and
blkdev_zone_mgmt() where the zone management bio is composed.

The best location to set bi_size is where the zone management bio is
composed, in blkdev_zone_mgmt(); then no matter how the bio is split
before the bcache driver receives it, the bcache driver can always
correctly invalidate the range being reset.

This patch sets bi_size on the REQ_OP_ZONE_RESET bio of each zone being
reset. REQ_OP_ZONE_RESET_ALL is special: its bi_size should be set to
the capacity of the whole drive, so that bcache can invalidate all
cached data on the SSD for the zoned backing device.

With this change, the bcache code can handle a REQ_OP_ZONE_RESET bio
very similarly to a REQ_OP_DISCARD bio, with very little change.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Ajay Joshi <ajay.joshi@wdc.com>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Keith Busch <kbusch@kernel.org>
---
Changelog:
v2: fix typo for REQ_OP_ZONE_RESET_ALL.
v1: initial version.

 block/blk-zoned.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 1e0708c68267..01d91314399b 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -227,11 +227,15 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
 		if (op == REQ_OP_ZONE_RESET &&
 		    blkdev_allow_reset_all_zones(bdev, sector, nr_sectors)) {
 			bio->bi_opf = REQ_OP_ZONE_RESET_ALL;
+			bio->bi_iter.bi_sector = sector;
+			bio->bi_iter.bi_size = nr_sectors;
 			break;
 		}
 
 		bio->bi_opf = op | REQ_SYNC;
 		bio->bi_iter.bi_sector = sector;
+		if (op == REQ_OP_ZONE_RESET)
+			bio->bi_iter.bi_size = zone_sectors;
 		sector += zone_sectors;
 
 		/* This may take a while, so be nice to others */
-- 
2.25.0



* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16  3:54 ` [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13 Coly Li
@ 2020-05-16  4:06   ` Chaitanya Kulkarni
  2020-05-16  9:33     ` Coly Li
  2020-05-16 12:38   ` Christoph Hellwig
  2020-05-18  0:33   ` Damien Le Moal
  2 siblings, 1 reply; 26+ messages in thread
From: Chaitanya Kulkarni @ 2020-05-16  4:06 UTC (permalink / raw)
  To: Coly Li, linux-block, Damien Le Moal, hare, hch, axboe
  Cc: linux-bcache, kbusch, Hannes Reinecke, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

On 05/15/2020 08:55 PM, Coly Li wrote:
>   	REQ_OP_ZONE_FINISH	= 12,
> +	/* reset a zone write pointer */
> +	REQ_OP_ZONE_RESET	= 13,

I think 13 is taken; I found this in the block tree branch
for-5.8/block:

316         REQ_OP_ZONE_FINISH      = 12,
317         /* write data at the current zone write pointer */
318         REQ_OP_ZONE_APPEND      = 13,


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16  4:06   ` Chaitanya Kulkarni
@ 2020-05-16  9:33     ` Coly Li
  0 siblings, 0 replies; 26+ messages in thread
From: Coly Li @ 2020-05-16  9:33 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-block, Damien Le Moal, hare, hch, axboe
  Cc: linux-bcache, kbusch, Hannes Reinecke, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

On 2020/5/16 12:06, Chaitanya Kulkarni wrote:
> On 05/15/2020 08:55 PM, Coly Li wrote:
>>   	REQ_OP_ZONE_FINISH	= 12,
>> +	/* reset a zone write pointer */
>> +	REQ_OP_ZONE_RESET	= 13,
> 
> I think 13 is taken, I found this in block tree branch :-
> for-5.8/block
> 
> 316         REQ_OP_ZONE_FINISH      = 12,
> 317         /* write data at the current zone write pointer */
> 318         REQ_OP_ZONE_APPEND      = 13,
> 

Aha, sure. Then I have to look at 15.

Thanks for the hint.

Coly Li


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16  3:54 ` [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13 Coly Li
  2020-05-16  4:06   ` Chaitanya Kulkarni
@ 2020-05-16 12:38   ` Christoph Hellwig
  2020-05-16 12:44     ` Coly Li
  2020-05-18  0:33   ` Damien Le Moal
  2 siblings, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2020-05-16 12:38 UTC (permalink / raw)
  To: Coly Li
  Cc: linux-block, damien.lemoal, hare, hch, axboe, linux-bcache,
	kbusch, Hannes Reinecke, Jens Axboe, Johannes Thumshirn,
	Shaun Tancheff

On Sat, May 16, 2020 at 11:54:31AM +0800, Coly Li wrote:
> For a zoned device, e.g. a host managed SMR hard drive, REQ_OP_ZONE_RESET
> resets a zone's write pointer back to the start LBA of the zone. After
> the write pointer is reset, all previously stored data in the zone is
> invalid and no longer accessible. Therefore this op code changes on-disk
> data and belongs with the WRITE request op codes.
> 
> Currently REQ_OP_ZONE_RESET is defined as 6, but the op code convention
> is that READ requests are even numbers and WRITE requests are odd
> numbers. See how op_is_write() is defined,

The convention is all about data transfer, and zone reset does not
transfer any data.


* Re: [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones()
  2020-05-16  3:54 ` [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones() Coly Li
@ 2020-05-16 12:40   ` Christoph Hellwig
  2020-05-16 13:13     ` Coly Li
  2020-05-18  0:39   ` Damien Le Moal
  1 sibling, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2020-05-16 12:40 UTC (permalink / raw)
  To: Coly Li
  Cc: linux-block, damien.lemoal, hare, hch, axboe, linux-bcache, kbusch

On Sat, May 16, 2020 at 11:54:33AM +0800, Coly Li wrote:
> The bcache driver is bio based, NOT a request based multiqueue driver.
> If a zoned SMR hard drive is used as the backing device of a bcache
> device, calling blk_revalidate_disk_zones() for the bcache device fails
> due to the following check in blk_revalidate_disk_zones(),
> 478       if (WARN_ON_ONCE(!queue_is_mq(q)))
> 479             return -EIO;
> 
> Now that bcache is able to export the zone information of the
> underlying zoned SMR drives, and zonefs can be formatted on top of a
> bcache device, the restriction that a zoned device must be multiqueue
> is unnecessary.
> 
> Commit ae58954d8734c ("block: don't handle bio based drivers in
> blk_revalidate_disk_zones") says that bio based drivers should not call
> blk_revalidate_disk_zones() and should just manually update their own
> q->nr_zones, but this is inaccurate. Bio based drivers also need to set
> their zone size and initialize the bitmaps for conventional and
> sequential zones, so it is necessary to call
> blk_revalidate_disk_zones() for bio based drivers as well.

Why would you need these bitmaps for bcache?  There is no reason to
serialize requests for stacking drivers, and you can already derive
if a zone is sequential or not from whatever internal information
you use.

So without a user that actually makes sense: NAK.


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16 12:38   ` Christoph Hellwig
@ 2020-05-16 12:44     ` Coly Li
  2020-05-16 12:50       ` Christoph Hellwig
  0 siblings, 1 reply; 26+ messages in thread
From: Coly Li @ 2020-05-16 12:44 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-block, damien.lemoal, hare, axboe, linux-bcache, kbusch,
	Hannes Reinecke, Jens Axboe, Johannes Thumshirn, Shaun Tancheff

On 2020/5/16 20:38, Christoph Hellwig wrote:
> On Sat, May 16, 2020 at 11:54:31AM +0800, Coly Li wrote:
>> For a zoned device, e.g. a host managed SMR hard drive, REQ_OP_ZONE_RESET
>> resets a zone's write pointer back to the start LBA of the zone. After
>> the write pointer is reset, all previously stored data in the zone is
>> invalid and no longer accessible. Therefore this op code changes on-disk
>> data and belongs with the WRITE request op codes.
>>
>> Currently REQ_OP_ZONE_RESET is defined as 6, but the op code convention
>> is that READ requests are even numbers and WRITE requests are odd
>> numbers. See how op_is_write() is defined,
> 
> The convention is all about data transfer, and zone reset does not
> transfer any data.
> 

Yes, you are right; just like REQ_OP_DISCARD, it does not transfer any
data but changes the data on the device. If a request changes the
stored data, it effectively transfers data.

This is why in the patch "block: set bi_size to REQ_OP_ZONE_RESET bio"
I set bi_size on the REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL bios.

Coly Li


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16 12:44     ` Coly Li
@ 2020-05-16 12:50       ` Christoph Hellwig
  2020-05-16 13:05         ` Coly Li
  0 siblings, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2020-05-16 12:50 UTC (permalink / raw)
  To: Coly Li
  Cc: Christoph Hellwig, linux-block, damien.lemoal, hare, axboe,
	linux-bcache, kbusch, Hannes Reinecke, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

On Sat, May 16, 2020 at 08:44:45PM +0800, Coly Li wrote:
> Yes, you are right; just like REQ_OP_DISCARD, it does not transfer any
> data but changes the data on the device. If a request changes the
> stored data, it effectively transfers data.

REQ_OP_DISCARD is a special case because most implementations end up
transferring data; it just gets attached in the low-level driver.


* Re: [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio
  2020-05-16  3:54 ` [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio Coly Li
@ 2020-05-16 12:53   ` Christoph Hellwig
  2020-05-18  0:59   ` Damien Le Moal
  1 sibling, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2020-05-16 12:53 UTC (permalink / raw)
  To: Coly Li
  Cc: linux-block, damien.lemoal, hare, hch, axboe, linux-bcache,
	kbusch, Ajay Joshi, Chaitanya Kulkarni, Hannes Reinecke,
	Johannes Thumshirn

This looks OK, but I want to hear Damien's opinion.


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16 12:50       ` Christoph Hellwig
@ 2020-05-16 13:05         ` Coly Li
  2020-05-16 15:36           ` Christoph Hellwig
  0 siblings, 1 reply; 26+ messages in thread
From: Coly Li @ 2020-05-16 13:05 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-block, damien.lemoal, hare, axboe, linux-bcache, kbusch,
	Hannes Reinecke, Jens Axboe, Johannes Thumshirn, Shaun Tancheff

On 2020/5/16 20:50, Christoph Hellwig wrote:
> On Sat, May 16, 2020 at 08:44:45PM +0800, Coly Li wrote:
>> Yes, you are right; just like REQ_OP_DISCARD, it does not transfer any
>> data but changes the data on the device. If a request changes the
>> stored data, it effectively transfers data.
> 
> REQ_OP_DISCARD is a special case because most implementations end up
> transferring data; it just gets attached in the low-level driver.
> 

Yes, REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL are quite similar to
REQ_OP_DISCARD. Reading data from the LBA range of a reset zone is not
recommended, and the content is undefined.

For bcache, similar to REQ_OP_DISCARD, REQ_OP_ZONE_RESET and
REQ_OP_ZONE_RESET_ALL are handled in the same way: if the backing
device supports discard/zone reset and the operation succeeds, the
cached data on the SSD covered by the LBA range should be invalidated;
otherwise users will read outdated garbage data.

We should treat REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL as being in
the WRITE direction.

Coly Li


* Re: [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones()
  2020-05-16 12:40   ` Christoph Hellwig
@ 2020-05-16 13:13     ` Coly Li
  2020-05-16 15:35       ` Christoph Hellwig
  2020-05-18  1:07       ` Damien Le Moal
  0 siblings, 2 replies; 26+ messages in thread
From: Coly Li @ 2020-05-16 13:13 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-block, damien.lemoal, hare, axboe, linux-bcache, kbusch

On 2020/5/16 20:40, Christoph Hellwig wrote:
> On Sat, May 16, 2020 at 11:54:33AM +0800, Coly Li wrote:
>> The bcache driver is bio based, NOT a request based multiqueue driver.
>> If a zoned SMR hard drive is used as the backing device of a bcache
>> device, calling blk_revalidate_disk_zones() for the bcache device fails
>> due to the following check in blk_revalidate_disk_zones(),
>> 478       if (WARN_ON_ONCE(!queue_is_mq(q)))
>> 479             return -EIO;
>>
>> Now that bcache is able to export the zone information of the
>> underlying zoned SMR drives, and zonefs can be formatted on top of a
>> bcache device, the restriction that a zoned device must be multiqueue
>> is unnecessary.
>>
>> Commit ae58954d8734c ("block: don't handle bio based drivers in
>> blk_revalidate_disk_zones") says that bio based drivers should not call
>> blk_revalidate_disk_zones() and should just manually update their own
>> q->nr_zones, but this is inaccurate. Bio based drivers also need to set
>> their zone size and initialize the bitmaps for conventional and
>> sequential zones, so it is necessary to call
>> blk_revalidate_disk_zones() for bio based drivers as well.
> 
> Why would you need these bitmaps for bcache?  There is no reason to
> serialize requests for stacking drivers, and you can already derive
> if a zone is sequential or not from whatever internal information
> you use.
> 
> So without a user that actually makes sense: NAK.
> 

It is OK for me to set nr_zones and the zone size without calling
blk_revalidate_disk_zones().

Coly Li


* Re: [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones()
  2020-05-16 13:13     ` Coly Li
@ 2020-05-16 15:35       ` Christoph Hellwig
  2020-05-18  1:07       ` Damien Le Moal
  1 sibling, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2020-05-16 15:35 UTC (permalink / raw)
  To: Coly Li
  Cc: Christoph Hellwig, linux-block, damien.lemoal, hare, axboe,
	linux-bcache, kbusch

On Sat, May 16, 2020 at 09:13:50PM +0800, Coly Li wrote:
> It is OK for me to set nr_zones and the zone size without calling
> blk_revalidate_disk_zones().

Yes.  And without having seen your code I'm pretty sure it should
get simpler by not using blk_revalidate_disk_zones.


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16 13:05         ` Coly Li
@ 2020-05-16 15:36           ` Christoph Hellwig
  2020-05-17  5:30             ` Coly Li
  0 siblings, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2020-05-16 15:36 UTC (permalink / raw)
  To: Coly Li
  Cc: Christoph Hellwig, linux-block, damien.lemoal, hare, axboe,
	linux-bcache, kbusch, Hannes Reinecke, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

On Sat, May 16, 2020 at 09:05:39PM +0800, Coly Li wrote:
> On 2020/5/16 20:50, Christoph Hellwig wrote:
> > On Sat, May 16, 2020 at 08:44:45PM +0800, Coly Li wrote:
> >> Yes, you are right; just like REQ_OP_DISCARD, it does not transfer any
> >> data but changes the data on the device. If a request changes the
> >> stored data, it effectively transfers data.
> > 
> > REQ_OP_DISCARD is a special case because most implementations end up
> > transferring data; it just gets attached in the low-level driver.
> > 
> 
> Yes, REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL are quite similar to
> REQ_OP_DISCARD. Reading data from the LBA range of a reset zone is not
> recommended, and the content is undefined.
> 
> For bcache, similar to REQ_OP_DISCARD, REQ_OP_ZONE_RESET and
> REQ_OP_ZONE_RESET_ALL are handled in the same way: if the backing
> device supports discard/zone reset and the operation succeeds, the
> cached data on the SSD covered by the LBA range should be invalidated;
> otherwise users will read outdated garbage data.
> 
> We should treat REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL as being in
> the WRITE direction.

No, the difference is that the underlying SCSI/ATA/NVMe commands
usually do actually transfer data.  Take a look at the special_vec
field in struct request and the RQF_SPECIAL_PAYLOAD flag.


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16 15:36           ` Christoph Hellwig
@ 2020-05-17  5:30             ` Coly Li
  2020-05-18  6:53               ` Hannes Reinecke
  0 siblings, 1 reply; 26+ messages in thread
From: Coly Li @ 2020-05-17  5:30 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-block, damien.lemoal, hare, axboe, linux-bcache, kbusch,
	Hannes Reinecke, Jens Axboe, Johannes Thumshirn, Shaun Tancheff

On 2020/5/16 23:36, Christoph Hellwig wrote:
> On Sat, May 16, 2020 at 09:05:39PM +0800, Coly Li wrote:
>> On 2020/5/16 20:50, Christoph Hellwig wrote:
>>> On Sat, May 16, 2020 at 08:44:45PM +0800, Coly Li wrote:
>>>> Yes, you are right; just like REQ_OP_DISCARD, it does not transfer any
>>>> data but changes the data on the device. If a request changes the
>>>> stored data, it effectively transfers data.
>>>
>>> REQ_OP_DISCARD is a special case because most implementations end up
>>> transferring data; it just gets attached in the low-level driver.
>>>
>>
>> Yes, REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL are quite similar to
>> REQ_OP_DISCARD. Reading data from the LBA range of a reset zone is not
>> recommended, and the content is undefined.
>>
>> For bcache, similar to REQ_OP_DISCARD, REQ_OP_ZONE_RESET and
>> REQ_OP_ZONE_RESET_ALL are handled in the same way: if the backing
>> device supports discard/zone reset and the operation succeeds, the
>> cached data on the SSD covered by the LBA range should be invalidated;
>> otherwise users will read outdated garbage data.
>>
>> We should treat REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL as being in
>> the WRITE direction.
> 
> No, the difference is that the underlying SCSI/ATA/NVMe commands
> usually do actually transfer data.  Take a look at the special_vec
> field in struct request and the RQF_SPECIAL_PAYLOAD flag.
> 

Then bio_data_dir() will be problematic, as it is defined,
 52 /*
 53  * Return the data direction, READ or WRITE.
 54  */
 55 #define bio_data_dir(bio) \
 56         (op_is_write(bio_op(bio)) ? WRITE : READ)

For the REQ_OP_ZONE_RESET bio, bio_data_dir() will report it as READ.

Since you are the author, how should this be fixed?

Coly Li


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-16  3:54 ` [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13 Coly Li
  2020-05-16  4:06   ` Chaitanya Kulkarni
  2020-05-16 12:38   ` Christoph Hellwig
@ 2020-05-18  0:33   ` Damien Le Moal
  2020-05-18  5:09     ` Chaitanya Kulkarni
  2 siblings, 1 reply; 26+ messages in thread
From: Damien Le Moal @ 2020-05-18  0:33 UTC (permalink / raw)
  To: Coly Li, linux-block, hare, hch, axboe
  Cc: linux-bcache, kbusch, Hannes Reinecke, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

On 2020/05/16 12:55, Coly Li wrote:
> For a zoned device, e.g. a host managed SMR hard drive, REQ_OP_ZONE_RESET
> resets a zone's write pointer back to the start LBA of the zone. After
> the write pointer is reset, all previously stored data in this zone is
> invalid and inaccessible. Therefore, this op code changes on-disk data
> and belongs with the WRITE request op codes.
> 
> Current REQ_OP_ZONE_RESET is defined as number 6, but the convention
> for op codes is that READ requests are even numbers and WRITE requests
> are odd numbers. See how op_is_write() is defined,
> 
> 397 static inline bool op_is_write(unsigned int op)
> 398 {
> 399         return (op & 1);
> 400 }
> 
> When creating a bcache device on top of a zoned SMR drive, the bcache
> driver has to handle the REQ_OP_ZONE_RESET bio by invalidating all
> cached data belonging to the LBA range of the resetting zone. A wrong
> op code value makes this zone management bio go into bcache's read
> request code path and causes undefined results.
> 
> This patch changes the REQ_OP_ZONE_RESET value from 6 to 13; the new
> odd number lets the bcache driver handle this zone management bio
> properly in the write request code path.
> 
> Fixes: 87374179c535 ("block: add a proper block layer data direction encoding")
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Jens Axboe <axboe@fb.com>
> Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> Cc: Keith Busch <kbusch@kernel.org>
> Cc: Shaun Tancheff <shaun.tancheff@seagate.com>
> ---
>  include/linux/blk_types.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index 31eb92876be7..8f7bc15a6de3 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -282,8 +282,6 @@ enum req_opf {
>  	REQ_OP_DISCARD		= 3,
>  	/* securely erase sectors */
>  	REQ_OP_SECURE_ERASE	= 5,
> -	/* reset a zone write pointer */
> -	REQ_OP_ZONE_RESET	= 6,
>  	/* write the same sector many times */
>  	REQ_OP_WRITE_SAME	= 7,
>  	/* reset all the zone present on the device */
> @@ -296,6 +294,8 @@ enum req_opf {
>  	REQ_OP_ZONE_CLOSE	= 11,
>  	/* Transition a zone to full */
>  	REQ_OP_ZONE_FINISH	= 12,
> +	/* reset a zone write pointer */
> +	REQ_OP_ZONE_RESET	= 13,

As Chaitanya pointed out, this is already used. Please rebase on Jens
block/for-5.8/block branch.

I do not see any problem with this change, but as Christoph commented, since
zone reset does not transfer any data, I do not really see the necessity for
this. Zone reset is indeed data-destructive, but it is not a "write" command.
But following DISCARD and having the op number as an odd number is OK I think.

>  
>  	/* SCSI passthrough using struct scsi_request */
>  	REQ_OP_SCSI_IN		= 32,
> 


-- 
Damien Le Moal
Western Digital Research


* Re: [RFC PATCH v2 2/4] block: block: change REQ_OP_ZONE_RESET_ALL from 8 to 15
  2020-05-16  3:54 ` [RFC PATCH v2 2/4] block: block: change REQ_OP_ZONE_RESET_ALL from 8 to 15 Coly Li
@ 2020-05-18  0:36   ` Damien Le Moal
  0 siblings, 0 replies; 26+ messages in thread
From: Damien Le Moal @ 2020-05-18  0:36 UTC (permalink / raw)
  To: Coly Li, linux-block, hare, hch, axboe
  Cc: linux-bcache, kbusch, Chaitanya Kulkarni, Johannes Thumshirn

On 2020/05/16 12:55, Coly Li wrote:
> For a zoned device, e.g. a host managed SMR hard drive, the request op
> REQ_OP_ZONE_RESET_ALL resets the write pointers of all zones to the
> start LBA of the zone they belong to. After all the write pointers
> are reset, all previously stored data is invalid and inaccessible.
> Therefore this op code changes on-disk data and belongs with the WRITE
> request op codes.
> 
> Similar to the problem of REQ_OP_ZONE_RESET, REQ_OP_ZONE_RESET_ALL is
> the even number 8, which means bios with REQ_OP_ZONE_RESET_ALL in
> bio->bi_opf will be treated as READ requests by op_is_write().
> 
> The same problem happens when a bcache device is created on top of a
> zoned device such as a host managed SMR hard drive: the bcache driver
> should invalidate all cached data for the REQ_OP_ZONE_RESET_ALL
> request, but this zone management bio is mistakenly treated as a READ
> request and goes into the bcache read code path, which causes undefined
> behavior.
> 
> This patch changes the REQ_OP_ZONE_RESET_ALL value from 8 to 15; this
> new odd number lets the bcache driver handle this zone management bio
> properly in the write request code path.
> 
> Fixes: e84e8f066395 ("block: add req op to reset all zones and flag")
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> Cc: Keith Busch <kbusch@kernel.org>
> ---
> Changelog:
> v2: fix typo for REQ_OP_ZONE_RESET_ALL.
> v1: initial version.
> 
>  include/linux/blk_types.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index 8f7bc15a6de3..618032fa1b29 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -284,8 +284,6 @@ enum req_opf {
>  	REQ_OP_SECURE_ERASE	= 5,
>  	/* write the same sector many times */
>  	REQ_OP_WRITE_SAME	= 7,
> -	/* reset all the zone present on the device */
> -	REQ_OP_ZONE_RESET_ALL	= 8,
>  	/* write the zero filled sector many times */
>  	REQ_OP_WRITE_ZEROES	= 9,
>  	/* Open a zone */
> @@ -296,6 +294,8 @@ enum req_opf {
>  	REQ_OP_ZONE_FINISH	= 12,
>  	/* reset a zone write pointer */
>  	REQ_OP_ZONE_RESET	= 13,
> +	/* reset all the zone present on the device */
> +	REQ_OP_ZONE_RESET_ALL	= 15,

Same comment as for patch 1. And you can probably squash this patch into patch 1
to avoid repeating mostly the same explanation twice in the commit message.
REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL are very closely related. The latter
may actually be processed using the former anyway, so the op number change for
both should be a single patch to be consistent.

>  
>  	/* SCSI passthrough using struct scsi_request */
>  	REQ_OP_SCSI_IN		= 32,
> 


-- 
Damien Le Moal
Western Digital Research


* Re: [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones()
  2020-05-16  3:54 ` [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones() Coly Li
  2020-05-16 12:40   ` Christoph Hellwig
@ 2020-05-18  0:39   ` Damien Le Moal
  1 sibling, 0 replies; 26+ messages in thread
From: Damien Le Moal @ 2020-05-18  0:39 UTC (permalink / raw)
  To: Coly Li, linux-block, hare, hch, axboe; +Cc: linux-bcache, kbusch

On 2020/05/16 12:55, Coly Li wrote:
> The bcache driver is bio based, NOT a request based multiqueue driver.
> If a zoned SMR hard drive is used as the backing device of a bcache
> device, calling blk_revalidate_disk_zones() for the bcache device will
> fail due to the following check in blk_revalidate_disk_zones(),
> 478       if (WARN_ON_ONCE(!queue_is_mq(q)))
> 479             return -EIO;
> 
> Now that bcache is able to export the zone information of the
> underlying zoned SMR drives and zonefs can be formatted on top of a
> bcache device, the restriction that a zoned device should be multiqueue
> is unnecessary.
> 
> Although commit ae58954d8734c ("block: don't handle bio based drivers
> in blk_revalidate_disk_zones") says that bio based drivers should not
> call blk_revalidate_disk_zones() and should just manually update their
> own q->nr_zones, this is inaccurate. Bio based drivers also need to set
> their zone size and initialize the bitmaps for conventional and
> sequential zones, so calling blk_revalidate_disk_zones() is necessary
> for bio based drivers too.
> 
> This patch removes the above queue_is_mq() restriction to permit the
> bcache driver to call blk_revalidate_disk_zones() when initializing the
> zone information of a bcache device.
> 
> Fixes: ae58954d8734c ("block: don't handle bio based drivers in blk_revalidate_disk_zones")
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Keith Busch <kbusch@kernel.org>
> ---
>  block/blk-zoned.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> index f87956e0dcaf..1e0708c68267 100644
> --- a/block/blk-zoned.c
> +++ b/block/blk-zoned.c
> @@ -475,8 +475,6 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
>  
>  	if (WARN_ON_ONCE(!blk_queue_is_zoned(q)))
>  		return -EIO;
> -	if (WARN_ON_ONCE(!queue_is_mq(q)))
> -		return -EIO;

Same comment as Christoph: BIO drivers can set nr_zones and chunk_sectors
themselves, manually. They also do not need the bitmaps as BIO devices do not
have an elevator, and as Christoph said, can (if needed) look at the bitmaps of
the underlying device (e.g. for zone conv vs seq zone type differentiation).

So I do not think this patch is needed, and would only cause more memory
consumption for no good reason that I can see.

>  
>  	/*
>  	 * Ensure that all memory allocations in this context are done as if
> 


-- 
Damien Le Moal
Western Digital Research


* Re: [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio
  2020-05-16  3:54 ` [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio Coly Li
  2020-05-16 12:53   ` Christoph Hellwig
@ 2020-05-18  0:59   ` Damien Le Moal
  2020-05-18  2:32     ` Coly Li
  1 sibling, 1 reply; 26+ messages in thread
From: Damien Le Moal @ 2020-05-18  0:59 UTC (permalink / raw)
  To: Coly Li, linux-block, hare, hch, axboe
  Cc: linux-bcache, kbusch, Ajay Joshi, Chaitanya Kulkarni,
	Hannes Reinecke, Johannes Thumshirn

On 2020/05/16 12:55, Coly Li wrote:
> Now, for a zoned device, zone management ioctl commands are converted
> into zone management bios and handled by blkdev_zone_mgmt(). Four zone
> management bios are handled; their op codes are:
> - REQ_OP_ZONE_RESET
>   Reset the zone's write pointer and empty all previously stored data.
> - REQ_OP_ZONE_OPEN
>   Open the zones in the specified sector range, no influence on data.
> - REQ_OP_ZONE_CLOSE
>   Close the zones in the specified sector range, no influence on data.
> - REQ_OP_ZONE_FINISH
>   Mark the zone as full, no influence on data.
> All the zone management bios have 0 byte size, i.e. their bi_size is 0.
> 
> Except for the REQ_OP_ZONE_RESET request, a zero length bio works fine
> for the other zone management bios, until the zoned device, e.g. a host
> managed SMR hard drive, can be created as a bcache device.
> 
> When a bcache device (a virtual block device that forwards bios, like
> the md raid drivers) is created on top of the zoned device, and a fast
> SSD is attached as a cache device, the bcache driver may cache frequent
> random READ requests on the fast SSD to accelerate hot data READ
> performance.
> 
> When the bcache driver receives a zone management bio with the
> REQ_OP_ZONE_RESET op, while forwarding the request to the underlying
> zoned device, e.g. a host managed SMR hard drive, it should also
> invalidate all cached data from the SSD for the resetting zone.
> Otherwise bcache will continue to provide the outdated cached data to
> READ requests and cause potential data inconsistency and corruption.
> 
> In order to invalidate outdated data from SSD for the reset zone, bcache
> needs to know not only the start LBA but also the range length of the
> resetting zone. Otherwise, bcache won't be able to accurately invalidate
> the outdated cached data.
> 
> Is it possible to simply set the bi_size inside the bcache driver? The
> answer is NO. Although every REQ_OP_ZONE_RESET bio has a length exactly
> equal to the zone size, i.e. q->limits.chunk_sectors, it is possible
> that some other stacking block driver (in the future) will exist
> between the bcache driver and blkdev_zone_mgmt() where the zone
> management bio is made.

But bcache "knows" what underlying devices it is using, right ? So getting the
SMR drive zone size using blk_queue_zone_sectors(), bdev_zone_sectors() or by
> directly accessing q->limits.chunk_sectors should not be a problem at all, no?

> 
> The best location to set bi_size is where the zone management bio is
> composed, in blkdev_zone_mgmt(); then no matter how this bio is split
> before the bcache driver receives it, bcache can always correctly
> invalidate the resetting range.
> 
> This patch sets the bi_size of the REQ_OP_ZONE_RESET bio for each
> resetting zone. Here REQ_OP_ZONE_RESET_ALL is special: its bi_size
> should be set to the capacity of the whole drive, so that bcache can
> invalidate all cached data from the SSD for the zoned backing device.

NACK. The problem here is that struct bio_vec bv_len field and struct bvec_iter
bi_size field are both "unsigned int". So they can describe at most 4G x 512B =
2TB ranges. For REQ_OP_ZONE_RESET_ALL, that is simply way too small (we already
are at 20TB capacity for SMR). One cannot issue a BIO with a length that is the
entire disk capacity.

I really think that bcache should handle the zone management ops as a special
case. I understand the goal of trying to minimize that by integrating them as
much as possible into the regular bcache IO path, but that really seems
dangerous to me. Since these operations will need remapping in bcache anyway
(for handling the first hidden zone containing the super block), all special
processing can be done under a single "if" which should not impact too much
performance of regular read/write commands in the hot path.

Device mapper has such code: see __split_and_process_bio() and its use of
op_is_zone_mgmt() function to handle zone management requests slightly
differently than other BIOs. That removes the need to rely on the
op_is_write() direction (patches 1 and 2 in this series) for zone resets
too, which I think will in the end make your bcache code a lot more solid.

> 
> With this change, now bcache code can handle REQ_OP_ZONE_RESET bio in
> the way very similar to REQ_OP_DISCARD bio with very little change.
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Ajay Joshi <ajay.joshi@wdc.com>
> Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> Cc: Keith Busch <kbusch@kernel.org>
> ---
> Changelog:
> v2: fix typo for REQ_OP_ZONE_RESET_ALL.
> v1: initial version.
> 
>  block/blk-zoned.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> index 1e0708c68267..01d91314399b 100644
> --- a/block/blk-zoned.c
> +++ b/block/blk-zoned.c
> @@ -227,11 +227,15 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
>  		if (op == REQ_OP_ZONE_RESET &&
>  		    blkdev_allow_reset_all_zones(bdev, sector, nr_sectors)) {
>  			bio->bi_opf = REQ_OP_ZONE_RESET_ALL;
> +			bio->bi_iter.bi_sector = sector;
> +			bio->bi_iter.bi_size = nr_sectors;
>  			break;
>  		}
>  
>  		bio->bi_opf = op | REQ_SYNC;
>  		bio->bi_iter.bi_sector = sector;
> +		if (op == REQ_OP_ZONE_RESET)
> +			bio->bi_iter.bi_size = zone_sectors;
>  		sector += zone_sectors;
>  
>  		/* This may take a while, so be nice to others */
> 


-- 
Damien Le Moal
Western Digital Research


* Re: [RFC PATCH v2 3/4] block: remove queue_is_mq restriction from blk_revalidate_disk_zones()
  2020-05-16 13:13     ` Coly Li
  2020-05-16 15:35       ` Christoph Hellwig
@ 2020-05-18  1:07       ` Damien Le Moal
  1 sibling, 0 replies; 26+ messages in thread
From: Damien Le Moal @ 2020-05-18  1:07 UTC (permalink / raw)
  To: Coly Li, Christoph Hellwig; +Cc: linux-block, hare, axboe, linux-bcache, kbusch

On 2020/05/16 22:14, Coly Li wrote:
> On 2020/5/16 20:40, Christoph Hellwig wrote:
>> On Sat, May 16, 2020 at 11:54:33AM +0800, Coly Li wrote:
>>> The bcache driver is bio based and NOT request based multiqueued driver,
>>> if a zoned SMR hard drive is used as backing device of a bcache device,
>>> calling blk_revalidate_disk_zones() for the bcache device will fail due
>>> to the following check in blk_revalidate_disk_zones(),
>>> 478       if (WARN_ON_ONCE(!queue_is_mq(q)))
>>> 479             return -EIO;
>>>
>>> Now bcache is able to export the zoned information from the underlying
>>> zoned SMR drives and format zonefs on top of a bcache device, the
>>> restriction that a zoned device should be multiqueued is unnecessary
>>> for now.
>>>
>>> Although in commit ae58954d8734c ("block: don't handle bio based drivers
>>> in blk_revalidate_disk_zones") it is said that bio based drivers should
>>> not call blk_revalidate_disk_zones() and just manually update their own
>>> q->nr_zones, but this is inaccurate. The bio based drivers also need to
>>> set their zone size and initialize bitmaps for cnv and seq zones, it is
>>> necessary to call blk_revalidate_disk_zones() for bio based drivers.
>>
>> Why would you need these bitmaps for bcache?  There is no reason to
>> serialize requests for stacking drivers, and you can already derive
>> if a zone is sequential or not from whatever internal information
>> you use.
>>
>> So without a user that actually makes sense: NAK.
>>
> 
> It is OK for me to set the zone_nr and zone size without calling
> blk_revalidate_disk_zones().

Yes, no problem with that.

This is how device mapper BIO based targets handle zoned devices. See
dm_set_device_limits() which uses bdev_stack_limits() for handling chunk_sectors
(zone size) and directly set q->limits.zoned to blk_queue_zoned_model(q) of the
the underlying device. For the number of zones, see dm_table_set_restrictions()
which uses blkdev_nr_zones(t->md->disk) to set q->nr_zones. Note that
blkdev_nr_zones() uses the gendisk capacity of the logical device (the bcache
device in your case) and the zone size, so both must be set first before calling
blkdev_nr_zones().


> 
> Coly Li
> 


-- 
Damien Le Moal
Western Digital Research


* Re: [RFC PATCH v2 4/4] block: set bi_size to REQ_OP_ZONE_RESET bio
  2020-05-18  0:59   ` Damien Le Moal
@ 2020-05-18  2:32     ` Coly Li
  0 siblings, 0 replies; 26+ messages in thread
From: Coly Li @ 2020-05-18  2:32 UTC (permalink / raw)
  To: Damien Le Moal, linux-block, hare, hch, axboe
  Cc: linux-bcache, kbusch, Ajay Joshi, Chaitanya Kulkarni,
	Hannes Reinecke, Johannes Thumshirn

On 2020/5/18 08:59, Damien Le Moal wrote:
> On 2020/05/16 12:55, Coly Li wrote:
>> Now for zoned device, zone management ioctl commands are converted into
>> zone management bios and handled by blkdev_zone_mgmt(). There are 4 zone
>> management bios are handled, their op code is,
>> - REQ_OP_ZONE_RESET
>>   Reset the zone's write pointer and empty all previously stored data.
>> - REQ_OP_ZONE_OPEN
>>   Open the zones in the specified sector range, no influence on data.
>> - REQ_OP_ZONE_CLOSE
>>   Close the zones in the specified sector range, no influence on data.
>> - REQ_OP_ZONE_FINISH
>>   Mark the zone as full, no influence on data.
>> All the zone management bios has 0 byte size, a.k.a their bi_size is 0.
>>
>> Except for REQ_OP_ZONE_RESET request, zero length bio works fine for
>> other zone management bio, before the zoned device e.g. host managed SMR
>> hard drive can be created as a bcache device.
>>
>> When a bcache device (virtual block device to forward bios like md raid
>> drivers) can be created on top of the zoned device, and a fast SSD is
>> attached as a cache device, bcache driver may cache the frequent random
>> READ requests on fast SSD to accelerate hot data READ performance.
>>
>> When bcache driver receives a zone management bio for REQ_OP_ZONE_RESET
>> op, while forwarding the request to underlying zoned device e.g. host
>> managed SMR hard drive, it should also invalidate all cached data from
>> SSD for the resetting zone. Otherwise bcache will continue to provide the
>> outdated cached data to READ request and cause potential data storage
>> inconsistency and corruption.
>>
>> In order to invalidate outdated data from SSD for the reset zone, bcache
>> needs to know not only the start LBA but also the range length of the
>> resetting zone. Otherwise, bcache won't be able to accurately invalidate
>> the outdated cached data.
>>
>> Is it possible to simply set the bi_size inside bcache driver? The
>> answer is NO. Although every REQ_OP_ZONE_RESET bio has exact length as
>> zone size or q->limits.chunk_sectors, it is possible that some other
>> layer stacking block driver (in the future) exists between bcache driver
>> and blkdev_zone_mgmt() where the zone management bio is made.
> 
> But bcache "knows" what underlying devices it is using, right ? So getting the
> SMR drive zone size using blk_queue_zone_sectors(), bdev_zone_sectors() or by
> directly accessing q->limits->chunk_sectors should not be a problem at all, no ?
> 
>>
>> The best location to set bi_size is where the zone management bio is
>> composed in blkdev_zone_mgmt(), then no matter how this bio is split
>> before bcache driver receives it, bcache driver can always correctly
>> invalidate the resetting range.
>>
>> This patch sets the bi_size of REQ_OP_ZONE_RESET bio for each resetting
>> zone. Here REQ_OP_ZONE_RESET_ALL is special whose bi_size should be set
>> as capacity of whole drive size, then bcache can invalidate all cached
>> data from SSD for the zoned backing device.
> 
> NACK. The problem here is that struct bio_vec bv_len field and struct bvec_iter
> bi_size field are both "unsigned int". So they can describe at most 4G x 512B =
> 2TB ranges. For REQ_OP_ZONE_RESET_ALL, that is simply way too small (we already
> are at 20TB capacity for SMR). One cannot issue a BIO with a length that is the
> entire disk capacity.
> 
> I really think that bcache should handle the zone management ops as a special
> case. I understand the goal of trying to minimize that by integrating them as
> much as possible into the regular bcache IO path, but that really seems
> dangerous to me. Since these operations will need remapping in bcache anyway
> (for handling the first hidden zone containing the super block), all special
> processing can be done under a single "if" which should not impact too much
> performance of regular read/write commands in the hot path.
> 
> Device mapper has such code: see __split_and_process_bio() and its use of
> op_is_zone_mgmt() function to handle zone management requests slightly
> differently than other BIOs. That remove the need for relying on op_is_write()
> direction (patch 1 and 2 in this series) for reset zones too, which in the end
> will I think make your bcache code a lot more solid.
> 

Copied. I understand and agree with you and Christoph now. Let me
rework the bcache code; maybe the dependent changes to the generic
block layer code can be avoided.

Damien and Christoph, thank you all for the comments.

Coly Li


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-18  0:33   ` Damien Le Moal
@ 2020-05-18  5:09     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 26+ messages in thread
From: Chaitanya Kulkarni @ 2020-05-18  5:09 UTC (permalink / raw)
  To: Damien Le Moal, Coly Li, linux-block, hare, hch, axboe
  Cc: linux-bcache, kbusch, Hannes Reinecke, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

On 5/17/20 5:33 PM, Damien Le Moal wrote:
> As Chaitanya pointed out, this is already used. Please rebase on Jens
> block/for-5.8/block branch.
> 
> I do not see any problem with this change, but as Christoph commented, since
> zone reset does not transfer any data, I do not really see the necessity for
> this. Zone reset is indeed data-destructive, but it is not a "write" command.
> But following DISCARD and having the op number as an odd number is OK I think.
Let's keep reset consistent with discard and write-zeroes.


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-17  5:30             ` Coly Li
@ 2020-05-18  6:53               ` Hannes Reinecke
  2020-05-18  6:56                 ` Damien Le Moal
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2020-05-18  6:53 UTC (permalink / raw)
  To: Coly Li, Christoph Hellwig
  Cc: linux-block, damien.lemoal, hare, axboe, linux-bcache, kbusch,
	Jens Axboe, Johannes Thumshirn, Shaun Tancheff

On 5/17/20 7:30 AM, Coly Li wrote:
> On 2020/5/16 23:36, Christoph Hellwig wrote:
>> On Sat, May 16, 2020 at 09:05:39PM +0800, Coly Li wrote:
>>> On 2020/5/16 20:50, Christoph Hellwig wrote:
>>>> On Sat, May 16, 2020 at 08:44:45PM +0800, Coly Li wrote:
>>>>> Yes you are right, just like REQ_OP_DISCARD which does not transfer any
>>>>> data but changes the data on device. If the request changes the stored
>>>>> data, it does transfer data.
>>>>
>>>>> REQ_OP_DISCARD is a special case, because most implementations end up
>>>> transferring data, it just gets attached in the low-level driver.
>>>>
>>>
>>> Yes, REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL are quite similar to
>>> REQ_OP_DISCARD. Data read from the LBA range of reset zone is not
>>> suggested and the content is undefined.
>>>
>>> For bcache, similar to REQ_OP_DISCARD, REQ_OP_ZONE_RESET and
>>> REQ_OP_ZONE_RESET_ALL are handled in same way: If the backing device
>>>> supports discard/zone_reset, and the operation succeeds, then cached
>>> data on SSD covered by the LBA range should be invalid, otherwise users
>>> will read outdated and garbage data.
>>>
>>> We should treat REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL being in
>>> WRITE direction.
>>
>> No, the difference is that the underlying SCSI/ATA/NVMe command tend to
>> usually actually transfer data.  Take a look at the special_vec field
>> in struct request and the RQF_SPECIAL_PAYLOAD flag.
>>
> 
> Then bio_data_dir() will be problematic, as it is defined,
>   52 /*
>   53  * Return the data direction, READ or WRITE.
>   54  */
>   55 #define bio_data_dir(bio) \
>   56         (op_is_write(bio_op(bio)) ? WRITE : READ)
> 
> For the REQ_OP_ZONE_RESET bio, bio_data_dir() will report it as READ.
> 
> Since the author is you, how to fix this ?
> 

Well, but I do think that Coly is right in that bio_data_dir() is very 
unconvincing, or at least improperly documented.

I can't quite follow Christoph's argument in that bio_data_dir() 
reflects whether data is _actually_ transferred (if so, why would 
REQ_OP_ZONE_CLOSE() be marked as WRITE?)
However, in most cases bio_data_dir() will only come into effect if 
there are attached bvecs.

So the _correct_ usage for bio_data_dir() would be to check if there is 
data attached to the bio (eg by using bio_has_data()), and only call 
bio_data_dir() if that returns true. For false you'd need to add a 
switch evaluating the individual commands.

And, btw, bio_has_data() needs updating, too; all the zone commands are 
missing from there, too.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


* Re: [RFC PATCH v2 1/4] block: change REQ_OP_ZONE_RESET from 6 to 13
  2020-05-18  6:53               ` Hannes Reinecke
@ 2020-05-18  6:56                 ` Damien Le Moal
  0 siblings, 0 replies; 26+ messages in thread
From: Damien Le Moal @ 2020-05-18  6:56 UTC (permalink / raw)
  To: Hannes Reinecke, Coly Li, Christoph Hellwig
  Cc: linux-block, hare, axboe, linux-bcache, kbusch, Jens Axboe,
	Johannes Thumshirn, Shaun Tancheff

On 2020/05/18 15:53, Hannes Reinecke wrote:
> On 5/17/20 7:30 AM, Coly Li wrote:
>> On 2020/5/16 23:36, Christoph Hellwig wrote:
>>> On Sat, May 16, 2020 at 09:05:39PM +0800, Coly Li wrote:
>>>> On 2020/5/16 20:50, Christoph Hellwig wrote:
>>>>> On Sat, May 16, 2020 at 08:44:45PM +0800, Coly Li wrote:
>>>>>> Yes you are right, just like REQ_OP_DISCARD which does not transfer any
>>>>>> data but changes the data on device. If the request changes the stored
>>>>>> data, it does transfer data.
>>>>>
>>>>> REQ_OP_DISCARD is a special case, because most implementations end up
>>>>> transferring data, it just gets attached in the low-level driver.
>>>>>
>>>>
>>>> Yes, REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL are quite similar to
>>>> REQ_OP_DISCARD. Data read from the LBA range of reset zone is not
>>>> suggested and the content is undefined.
>>>>
>>>> For bcache, similar to REQ_OP_DISCARD, REQ_OP_ZONE_RESET and
>>>> REQ_OP_ZONE_RESET_ALL are handled in same way: If the backing device
>>>> supports discard/zone_reset, and the operation succeeds, then cached
>>>> data on SSD covered by the LBA range should be invalid, otherwise users
>>>> will read outdated and garbage data.
>>>>
>>>> We should treat REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL being in
>>>> WRITE direction.
>>>
>>> No, the difference is that the underlying SCSI/ATA/NVMe command tend to
>>> usually actually transfer data.  Take a look at the special_vec field
>>> in struct request and the RQF_SPECIAL_PAYLOAD flag.
>>>
>>
>> Then bio_data_dir() will be problematic, as it is defined,
>>   52 /*
>>   53  * Return the data direction, READ or WRITE.
>>   54  */
>>   55 #define bio_data_dir(bio) \
>>   56         (op_is_write(bio_op(bio)) ? WRITE : READ)
>>
>> For the REQ_OP_ZONE_RESET bio, bio_data_dir() will report it as READ.
>>
>> Since the author is you, how to fix this ?
>>
> 
> Well, but I do think that Coly is right in that bio_data_dir() is very 
> unconvincing, or at least improperly documented.
> 
> I can't quite follow Christoph's argument in that bio_data_dir() 
> reflects whether data is _actually_ transferred (if so, why would 
> REQ_OP_ZONE_CLOSE() be marked as WRITE?)
> However, in most cases bio_data_dir() will only come into effect if 
> there are attached bvecs.
> 
> So the _correct_ usage for bio_data_dir() would be to check if there is 
> data attached to the bio (eg by using bio_has_data()), and only call 
> bio_data_dir() if that returns true. For false you'd need to add a 
> switch evaluating the individual commands.
> 
> And, btw, bio_has_data() needs updating, too; all the zone commands are 
> missing from there, too.

Not really. They all have 0 size, so bio_has_data() always returns false for zone
management ops. It will return true only for report zones.

> 
> Cheers,
> 
> Hannes
> 


-- 
Damien Le Moal
Western Digital Research

