* [PATCH v3 00/15] block atomic writes
@ 2024-01-24 11:38 John Garry
  2024-01-24 11:38 ` [PATCH v3 01/15] block: Add atomic write operations to request_queue limits John Garry
                   ` (15 more replies)
  0 siblings, 16 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

This series introduces a proposal for implementing atomic writes in the
kernel, for torn-write protection.

This series takes the approach of adding a new "atomic" flag to each of
pwritev2() and iocb->ki_flags - RWF_ATOMIC and IOCB_ATOMIC, respectively.
When set, these indicate that we want the write issued "atomically".

Only direct IO to block devices is supported here. For this, atomic
write HW support is required, like SCSI WRITE ATOMIC (16).

I plan to send a series for supporting atomic writes for XFS later this
week, but initially only for XFS rtvol.

Updated man pages have been posted at:
https://lore.kernel.org/lkml/20240124112731.28579-1-john.g.garry@oracle.com/T/#m520dca97a9748de352b5a723d3155a4bb1e46456

The goal here is to provide an interface that allows applications to
use application-specific block sizes larger than the logical block size
reported by the storage device or larger than the filesystem block size
as reported by stat().

With this new interface, application blocks will never be torn or
fractured when written. For a power failure, for each individual
application block, all or none of the data will be written. A racing
atomic write and read will mean that the read sees all the old data or
all the new data, but never a mix of old and new.

Three new fields are added to struct statx - atomic_write_unit_min,
atomic_write_unit_max, and atomic_write_segments_max. For each
individual atomic write, the total length of the write must be between
atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
power-of-2. The write must also be at a naturally-aligned offset in the
file with respect to the write length. For pwritev2(), iovcnt is
limited by atomic_write_segments_max.

SCSI sd.c and scsi_debug and NVMe kernel support is added.

This series is based on v6.8-rc1.

Changes since v2:
- Support atomic_write_segments_max
- Limit atomic write parameters to max_hw_sectors_kb
- Don't increase fmode_t
- Change value for RWF_ATOMIC
- Various tidying (including advised by Jan)

Changes since v1:
- Drop XFS support for now
- Tidy NVMe changes and also add checks for atomic write violating max
  AW PF length and boundary (if any)
- Reject - instead of ignoring - RWF_ATOMIC for files which do not
  support atomic writes
- Update block sysfs documentation
- Various tidy-ups

Alan Adamson (2):
  nvme: Support atomic writes
  nvme: Ensure atomic writes will be executed atomically

Himanshu Madhani (2):
  block: Add atomic write operations to request_queue limits
  block: Add REQ_ATOMIC flag

John Garry (9):
  block: Limit atomic writes according to bio and queue limits
  block: Pass blk_queue_get_max_sectors() a request pointer
  block: Limit atomic write IO size according to
    atomic_write_max_sectors
  block: Error an attempt to split an atomic write bio
  block: Add checks to merging of atomic writes
  block: Add fops atomic write support
  scsi: sd: Support reading atomic write properties from block limits
    VPD
  scsi: sd: Add WRITE_ATOMIC_16 support
  scsi: scsi_debug: Atomic write support

Prasad Singamsetty (2):
  fs/bdev: Add atomic write support info to statx
  fs: Add RWF_ATOMIC and IOCB_ATOMIC flags for atomic write support

 Documentation/ABI/stable/sysfs-block |  52 +++
 block/bdev.c                         |  37 +-
 block/blk-merge.c                    |  94 ++++-
 block/blk-mq.c                       |   2 +-
 block/blk-settings.c                 | 103 +++++
 block/blk-sysfs.c                    |  33 ++
 block/blk.h                          |   9 +-
 block/fops.c                         |  44 +-
 drivers/nvme/host/core.c             |  71 ++++
 drivers/nvme/host/nvme.h             |   2 +
 drivers/scsi/scsi_debug.c            | 589 +++++++++++++++++++++------
 drivers/scsi/scsi_trace.c            |  22 +
 drivers/scsi/sd.c                    |  93 ++++-
 drivers/scsi/sd.h                    |   8 +
 fs/stat.c                            |  47 ++-
 include/linux/blk_types.h            |   2 +
 include/linux/blkdev.h               |  45 +-
 include/linux/fs.h                   |  12 +
 include/linux/stat.h                 |   3 +
 include/scsi/scsi_proto.h            |   1 +
 include/trace/events/scsi.h          |   1 +
 include/uapi/linux/fs.h              |   5 +-
 include/uapi/linux/stat.h            |   9 +-
 23 files changed, 1123 insertions(+), 161 deletions(-)

-- 
2.31.1



* [PATCH v3 01/15] block: Add atomic write operations to request_queue limits
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  6:22   ` Christoph Hellwig
  2024-01-24 11:38 ` [PATCH v3 02/15] block: Limit atomic writes according to bio and queue limits John Garry
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	Himanshu Madhani, John Garry

From: Himanshu Madhani <himanshu.madhani@oracle.com>

Add the following limits:
- atomic_write_boundary_bytes
- atomic_write_max_bytes
- atomic_write_unit_max_bytes
- atomic_write_unit_min_bytes

All atomic write limits are initialised to 0 to indicate no atomic
write support. Stacked devices are not supported for now.
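
As a hedged illustration (not part of this patch), a driver might
publish these limits with the helpers added below; the function name
and values are invented for the example:

static void foo_set_atomic_limits(struct request_queue *q)
{
	/* device can atomically write up to 128KB in one operation */
	blk_queue_atomic_write_max_bytes(q, 128 * 1024);
	/* writes must not cross a 64KB boundary in the LBA space */
	blk_queue_atomic_write_boundary_bytes(q, 64 * 1024);
	/* per-write unit limits, in 512B sectors, both powers-of-two */
	blk_queue_atomic_write_unit_min_sectors(q, 8);	  /* 4KB */
	blk_queue_atomic_write_unit_max_sectors(q, 128);  /* 64KB */
}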

Signed-off-by: Himanshu Madhani <himanshu.madhani@oracle.com>
#jpg: Heavy rewrite
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
This will conflict with
https://lore.kernel.org/linux-nvme/20240122173645.1686078-1-hch@lst.de/T/#mf77609a2064fe9387706ce564d8246c5243eeb99,
but I will rebase when that is merged and I assume
blk_atomic_writes_update_limits() will be merged into a larger "update"
function.

 Documentation/ABI/stable/sysfs-block | 52 ++++++++++++++++++
 block/blk-settings.c                 | 79 ++++++++++++++++++++++++++++
 block/blk-sysfs.c                    | 33 ++++++++++++
 include/linux/blkdev.h               | 40 ++++++++++++++
 4 files changed, 204 insertions(+)

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 1fe9a553c37b..ac3c6b46f1a3 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -21,6 +21,58 @@ Description:
 		device is offset from the internal allocation unit's
 		natural alignment.
 
+What:		/sys/block/<disk>/atomic_write_max_bytes
+Date:		January 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] This parameter specifies the maximum atomic write
+		size reported by the device. This parameter is relevant
+		for merging of writes, where a merged atomic write
+		operation must not exceed this number of bytes.
+		This parameter may be greater than the value in
+		atomic_write_unit_max_bytes as
+		atomic_write_unit_max_bytes will be rounded down to a
+		power-of-two and atomic_write_unit_max_bytes may also be
+		limited by some other queue limits, such as max_segments.
+		This parameter - along with atomic_write_unit_min_bytes
+		and atomic_write_unit_max_bytes - will not be larger than
+		max_hw_sectors_kb, but may be larger than max_sectors_kb.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_min_bytes
+Date:		January 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] This parameter specifies the smallest block which can
+		be written atomically with an atomic write operation. All
+		atomic write operations must begin at an
+		atomic_write_unit_min boundary and must be multiples of
+		atomic_write_unit_min. This value must be a power-of-two.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_max_bytes
+Date:		January 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] This parameter defines the largest block which can be
+		written atomically with an atomic write operation. This
+		value must be a multiple of atomic_write_unit_min and must
+		be a power-of-two. This value will not be larger than
+		atomic_write_max_bytes.
+
+
+What:		/sys/block/<disk>/atomic_write_boundary_bytes
+Date:		January 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] A device may need to internally split I/Os which
+		straddle a given logical block address boundary. In that
+		case a single atomic write operation will be processed as
+		one or more sub-operations which each complete atomically.
+		This parameter specifies the size in bytes of the atomic
+		boundary if one is reported by the device. This value must
+		be a power-of-two.
+
 
 What:		/sys/block/<disk>/diskseq
 Date:		February 2021
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 06ea91e51b8b..11c0361c2313 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -59,6 +59,13 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->zoned = false;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
+	lim->atomic_write_hw_max_sectors = 0;
+	lim->atomic_write_max_sectors = 0;
+	lim->atomic_write_hw_boundary_sectors = 0;
+	lim->atomic_write_hw_unit_min_sectors = 0;
+	lim->atomic_write_unit_min_sectors = 0;
+	lim->atomic_write_hw_unit_max_sectors = 0;
+	lim->atomic_write_unit_max_sectors = 0;
 }
 
 /**
@@ -101,6 +108,20 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);
 
+static void blk_atomic_writes_update_limits(struct request_queue *q)
+{
+	struct queue_limits *limits = &q->limits;
+	unsigned int max_hw_sectors =
+		rounddown_pow_of_two(limits->max_hw_sectors);
+
+	limits->atomic_write_max_sectors =
+		min(limits->atomic_write_hw_max_sectors, max_hw_sectors);
+	limits->atomic_write_unit_min_sectors =
+		min(limits->atomic_write_hw_unit_min_sectors, max_hw_sectors);
+	limits->atomic_write_unit_max_sectors =
+		min(limits->atomic_write_hw_unit_max_sectors, max_hw_sectors);
+}
+
 /**
  * blk_queue_max_hw_sectors - set max sectors for a request for this queue
  * @q:  the request queue for the device
@@ -145,6 +166,8 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 				 limits->logical_block_size >> SECTOR_SHIFT);
 	limits->max_sectors = max_sectors;
 
+	blk_atomic_writes_update_limits(q);
+
 	if (!q->disk)
 		return;
 	q->disk->bdi->io_pages = max_sectors >> (PAGE_SHIFT - 9);
@@ -182,6 +205,62 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);
 
+/**
+ * blk_queue_atomic_write_max_bytes - set max bytes supported by
+ * the device for atomic write operations.
+ * @q:  the request queue for the device
+ * @bytes: maximum bytes supported
+ */
+void blk_queue_atomic_write_max_bytes(struct request_queue *q,
+				      unsigned int bytes)
+{
+	q->limits.atomic_write_hw_max_sectors = bytes >> SECTOR_SHIFT;
+	blk_atomic_writes_update_limits(q);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_max_bytes);
+
+/**
+ * blk_queue_atomic_write_boundary_bytes - Device's logical block address space
+ * which an atomic write should not cross.
+ * @q:  the request queue for the device
+ * @bytes: must be a power-of-two.
+ */
+void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
+					   unsigned int bytes)
+{
+	q->limits.atomic_write_hw_boundary_sectors = bytes >> SECTOR_SHIFT;
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_boundary_bytes);
+
+/**
+ * blk_queue_atomic_write_unit_min_sectors - smallest unit that can be written
+ * atomically to the device.
+ * @q:  the request queue for the device
+ * @sectors: must be a power-of-two.
+ */
+void blk_queue_atomic_write_unit_min_sectors(struct request_queue *q,
+					     unsigned int sectors)
+{
+
+	q->limits.atomic_write_hw_unit_min_sectors = sectors;
+	blk_atomic_writes_update_limits(q);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_unit_min_sectors);
+
+/**
+ * blk_queue_atomic_write_unit_max_sectors - largest unit that can be written
+ * atomically to the device.
+ * @q: the request queue for the device
+ * @sectors: must be a power-of-two.
+ */
+void blk_queue_atomic_write_unit_max_sectors(struct request_queue *q,
+					     unsigned int sectors)
+{
+	q->limits.atomic_write_hw_unit_max_sectors = sectors;
+	blk_atomic_writes_update_limits(q);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_unit_max_sectors);
+
 /**
  * blk_queue_max_secure_erase_sectors - set max sectors for a secure erase
  * @q:  the request queue for the device
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 6b2429cad81a..3978f14f9769 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -118,6 +118,30 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q,
 	return queue_var_show(queue_max_discard_segments(q), page);
 }
 
+static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_max_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_boundary_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_boundary_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_min_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_max_bytes(q), page);
+}
+
 static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(q->limits.max_integrity_segments, page);
@@ -502,6 +526,11 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
 QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
 QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
 
+QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
+
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
@@ -629,6 +658,10 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_atomic_write_max_bytes_entry.attr,
+	&queue_atomic_write_boundary_entry.attr,
+	&queue_atomic_write_unit_min_entry.attr,
+	&queue_atomic_write_unit_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 99e4f5e72213..d5490b988918 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -299,6 +299,14 @@ struct queue_limits {
 	unsigned int		discard_alignment;
 	unsigned int		zone_write_granularity;
 
+	unsigned int		atomic_write_hw_max_sectors;
+	unsigned int		atomic_write_max_sectors;
+	unsigned int		atomic_write_hw_boundary_sectors;
+	unsigned int		atomic_write_hw_unit_min_sectors;
+	unsigned int		atomic_write_unit_min_sectors;
+	unsigned int		atomic_write_hw_unit_max_sectors;
+	unsigned int		atomic_write_unit_max_sectors;
+
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
 	unsigned short		max_discard_segments;
@@ -885,6 +893,14 @@ void blk_queue_zone_write_granularity(struct request_queue *q,
 				      unsigned int size);
 extern void blk_queue_alignment_offset(struct request_queue *q,
 				       unsigned int alignment);
+void blk_queue_atomic_write_max_bytes(struct request_queue *q,
+				unsigned int bytes);
+void blk_queue_atomic_write_unit_max_sectors(struct request_queue *q,
+				unsigned int sectors);
+void blk_queue_atomic_write_unit_min_sectors(struct request_queue *q,
+				unsigned int sectors);
+void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
+				unsigned int bytes);
 void disk_update_readahead(struct gendisk *disk);
 extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min);
 extern void blk_queue_io_min(struct request_queue *q, unsigned int min);
@@ -1291,6 +1307,30 @@ static inline int queue_dma_alignment(const struct request_queue *q)
 	return q ? q->limits.dma_alignment : 511;
 }
 
+static inline unsigned int
+queue_atomic_write_unit_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_max_sectors << SECTOR_SHIFT;
+}
+
+static inline unsigned int
+queue_atomic_write_unit_min_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_min_sectors << SECTOR_SHIFT;
+}
+
+static inline unsigned int
+queue_atomic_write_boundary_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_hw_boundary_sectors << SECTOR_SHIFT;
+}
+
+static inline unsigned int
+queue_atomic_write_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_max_sectors << SECTOR_SHIFT;
+}
+
 static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
 {
 	return queue_dma_alignment(bdev_get_queue(bdev));
-- 
2.31.1



* [PATCH v3 02/15] block: Limit atomic writes according to bio and queue limits
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
  2024-01-24 11:38 ` [PATCH v3 01/15] block: Add atomic write operations to request_queue limits John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  4:33   ` Ritesh Harjani
  2024-01-24 11:38 ` [PATCH v3 03/15] fs/bdev: Add atomic write support info to statx John Garry
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

We rely on the block layer always being able to send a bio of size
atomic_write_unit_max without it being required to split the bio due to
request queue or other bio limits.

A bio may contain min(BIO_MAX_VECS, limits->max_segments) vectors on
the relevant submission paths for atomic writes, and each vector holds
at least PAGE_SIZE bytes, apart from the first and last vectors.
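
For example (illustrative values), with BIO_MAX_VECS = 256,
max_segments = 128, 4KB pages and a 512B logical block size:

	length = min(128, 2) * 512 + (128 - 2) * 4096
	       = 1024 + 516096 = 517120 bytes = 1010 sectors
	rounddown_pow_of_two(1010) = 512 sectors

so a 256KB bio is always guaranteed to fit without splitting.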

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/blk-settings.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 11c0361c2313..176f26374abc 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -108,18 +108,42 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);
 
+
+/*
+ * Returns max guaranteed sectors which we can fit in a bio. For convenience of
+ * users, the return value is rounded down with rounddown_pow_of_two().
+ *
+ * We always assume that we can fit in at least PAGE_SIZE in a segment, apart
+ * from first and last segments.
+ */
+static unsigned int blk_queue_max_guaranteed_bio_sectors(
+					struct queue_limits *limits,
+					struct request_queue *q)
+{
+	unsigned int max_segments = min(BIO_MAX_VECS, limits->max_segments);
+	unsigned int length;
+
+	length = min(max_segments, 2) * queue_logical_block_size(q);
+	if (max_segments > 2)
+		length += (max_segments - 2) * PAGE_SIZE;
+
+	return rounddown_pow_of_two(length >> SECTOR_SHIFT);
+}
+
 static void blk_atomic_writes_update_limits(struct request_queue *q)
 {
 	struct queue_limits *limits = &q->limits;
 	unsigned int max_hw_sectors =
 		rounddown_pow_of_two(limits->max_hw_sectors);
+	unsigned int unit_limit = min(max_hw_sectors,
+		blk_queue_max_guaranteed_bio_sectors(limits, q));
 
 	limits->atomic_write_max_sectors =
 		min(limits->atomic_write_hw_max_sectors, max_hw_sectors);
 	limits->atomic_write_unit_min_sectors =
-		min(limits->atomic_write_hw_unit_min_sectors, max_hw_sectors);
+		min(limits->atomic_write_hw_unit_min_sectors, unit_limit);
 	limits->atomic_write_unit_max_sectors =
-		min(limits->atomic_write_hw_unit_max_sectors, max_hw_sectors);
+		min(limits->atomic_write_hw_unit_max_sectors, unit_limit);
 }
 
 /**
-- 
2.31.1



* [PATCH v3 03/15] fs/bdev: Add atomic write support info to statx
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
  2024-01-24 11:38 ` [PATCH v3 01/15] block: Add atomic write operations to request_queue limits John Garry
  2024-01-24 11:38 ` [PATCH v3 02/15] block: Limit atomic writes according to bio and queue limits John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-01-24 11:38 ` [PATCH v3 04/15] fs: Add RWF_ATOMIC and IOCB_ATOMIC flags for atomic write support John Garry
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	Prasad Singamsetty, John Garry

From: Prasad Singamsetty <prasad.singamsetty@oracle.com>

Extend the statx system call to return additional info for atomic
write support if the specified file is a block device.
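
As an aside, a hedged sketch (illustrative only) of how userspace might
interpret the new fields; stx is a struct statx filled in by statx():

	if (stx.stx_attributes_mask & STATX_ATTR_WRITE_ATOMIC) {
		/* the kernel can report atomic write support here */
		if (stx.stx_attributes & STATX_ATTR_WRITE_ATOMIC)
			printf("unit_min=%u unit_max=%u segments_max=%u\n",
			       stx.stx_atomic_write_unit_min,
			       stx.stx_atomic_write_unit_max,
			       stx.stx_atomic_write_segments_max);
		else
			printf("atomic writes not supported here\n");
	}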

Signed-off-by: Prasad Singamsetty <prasad.singamsetty@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/bdev.c              | 37 +++++++++++++++++++++---------
 fs/stat.c                 | 47 +++++++++++++++++++++++++++++++++------
 include/linux/blkdev.h    |  5 +++--
 include/linux/fs.h        |  3 +++
 include/linux/stat.h      |  3 +++
 include/uapi/linux/stat.h |  9 +++++++-
 6 files changed, 84 insertions(+), 20 deletions(-)

diff --git a/block/bdev.c b/block/bdev.c
index e9f1b12bd75c..2185084ffc89 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -1116,24 +1116,41 @@ void sync_bdevs(bool wait)
 	iput(old_inode);
 }
 
+#define BDEV_STATX_SUPPORTED_MASK (STATX_DIOALIGN | STATX_WRITE_ATOMIC)
+
 /*
- * Handle STATX_DIOALIGN for block devices.
- *
- * Note that the inode passed to this is the inode of a block device node file,
- * not the block device's internal inode.  Therefore it is *not* valid to use
- * I_BDEV() here; the block device has to be looked up by i_rdev instead.
+ * Handle STATX_{DIOALIGN, WRITE_ATOMIC} for block devices.
  */
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask)
 {
 	struct block_device *bdev;
 
-	bdev = blkdev_get_no_open(inode->i_rdev);
+	if (!(request_mask & BDEV_STATX_SUPPORTED_MASK))
+		return;
+
+	/*
+	 * Note that d_backing_inode() returns the inode of a block device node
+	 * file, not the block device's internal inode.  Therefore it is *not*
+	 * valid to use I_BDEV() here; the block device has to be looked up by
+	 * i_rdev instead.
+	 */
+	bdev = blkdev_get_no_open(d_backing_inode(dentry)->i_rdev);
 	if (!bdev)
 		return;
 
-	stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
-	stat->dio_offset_align = bdev_logical_block_size(bdev);
-	stat->result_mask |= STATX_DIOALIGN;
+	if (request_mask & STATX_DIOALIGN) {
+		stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
+		stat->dio_offset_align = bdev_logical_block_size(bdev);
+		stat->result_mask |= STATX_DIOALIGN;
+	}
+
+	if (request_mask & STATX_WRITE_ATOMIC) {
+		struct request_queue *bd_queue = bdev->bd_queue;
+
+		generic_fill_statx_atomic_writes(stat,
+			queue_atomic_write_unit_min_bytes(bd_queue),
+			queue_atomic_write_unit_max_bytes(bd_queue));
+	}
 
 	blkdev_put_no_open(bdev);
 }
diff --git a/fs/stat.c b/fs/stat.c
index 77cdc69eb422..bd0618477702 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
 }
 EXPORT_SYMBOL(generic_fill_statx_attr);
 
+/**
+ * generic_fill_statx_atomic_writes - Fill in the atomic writes statx attributes
+ * @stat:	Where to fill in the attribute flags
+ * @unit_min:	Minimum supported atomic write length
+ * @unit_max:	Maximum supported atomic write length
+ *
+ * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
+ * atomic write unit_min and unit_max values.
+ */
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+				      unsigned int unit_min,
+				      unsigned int unit_max)
+{
+	/* Confirm that the request type is known */
+	stat->result_mask |= STATX_WRITE_ATOMIC;
+
+	/* Confirm that the file attribute type is known */
+	stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
+
+	if (unit_min) {
+		stat->atomic_write_unit_min = unit_min;
+		stat->atomic_write_unit_max = unit_max;
+		/* Initially only allow 1x segment */
+		stat->atomic_write_segments_max = 1;
+
+		/* Confirm atomic writes are actually supported */
+		stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
+	}
+}
+EXPORT_SYMBOL(generic_fill_statx_atomic_writes);
+
 /**
  * vfs_getattr_nosec - getattr without security checks
  * @path: file to get attributes from
@@ -259,13 +290,12 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
 		stat->attributes |= STATX_ATTR_MOUNT_ROOT;
 	stat->attributes_mask |= STATX_ATTR_MOUNT_ROOT;
 
-	/* Handle STATX_DIOALIGN for block devices. */
-	if (request_mask & STATX_DIOALIGN) {
-		struct inode *inode = d_backing_inode(path.dentry);
-
-		if (S_ISBLK(inode->i_mode))
-			bdev_statx_dioalign(inode, stat);
-	}
+	/* If this is a block device inode, override the filesystem
+	 * attributes with the block device specific parameters
+	 * that need to be obtained from the bdev backing inode
+	 */
+	if (S_ISBLK(d_backing_inode(path.dentry)->i_mode))
+		bdev_statx(path.dentry, stat, request_mask);
 
 	path_put(&path);
 	if (retry_estale(error, lookup_flags)) {
@@ -658,6 +688,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
 	tmp.stx_mnt_id = stat->mnt_id;
 	tmp.stx_dio_mem_align = stat->dio_mem_align;
 	tmp.stx_dio_offset_align = stat->dio_offset_align;
+	tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
+	tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
+	tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
 
 	return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
 }
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d5490b988918..443f08239308 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1541,7 +1541,7 @@ int sync_blockdev(struct block_device *bdev);
 int sync_blockdev_range(struct block_device *bdev, loff_t lstart, loff_t lend);
 int sync_blockdev_nowait(struct block_device *bdev);
 void sync_bdevs(bool wait);
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat);
+void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask);
 void printk_all_partitions(void);
 int __init early_lookup_bdev(const char *pathname, dev_t *dev);
 #else
@@ -1559,7 +1559,8 @@ static inline int sync_blockdev_nowait(struct block_device *bdev)
 static inline void sync_bdevs(bool wait)
 {
 }
-static inline void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+static inline void bdev_statx(struct dentry *dentry, struct kstat *stat,
+				u32 request_mask)
 {
 }
 static inline void printk_all_partitions(void)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ed5966a70495..c316c0a92fff 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3164,6 +3164,9 @@ extern const struct inode_operations page_symlink_inode_operations;
 extern void kfree_link(void *);
 void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
 void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+				      unsigned int unit_min,
+				      unsigned int unit_max);
 extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
 extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
 void __inode_add_bytes(struct inode *inode, loff_t bytes);
diff --git a/include/linux/stat.h b/include/linux/stat.h
index 52150570d37a..2c5e2b8c6559 100644
--- a/include/linux/stat.h
+++ b/include/linux/stat.h
@@ -53,6 +53,9 @@ struct kstat {
 	u32		dio_mem_align;
 	u32		dio_offset_align;
 	u64		change_cookie;
+	u32		atomic_write_unit_min;
+	u32		atomic_write_unit_max;
+	u32		atomic_write_segments_max;
 };
 
 /* These definitions are internal to the kernel for now. Mainly used by nfsd. */
diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
index 2f2ee82d5517..c0e8e10d1de6 100644
--- a/include/uapi/linux/stat.h
+++ b/include/uapi/linux/stat.h
@@ -127,7 +127,12 @@ struct statx {
 	__u32	stx_dio_mem_align;	/* Memory buffer alignment for direct I/O */
 	__u32	stx_dio_offset_align;	/* File offset alignment for direct I/O */
 	/* 0xa0 */
-	__u64	__spare3[12];	/* Spare space for future expansion */
+	__u32	stx_atomic_write_unit_min;
+	__u32	stx_atomic_write_unit_max;
+	__u32   stx_atomic_write_segments_max;
+	__u32   __spare1;
+	/* 0xb0 */
+	__u64	__spare3[10];	/* Spare space for future expansion */
 	/* 0x100 */
 };
 
@@ -155,6 +160,7 @@ struct statx {
 #define STATX_MNT_ID		0x00001000U	/* Got stx_mnt_id */
 #define STATX_DIOALIGN		0x00002000U	/* Want/got direct I/O alignment info */
 #define STATX_MNT_ID_UNIQUE	0x00004000U	/* Want/got extended stx_mount_id */
+#define STATX_WRITE_ATOMIC	0x00008000U	/* Want/got atomic_write_* fields */
 
 #define STATX__RESERVED		0x80000000U	/* Reserved for future struct statx expansion */
 
@@ -190,6 +196,7 @@ struct statx {
 #define STATX_ATTR_MOUNT_ROOT		0x00002000 /* Root of a mount */
 #define STATX_ATTR_VERITY		0x00100000 /* [I] Verity protected file */
 #define STATX_ATTR_DAX			0x00200000 /* File is currently in DAX state */
+#define STATX_ATTR_WRITE_ATOMIC		0x00400000 /* File supports atomic write operations */
 
 
 #endif /* _UAPI_LINUX_STAT_H */
-- 
2.31.1



* [PATCH v3 04/15] fs: Add RWF_ATOMIC and IOCB_ATOMIC flags for atomic write support
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (2 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 03/15] fs/bdev: Add atomic write support info to statx John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-01-24 11:38 ` [PATCH v3 05/15] block: Add REQ_ATOMIC flag John Garry
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	Prasad Singamsetty, John Garry

From: Prasad Singamsetty <prasad.singamsetty@oracle.com>

Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the
write is to be issued with torn write prevention, according to special
alignment and length rules.

Torn write prevention means that for a power failure or any other HW
failure, all or none of the data will be committed to storage, but
never a mix of old and new.

For any syscall interface utilizing struct iocb, add IOCB_ATOMIC for
iocb->ki_flags field to indicate the same.

A call to statx will give the relevant atomic write info:
- atomic_write_unit_min
- atomic_write_unit_max

Both values are a power-of-2.

Applications can avail of the atomic write feature by ensuring that the
total length of a write is a power-of-2 in size and also sized between
atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications
must ensure that the write is at a naturally-aligned offset in the file
with respect to the total write length.

Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the
flag set will have RWF_ATOMIC rejected and not just ignored.

Signed-off-by: Prasad Singamsetty <prasad.singamsetty@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 include/linux/fs.h      | 9 +++++++++
 include/uapi/linux/fs.h | 5 ++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index c316c0a92fff..95a7e2889d2c 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -119,6 +119,10 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
 #define FMODE_PWRITE		((__force fmode_t)0x10)
 /* File is opened for execution with sys_execve / sys_uselib */
 #define FMODE_EXEC		((__force fmode_t)0x20)
+
+/* File supports atomic writes */
+#define FMODE_CAN_ATOMIC_WRITE	((__force fmode_t)0x40)
+
 /* 32bit hashes as llseek() offset (for directories) */
 #define FMODE_32BITHASH         ((__force fmode_t)0x200)
 /* 64bit hashes as llseek() offset (for directories) */
@@ -328,6 +332,7 @@ enum rw_hint {
 #define IOCB_SYNC		(__force int) RWF_SYNC
 #define IOCB_NOWAIT		(__force int) RWF_NOWAIT
 #define IOCB_APPEND		(__force int) RWF_APPEND
+#define IOCB_ATOMIC		(__force int) RWF_ATOMIC
 
 /* non-RWF related bits - start at 16 */
 #define IOCB_EVENTFD		(1 << 16)
@@ -3344,6 +3349,10 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
 			return -EOPNOTSUPP;
 		kiocb_flags |= IOCB_NOIO;
 	}
+	if (flags & RWF_ATOMIC) {
+		if (!(ki->ki_filp->f_mode & FMODE_CAN_ATOMIC_WRITE))
+			return -EOPNOTSUPP;
+	}
 	kiocb_flags |= (__force int) (flags & RWF_SUPPORTED);
 	if (flags & RWF_SYNC)
 		kiocb_flags |= IOCB_DSYNC;
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 48ad69f7722e..a0975ae81e64 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -301,9 +301,12 @@ typedef int __bitwise __kernel_rwf_t;
 /* per-IO O_APPEND */
 #define RWF_APPEND	((__force __kernel_rwf_t)0x00000010)
 
+/* Atomic Write */
+#define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)
+
 /* mask of flags supported by the kernel */
 #define RWF_SUPPORTED	(RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
-			 RWF_APPEND)
+			 RWF_APPEND | RWF_ATOMIC)
 
 /* Pagemap ioctl */
 #define PAGEMAP_SCAN	_IOWR('f', 16, struct pm_scan_arg)
-- 
2.31.1



* [PATCH v3 05/15] block: Add REQ_ATOMIC flag
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (3 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 04/15] fs: Add RWF_ATOMIC and IOCB_ATOMIC flags for atomic write support John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  6:24   ` Christoph Hellwig
  2024-01-24 11:38 ` [PATCH v3 06/15] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	Himanshu Madhani, John Garry

From: Himanshu Madhani <himanshu.madhani@oracle.com>

Add flag REQ_ATOMIC, meaning an atomic operation. This should only be
used in conjunction with REQ_OP_WRITE.

We do not add a special "request atomic write" operation, so as to
avoid maintenance effort for an operation which is almost the same as
REQ_OP_WRITE.
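
As a hedged illustration (not part of this patch), an in-kernel
submitter would mark a write bio as atomic as below; everything apart
from REQ_ATOMIC is standard block layer API:

	static void example_submit_atomic_write(struct block_device *bdev,
						sector_t sector,
						unsigned short nr_vecs)
	{
		struct bio *bio = bio_alloc(bdev, nr_vecs,
					REQ_OP_WRITE | REQ_ATOMIC,
					GFP_KERNEL);

		bio->bi_iter.bi_sector = sector;
		/* add pages, then submit; the bio must obey the
		 * atomic write unit limits */
		submit_bio(bio);
	}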

Signed-off-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 include/linux/blk_types.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index f288c94374b3..cd7cceb8565d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -422,6 +422,7 @@ enum req_flag_bits {
 	__REQ_DRV,		/* for driver use */
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
 
+	__REQ_ATOMIC,		/* for atomic write operations */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -448,6 +449,7 @@ enum req_flag_bits {
 #define REQ_RAHEAD	(__force blk_opf_t)(1ULL << __REQ_RAHEAD)
 #define REQ_BACKGROUND	(__force blk_opf_t)(1ULL << __REQ_BACKGROUND)
 #define REQ_NOWAIT	(__force blk_opf_t)(1ULL << __REQ_NOWAIT)
+#define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 #define REQ_POLLED	(__force blk_opf_t)(1ULL << __REQ_POLLED)
 #define REQ_ALLOC_CACHE	(__force blk_opf_t)(1ULL << __REQ_ALLOC_CACHE)
 #define REQ_SWAP	(__force blk_opf_t)(1ULL << __REQ_SWAP)
-- 
2.31.1



* [PATCH v3 06/15] block: Pass blk_queue_get_max_sectors() a request pointer
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (4 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 05/15] block: Add REQ_ATOMIC flag John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  6:23   ` Christoph Hellwig
  2024-01-24 11:38 ` [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors John Garry
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

Currently blk_queue_get_max_sectors() is passed an enum req_op, which
does not work for atomic writes. This is because an atomic write has a
different max sectors value from a regular write, and we need
rq->cmd_flags to know that we have an atomic write, so pass the request
pointer, which has all information available.

Also use rq->cmd_flags instead of rq->bio->bi_opf when possible.

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/blk-merge.c | 3 ++-
 block/blk-mq.c    | 2 +-
 block/blk.h       | 6 ++++--
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2d470cf2173e..74e9e775f13d 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -592,7 +592,8 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
 	if (blk_rq_is_passthrough(rq))
 		return q->limits.max_hw_sectors;
 
-	max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	max_sectors = blk_queue_get_max_sectors(rq);
+
 	if (!q->limits.chunk_sectors ||
 	    req_op(rq) == REQ_OP_DISCARD ||
 	    req_op(rq) == REQ_OP_SECURE_ERASE)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index aa87fcfda1ec..78555e1a2897 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3056,7 +3056,7 @@ void blk_mq_submit_bio(struct bio *bio)
 blk_status_t blk_insert_cloned_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	unsigned int max_sectors = blk_queue_get_max_sectors(rq);
 	unsigned int max_segments = blk_rq_get_max_segments(rq);
 	blk_status_t ret;
 
diff --git a/block/blk.h b/block/blk.h
index 1ef920f72e0f..050696131329 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -166,9 +166,11 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
 	return queue_max_segments(rq->q);
 }
 
-static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
-						     enum req_op op)
+static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 {
+	struct request_queue *q = rq->q;
+	enum req_op op = req_op(rq);
+
 	if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
 		return min(q->limits.max_discard_sectors,
 			   UINT_MAX >> SECTOR_SHIFT);
-- 
2.31.1



* [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (5 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 06/15] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  6:26   ` Christoph Hellwig
  2024-01-24 11:38 ` [PATCH v3 08/15] block: Error an attempt to split an atomic write bio John Garry
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

Currently an IO's size is limited by the request_queue limit
max_sectors. Limit the size of an atomic write to the queue limit
atomic_write_max_sectors value instead.

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/blk-merge.c | 11 ++++++++++-
 block/blk.h       |  3 +++
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 74e9e775f13d..6306a2c82354 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -167,7 +167,16 @@ static inline unsigned get_max_io_size(struct bio *bio,
 {
 	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
 	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
-	unsigned max_sectors = lim->max_sectors, start, end;
+	unsigned max_sectors, start, end;
+
+	/*
+	 * We ignore lim->max_sectors for atomic writes simply because
+	 * it may be less than the bio size, which we cannot tolerate.
+	 */
+	if (bio->bi_opf & REQ_ATOMIC)
+		max_sectors = lim->atomic_write_max_sectors;
+	else
+		max_sectors = lim->max_sectors;
 
 	if (lim->chunk_sectors) {
 		max_sectors = min(max_sectors,
diff --git a/block/blk.h b/block/blk.h
index 050696131329..6ba8333fcf26 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -178,6 +178,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 	if (unlikely(op == REQ_OP_WRITE_ZEROES))
 		return q->limits.max_write_zeroes_sectors;
 
+	if (rq->cmd_flags & REQ_ATOMIC)
+		return q->limits.atomic_write_max_sectors;
+
 	return q->limits.max_sectors;
 }
 
-- 
2.31.1



* [PATCH v3 08/15] block: Error an attempt to split an atomic write bio
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (6 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-01-24 11:38 ` [PATCH v3 09/15] block: Add checks to merging of atomic writes John Garry
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

As the name suggests, we should not be splitting these.

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/blk-merge.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 6306a2c82354..9d714e8f76b3 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -314,6 +314,11 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 	*segs = nsegs;
 	return NULL;
 split:
+	if (bio->bi_opf & REQ_ATOMIC) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return ERR_PTR(-EINVAL);
+	}
 	/*
 	 * We can't sanely support splitting for a REQ_NOWAIT bio. End it
 	 * with EAGAIN if splitting is required and return an error pointer.
-- 
2.31.1



* [PATCH v3 09/15] block: Add checks to merging of atomic writes
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (7 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 08/15] block: Error an attempt to split an atomic write bio John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-12 10:54   ` Nilay Shroff
  2024-01-24 11:38 ` [PATCH v3 10/15] block: Add fops atomic write support John Garry
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

For atomic writes we allow merging, but we must adhere to some additional
rules:
- Only allow merging of atomic writes with other atomic writes. This avoids
  any checks to ensure that the resultant merge still adheres to the
  underlying device atomic write properties. It also avoids possible
  unnecessary overhead in the device when submitting the whole resultant
  merged IO with an atomic write command for SCSI.
- Ensure that the merged IO would not cross an atomic write boundary, if
  any.

We already ensure that we don't exceed the atomic writes size limit in
get_max_io_size().
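
As a worked example of the boundary rule (values illustrative), with a
64KB boundary: a merged range of [60KB, 68KB) straddles it, since start
and end lie either side of the 64KB mark; [64KB, 72KB) does not, since
it starts boundary-aligned; and any merged range longer than 64KB
necessarily straddles a boundary.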

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/blk-merge.c | 75 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 9d714e8f76b3..12a75a252ca2 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -18,6 +18,42 @@
 #include "blk-rq-qos.h"
 #include "blk-throttle.h"
 
+static bool rq_straddles_atomic_write_boundary(struct request *rq,
+					unsigned int front,
+					unsigned int back)
+{
+	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
+	unsigned int mask, imask;
+	loff_t start, end;
+
+	if (!boundary)
+		return false;
+
+	start = rq->__sector << SECTOR_SHIFT;
+	end = start + rq->__data_len;
+
+	start -= front;
+	end += back;
+
+	/* We're longer than the boundary, so must be crossing it */
+	if (end - start > boundary)
+		return true;
+
+	mask = boundary - 1;
+
+	/* start/end are boundary-aligned, so cannot be crossing */
+	if (!(start & mask) || !(end & mask))
+		return false;
+
+	imask = ~mask;
+
+	/* Top bits are different, so crossed a boundary */
+	if ((start & imask) != (end & imask))
+		return true;
+
+	return false;
+}
+
 static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
 {
 	*bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
@@ -659,6 +695,13 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, bio->bi_iter.bi_size)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -678,6 +721,13 @@ static int ll_front_merge_fn(struct request *req, struct bio *bio,
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				bio->bi_iter.bi_size, 0)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -714,6 +764,13 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
 		return 0;
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, blk_rq_bytes(next))) {
+			return 0;
+		}
+	}
+
 	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
 	if (total_phys_segments > blk_rq_get_max_segments(req))
 		return 0;
@@ -809,6 +866,18 @@ static enum elv_merge blk_try_req_merge(struct request *req,
 	return ELEVATOR_NO_MERGE;
 }
 
+static bool blk_atomic_write_mergeable_rq_bio(struct request *rq,
+					      struct bio *bio)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC);
+}
+
+static bool blk_atomic_write_mergeable_rqs(struct request *rq,
+					   struct request *next)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC);
+}
+
 /*
  * For non-mq, this has to be called with the request spinlock acquired.
  * For mq with scheduling, the appropriate queue wide lock should be held.
@@ -828,6 +897,9 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!blk_atomic_write_mergeable_rqs(req, next))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -955,6 +1027,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false)
+		return false;
+
 	return true;
 }
 
-- 
2.31.1



* [PATCH v3 10/15] block: Add fops atomic write support
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (8 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 09/15] block: Add checks to merging of atomic writes John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  9:36   ` Nilay Shroff
  2024-01-24 11:38 ` [PATCH v3 11/15] scsi: sd: Support reading atomic write properties from block limits VPD John Garry
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

Add support for atomic writes, as follows:
- Ensure that the IO follows all the atomic write rules, like being
  naturally aligned
- Set REQ_ATOMIC

IOCB_ATOMIC is always ignored for reads.
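
For example (illustrative limits), with atomic_write_unit_min_bytes =
4KB and atomic_write_unit_max_bytes = 64KB:

	pos=32KB len=16KB  -> valid
	pos=8KB  len=16KB  -> invalid (pos not aligned to the 16KB length)
	pos=0    len=24KB  -> invalid (length not a power-of-2)
	pos=0    len=128KB -> invalid (length exceeds unit_max)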

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/fops.c | 44 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/block/fops.c b/block/fops.c
index 0cf8cf72cdfa..9c8234373da9 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -41,6 +41,26 @@ static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
 		!bdev_iter_is_aligned(bdev, iter);
 }
 
+static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
+				      struct iov_iter *iter)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
+	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
+
+	if (!iter_is_ubuf(iter))
+		return false;
+	if (iov_iter_count(iter) & (min_bytes - 1))
+		return false;
+	if (!is_power_of_2(iov_iter_count(iter)))
+		return false;
+	if (pos & (iov_iter_count(iter) - 1))
+		return false;
+	if (iov_iter_count(iter) > max_bytes)
+		return false;
+	return true;
+}
+
 #define DIO_INLINE_BIO_VECS 4
 
 static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
@@ -48,6 +68,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 {
 	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	struct bio_vec inline_vecs[DIO_INLINE_BIO_VECS], *vecs;
+	bool is_read = iov_iter_rw(iter) == READ;
+	bool atomic_write = (iocb->ki_flags & IOCB_ATOMIC) && !is_read;
 	loff_t pos = iocb->ki_pos;
 	bool should_dirty = false;
 	struct bio bio;
@@ -56,6 +78,9 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 	if (blkdev_dio_unaligned(bdev, pos, iter))
 		return -EINVAL;
 
+	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
+		return -EINVAL;
+
 	if (nr_pages <= DIO_INLINE_BIO_VECS)
 		vecs = inline_vecs;
 	else {
@@ -65,7 +90,7 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 			return -ENOMEM;
 	}
 
-	if (iov_iter_rw(iter) == READ) {
+	if (is_read) {
 		bio_init(&bio, bdev, vecs, nr_pages, REQ_OP_READ);
 		if (user_backed_iter(iter))
 			should_dirty = true;
@@ -74,6 +99,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 	}
 	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
 	bio.bi_ioprio = iocb->ki_ioprio;
+	if (atomic_write)
+		bio.bi_opf |= REQ_ATOMIC;
 
 	ret = bio_iov_iter_get_pages(&bio, iter);
 	if (unlikely(ret))
@@ -171,6 +198,9 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 	loff_t pos = iocb->ki_pos;
 	int ret = 0;
 
+	if ((iocb->ki_flags & IOCB_ATOMIC) && !is_read)
+		return -EINVAL;
+
 	if (blkdev_dio_unaligned(bdev, pos, iter))
 		return -EINVAL;
 
@@ -305,6 +335,7 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	bool is_read = iov_iter_rw(iter) == READ;
 	blk_opf_t opf = is_read ? REQ_OP_READ : dio_bio_write_op(iocb);
+	bool atomic_write = (iocb->ki_flags & IOCB_ATOMIC) && !is_read;
 	struct blkdev_dio *dio;
 	struct bio *bio;
 	loff_t pos = iocb->ki_pos;
@@ -313,6 +344,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 	if (blkdev_dio_unaligned(bdev, pos, iter))
 		return -EINVAL;
 
+	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
+		return -EINVAL;
+
 	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
 		opf |= REQ_ALLOC_CACHE;
 	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
@@ -350,6 +384,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 		task_io_account_write(bio->bi_iter.bi_size);
 	}
 
+	if (atomic_write)
+		bio->bi_opf |= REQ_ATOMIC;
+
 	if (iocb->ki_flags & IOCB_NOWAIT)
 		bio->bi_opf |= REQ_NOWAIT;
 
@@ -620,6 +657,11 @@ static int blkdev_open(struct inode *inode, struct file *filp)
 	if (bdev_nowait(handle->bdev))
 		filp->f_mode |= FMODE_NOWAIT;
 
+	if (queue_atomic_write_unit_min_bytes(bdev_get_queue(handle->bdev)) &&
+	    (filp->f_flags & O_DIRECT)) {
+		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+	}
+
 	filp->f_mapping = handle->bdev->bd_inode->i_mapping;
 	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
 	filp->private_data = handle;
-- 
2.31.1



* [PATCH v3 11/15] scsi: sd: Support reading atomic write properties from block limits VPD
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (9 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 10/15] block: Add fops atomic write support John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  6:31   ` Christoph Hellwig
  2024-01-24 11:38 ` [PATCH v3 12/15] scsi: sd: Add WRITE_ATOMIC_16 support John Garry
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

Also update block layer request queue sysfs properties.

See sbc4r22 section 6.6.4 - Block limits VPD page.

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 drivers/scsi/sd.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++-
 drivers/scsi/sd.h |  8 ++++++
 2 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 0833b3e6aa6e..32dfb5327f92 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -916,6 +916,65 @@ static blk_status_t sd_setup_unmap_cmnd(struct scsi_cmnd *cmd)
 	return scsi_alloc_sgtables(cmd);
 }
 
+static void sd_config_atomic(struct scsi_disk *sdkp)
+{
+	unsigned int logical_block_size = sdkp->device->sector_size,
+		physical_block_size_sectors, max_atomic, unit_min, unit_max;
+	struct request_queue *q = sdkp->disk->queue;
+
+	if ((!sdkp->max_atomic && !sdkp->max_atomic_with_boundary) ||
+	    sdkp->protection_type == T10_PI_TYPE2_PROTECTION)
+		return;
+
+	physical_block_size_sectors = sdkp->physical_block_size /
+					sdkp->device->sector_size;
+
+	unit_min = rounddown_pow_of_two(sdkp->atomic_granularity ?
+					sdkp->atomic_granularity :
+					physical_block_size_sectors);
+
+	/*
+	 * Only use atomic boundary when we have the odd scenario of
+	 * sdkp->max_atomic == 0, which the spec does permit.
+	 */
+	if (sdkp->max_atomic) {
+		max_atomic = sdkp->max_atomic;
+		unit_max = rounddown_pow_of_two(sdkp->max_atomic);
+		sdkp->use_atomic_write_boundary = 0;
+	} else {
+		max_atomic = sdkp->max_atomic_with_boundary;
+		unit_max = rounddown_pow_of_two(sdkp->max_atomic_boundary);
+		sdkp->use_atomic_write_boundary = 1;
+	}
+
+	/*
+	 * Ensure compliance with granularity and alignment. For now, keep it
+	 * simple and just don't support atomic writes for values mismatched
+	 * with max_{boundary}atomic, physical block size, and
+	 * atomic_granularity itself.
+	 *
+	 * We're really being distrustful by checking unit_max also...
+	 */
+	if (sdkp->atomic_granularity > 1) {
+		if (unit_min > 1 && unit_min % sdkp->atomic_granularity)
+			return;
+		if (unit_max > 1 && unit_max % sdkp->atomic_granularity)
+			return;
+	}
+
+	if (sdkp->atomic_alignment > 1) {
+		if (unit_min > 1 && unit_min % sdkp->atomic_alignment)
+			return;
+		if (unit_max > 1 && unit_max % sdkp->atomic_alignment)
+			return;
+	}
+
+	blk_queue_atomic_write_max_bytes(q, max_atomic * logical_block_size);
+	blk_queue_atomic_write_unit_min_sectors(q, unit_min);
+	blk_queue_atomic_write_unit_max_sectors(q, unit_max);
+	blk_queue_atomic_write_boundary_bytes(q, 0);
+}
+
 static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd,
 		bool unmap)
 {
@@ -3071,7 +3130,7 @@ static void sd_read_block_limits(struct scsi_disk *sdkp)
 		sdkp->max_ws_blocks = (u32)get_unaligned_be64(&vpd->data[36]);
 
 		if (!sdkp->lbpme)
-			goto out;
+			goto read_atomics;
 
 		lba_count = get_unaligned_be32(&vpd->data[20]);
 		desc_count = get_unaligned_be32(&vpd->data[24]);
@@ -3102,6 +3161,14 @@ static void sd_read_block_limits(struct scsi_disk *sdkp)
 			else
 				sd_config_discard(sdkp, SD_LBP_DISABLE);
 		}
+read_atomics:
+		sdkp->max_atomic = get_unaligned_be32(&vpd->data[44]);
+		sdkp->atomic_alignment = get_unaligned_be32(&vpd->data[48]);
+		sdkp->atomic_granularity = get_unaligned_be32(&vpd->data[52]);
+		sdkp->max_atomic_with_boundary = get_unaligned_be32(&vpd->data[56]);
+		sdkp->max_atomic_boundary = get_unaligned_be32(&vpd->data[60]);
+
+		sd_config_atomic(sdkp);
 	}
 
  out:
diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index 409dda5350d1..990188a56b51 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -121,6 +121,13 @@ struct scsi_disk {
 	u32		max_unmap_blocks;
 	u32		unmap_granularity;
 	u32		unmap_alignment;
+
+	u32		max_atomic;
+	u32		atomic_alignment;
+	u32		atomic_granularity;
+	u32		max_atomic_with_boundary;
+	u32		max_atomic_boundary;
+
 	u32		index;
 	unsigned int	physical_block_size;
 	unsigned int	max_medium_access_timeouts;
@@ -151,6 +158,7 @@ struct scsi_disk {
 	unsigned	urswrz : 1;
 	unsigned	security : 1;
 	unsigned	ignore_medium_access_errors : 1;
+	unsigned	use_atomic_write_boundary : 1;
 };
 #define to_scsi_disk(obj) container_of(obj, struct scsi_disk, disk_dev)
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v3 12/15] scsi: sd: Add WRITE_ATOMIC_16 support
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (10 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 11/15] scsi: sd: Support reading atomic write properties from block limits VPD John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-01-24 11:38 ` [PATCH v3 13/15] scsi: scsi_debug: Atomic write support John Garry
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

Add function sd_setup_atomic_cmnd() to set up a WRITE_ATOMIC_16
CDB for when the REQ_ATOMIC flag is set for the request.

Also add trace info.
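
For reference, the WRITE ATOMIC(16) CDB layout which
sd_setup_atomic_cmnd() builds, sketched as a comment (a reading aid
against sbc4r22, not normative spec text):

/*
 * WRITE ATOMIC(16):
 *   byte  0      opcode (0x9c)
 *   byte  1      flags (WRPROTECT/DPO/FUA, as passed in)
 *   bytes 2-9    LBA (big endian)
 *   bytes 10-11  ATOMIC BOUNDARY (big endian; 0 => no boundary)
 *   bytes 12-13  TRANSFER LENGTH in logical blocks (big endian)
 *   bytes 14-15  group number / control (zeroed here)
 */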

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 drivers/scsi/scsi_trace.c   | 22 ++++++++++++++++++++++
 drivers/scsi/sd.c           | 24 ++++++++++++++++++++++++
 include/scsi/scsi_proto.h   |  1 +
 include/trace/events/scsi.h |  1 +
 4 files changed, 48 insertions(+)

diff --git a/drivers/scsi/scsi_trace.c b/drivers/scsi/scsi_trace.c
index 41a950075913..3e47c4472a80 100644
--- a/drivers/scsi/scsi_trace.c
+++ b/drivers/scsi/scsi_trace.c
@@ -325,6 +325,26 @@ scsi_trace_zbc_out(struct trace_seq *p, unsigned char *cdb, int len)
 	return ret;
 }
 
+static const char *
+scsi_trace_atomic_write16_out(struct trace_seq *p, unsigned char *cdb, int len)
+{
+	const char *ret = trace_seq_buffer_ptr(p);
+	unsigned int boundary_size;
+	unsigned int nr_blocks;
+	sector_t lba;
+
+	lba = get_unaligned_be64(&cdb[2]);
+	boundary_size = get_unaligned_be16(&cdb[10]);
+	nr_blocks = get_unaligned_be16(&cdb[12]);
+
+	trace_seq_printf(p, "lba=%llu txlen=%u boundary_size=%u",
+			  lba, nr_blocks, boundary_size);
+
+	trace_seq_putc(p, 0);
+
+	return ret;
+}
+
 static const char *
 scsi_trace_varlen(struct trace_seq *p, unsigned char *cdb, int len)
 {
@@ -385,6 +405,8 @@ scsi_trace_parse_cdb(struct trace_seq *p, unsigned char *cdb, int len)
 		return scsi_trace_zbc_in(p, cdb, len);
 	case ZBC_OUT:
 		return scsi_trace_zbc_out(p, cdb, len);
+	case WRITE_ATOMIC_16:
+		return scsi_trace_atomic_write16_out(p, cdb, len);
 	default:
 		return scsi_trace_misc(p, cdb, len);
 	}
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 32dfb5327f92..7df05d796387 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1240,6 +1240,26 @@ static int sd_cdl_dld(struct scsi_disk *sdkp, struct scsi_cmnd *scmd)
 	return (hint - IOPRIO_HINT_DEV_DURATION_LIMIT_1) + 1;
 }
 
+static blk_status_t sd_setup_atomic_cmnd(struct scsi_cmnd *cmd,
+					sector_t lba, unsigned int nr_blocks,
+					bool boundary, unsigned char flags)
+{
+	cmd->cmd_len  = 16;
+	cmd->cmnd[0]  = WRITE_ATOMIC_16;
+	cmd->cmnd[1]  = flags;
+	put_unaligned_be64(lba, &cmd->cmnd[2]);
+	if (boundary)
+		put_unaligned_be16(nr_blocks, &cmd->cmnd[10]);
+	else
+		put_unaligned_be16(0, &cmd->cmnd[10]);
+	put_unaligned_be16(nr_blocks, &cmd->cmnd[12]);
+	cmd->cmnd[14] = 0;
+	cmd->cmnd[15] = 0;
+
+	return BLK_STS_OK;
+}
+
 static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 {
 	struct request *rq = scsi_cmd_to_rq(cmd);
@@ -1311,6 +1331,10 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 	if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) {
 		ret = sd_setup_rw32_cmnd(cmd, write, lba, nr_blocks,
 					 protect | fua, dld);
+	} else if (rq->cmd_flags & REQ_ATOMIC && write) {
+		ret = sd_setup_atomic_cmnd(cmd, lba, nr_blocks,
+				sdkp->use_atomic_write_boundary,
+				protect | fua);
 	} else if (sdp->use_16_for_rw || (nr_blocks > 0xffff)) {
 		ret = sd_setup_rw16_cmnd(cmd, write, lba, nr_blocks,
 					 protect | fua, dld);
diff --git a/include/scsi/scsi_proto.h b/include/scsi/scsi_proto.h
index 07d65c1f59db..833de67305b5 100644
--- a/include/scsi/scsi_proto.h
+++ b/include/scsi/scsi_proto.h
@@ -119,6 +119,7 @@
 #define WRITE_SAME_16	      0x93
 #define ZBC_OUT		      0x94
 #define ZBC_IN		      0x95
+#define WRITE_ATOMIC_16	0x9c
 #define SERVICE_ACTION_BIDIRECTIONAL 0x9d
 #define SERVICE_ACTION_IN_16  0x9e
 #define SERVICE_ACTION_OUT_16 0x9f
diff --git a/include/trace/events/scsi.h b/include/trace/events/scsi.h
index 8e2d9b1b0e77..05f1945ed204 100644
--- a/include/trace/events/scsi.h
+++ b/include/trace/events/scsi.h
@@ -102,6 +102,7 @@
 		scsi_opcode_name(WRITE_32),			\
 		scsi_opcode_name(WRITE_SAME_32),		\
 		scsi_opcode_name(ATA_16),			\
+		scsi_opcode_name(WRITE_ATOMIC_16),		\
 		scsi_opcode_name(ATA_12))
 
 #define scsi_hostbyte_name(result)	{ result, #result }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v3 13/15] scsi: scsi_debug: Atomic write support
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (11 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 12/15] scsi: sd: Add WRITE_ATOMIC_16 support John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-01-24 11:38 ` [PATCH v3 14/15] nvme: Support atomic writes John Garry
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

Add initial support for atomic writes.

As is the standard method, feed device properties via module params,
those being:
- atomic_max_size_blks
- atomic_alignment_blks
- atomic_granularity_blks
- atomic_max_size_with_boundary_blks
- atomic_max_boundary_blks

These just match sbc4r22 section 6.6.4 - Block limits VPD page.

We only support WRITE_ATOMIC_16.

The major change in the driver is how we lock the device for RW accesses.

Currently the driver uses a per-device lock for accessing device metadata
and "media" data (calls to do_device_access()) atomically for the duration
of the whole read/write command.

This does not suit verifying atomic writes: since all reads/writes are
currently atomic, using atomic writes would prove nothing.

Change the device access model so that regular writes are atomic only on
a per-sector basis, while reads and atomic writes are fully atomic.

As mentioned, since accessing metadata and device media is atomic,
continue to treat regular writes involving metadata - like discard or PI
- as atomic. We can improve this later.

Currently we only support the model where an atomic write waits for any
overlapping in-flight reads or writes to complete before commencing. This
is described in section 4.29.3.2 of the SBC. However, we simplify things
and wait for all outstanding accesses to complete (when issuing an atomic
write); see the locking sketch below.
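
As a reading aid, the intended lock nesting for do_device_access() under
this model, distilled into a sketch (the helper names are those added by
this patch; error handling omitted):

static void device_access_sketch(struct sdeb_store_info *sip, u64 lba,
				 u32 num, bool do_write, bool atomic_write)
{
	u32 i;

	/* atomic writes take macc_data_lck as writer, all else as reader */
	sdeb_data_lock(sip, atomic_write);
	for (i = 0; i < num; i++) {
		/* regular writes exclude each other per sector only */
		sdeb_data_sector_lock(sip, do_write);
		/* ... copy one sector to/from sip->storep here ... */
		sdeb_data_sector_unlock(sip, do_write);
	}
	sdeb_data_unlock(sip, atomic_write);
}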

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 drivers/scsi/scsi_debug.c | 589 +++++++++++++++++++++++++++++---------
 1 file changed, 456 insertions(+), 133 deletions(-)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 9070c0dc05ef..9568bcbf0821 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -68,6 +68,8 @@ static const char *sdebug_version_date = "20210520";
 
 /* Additional Sense Code (ASC) */
 #define NO_ADDITIONAL_SENSE 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASC 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASCQ 0x23
 #define LOGICAL_UNIT_NOT_READY 0x4
 #define LOGICAL_UNIT_COMMUNICATION_FAILURE 0x8
 #define UNRECOVERED_READ_ERR 0x11
@@ -102,6 +104,7 @@ static const char *sdebug_version_date = "20210520";
 #define READ_BOUNDARY_ASCQ 0x7
 #define ATTEMPT_ACCESS_GAP 0x9
 #define INSUFF_ZONE_ASCQ 0xe
+/* see drivers/scsi/sense_codes.h */
 
 /* Additional Sense Code Qualifier (ASCQ) */
 #define ACK_NAK_TO 0x3
@@ -151,6 +154,12 @@ static const char *sdebug_version_date = "20210520";
 #define DEF_VIRTUAL_GB   0
 #define DEF_VPD_USE_HOSTNO 1
 #define DEF_WRITESAME_LENGTH 0xFFFF
+#define DEF_ATOMIC_WR 0
+#define DEF_ATOMIC_WR_MAX_LENGTH 8192
+#define DEF_ATOMIC_WR_ALIGN 2
+#define DEF_ATOMIC_WR_GRAN 2
+#define DEF_ATOMIC_WR_MAX_LENGTH_BNDRY (DEF_ATOMIC_WR_MAX_LENGTH)
+#define DEF_ATOMIC_WR_MAX_BNDRY 128
 #define DEF_STRICT 0
 #define DEF_STATISTICS false
 #define DEF_SUBMIT_QUEUES 1
@@ -373,7 +382,9 @@ struct sdebug_host_info {
 
 /* There is an xarray of pointers to this struct's objects, one per host */
 struct sdeb_store_info {
-	rwlock_t macc_lck;	/* for atomic media access on this store */
+	rwlock_t macc_data_lck;	/* for media data access on this store */
+	rwlock_t macc_meta_lck;	/* for atomic media meta access on this store */
+	rwlock_t macc_sector_lck;	/* per-sector media data access on this store */
 	u8 *storep;		/* user data storage (ram) */
 	struct t10_pi_tuple *dif_storep; /* protection info */
 	void *map_storep;	/* provisioning map */
@@ -397,12 +408,20 @@ struct sdebug_defer {
 	enum sdeb_defer_type defer_t;
 };
 
+struct sdebug_device_access_info {
+	bool atomic_write;
+	u64 lba;
+	u32 num;
+	struct scsi_cmnd *self;
+};
+
 struct sdebug_queued_cmd {
 	/* corresponding bit set in in_use_bm[] in owning struct sdebug_queue
 	 * instance indicates this slot is in use.
 	 */
 	struct sdebug_defer sd_dp;
 	struct scsi_cmnd *scmd;
+	struct sdebug_device_access_info *i;
 };
 
 struct sdebug_scsi_cmd {
@@ -462,7 +481,8 @@ enum sdeb_opcode_index {
 	SDEB_I_PRE_FETCH = 29,		/* 10, 16 */
 	SDEB_I_ZONE_OUT = 30,		/* 0x94+SA; includes no data xfer */
 	SDEB_I_ZONE_IN = 31,		/* 0x95+SA; all have data-in */
-	SDEB_I_LAST_ELEM_P1 = 32,	/* keep this last (previous + 1) */
+	SDEB_I_ATOMIC_WRITE_16 = 32,
+	SDEB_I_LAST_ELEM_P1 = 33,	/* keep this last (previous + 1) */
 };
 
 
@@ -496,7 +516,8 @@ static const unsigned char opcode_ind_arr[256] = {
 	0, 0, 0, SDEB_I_VERIFY,
 	SDEB_I_PRE_FETCH, SDEB_I_SYNC_CACHE, 0, SDEB_I_WRITE_SAME,
 	SDEB_I_ZONE_OUT, SDEB_I_ZONE_IN, 0, 0,
-	0, 0, 0, 0, 0, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
+	0, 0, 0, 0,
+	SDEB_I_ATOMIC_WRITE_16, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
 /* 0xa0; 0xa0->0xbf: 12 byte cdbs */
 	SDEB_I_REPORT_LUNS, SDEB_I_ATA_PT, 0, SDEB_I_MAINT_IN,
 	     SDEB_I_MAINT_OUT, 0, 0, 0,
@@ -544,6 +565,7 @@ static int resp_write_buffer(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_sync_cache(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_pre_fetch(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_report_zones(struct scsi_cmnd *, struct sdebug_dev_info *);
+static int resp_atomic_write(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_open_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_close_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_finish_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
@@ -782,6 +804,11 @@ static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = {
 	    resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */
 		{16,  0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
 		 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} },
+/* 32 */
+	{0, 0x9c, 0x0, F_D_OUT | FF_MEDIA_IO,
+	    resp_atomic_write, NULL, /* ATOMIC WRITE 16 */
+		{16,  0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} },
 /* sentinel */
 	{0xff, 0, 0, 0, NULL, NULL,		/* terminating element */
 	    {0,  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
@@ -829,6 +856,13 @@ static unsigned int sdebug_unmap_granularity = DEF_UNMAP_GRANULARITY;
 static unsigned int sdebug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS;
 static unsigned int sdebug_unmap_max_desc = DEF_UNMAP_MAX_DESC;
 static unsigned int sdebug_write_same_length = DEF_WRITESAME_LENGTH;
+static unsigned int sdebug_atomic_wr = DEF_ATOMIC_WR;
+static unsigned int sdebug_atomic_wr_max_length = DEF_ATOMIC_WR_MAX_LENGTH;
+static unsigned int sdebug_atomic_wr_align = DEF_ATOMIC_WR_ALIGN;
+static unsigned int sdebug_atomic_wr_gran = DEF_ATOMIC_WR_GRAN;
+static unsigned int sdebug_atomic_wr_max_length_bndry =
+			DEF_ATOMIC_WR_MAX_LENGTH_BNDRY;
+static unsigned int sdebug_atomic_wr_max_bndry = DEF_ATOMIC_WR_MAX_BNDRY;
 static int sdebug_uuid_ctl = DEF_UUID_CTL;
 static bool sdebug_random = DEF_RANDOM;
 static bool sdebug_per_host_store = DEF_PER_HOST_STORE;
@@ -1180,6 +1214,11 @@ static inline bool scsi_debug_lbp(void)
 		(sdebug_lbpu || sdebug_lbpws || sdebug_lbpws10);
 }
 
+static inline bool scsi_debug_atomic_write(void)
+{
+	return 0 == sdebug_fake_rw && sdebug_atomic_wr;
+}
+
 static void *lba2fake_store(struct sdeb_store_info *sip,
 			    unsigned long long lba)
 {
@@ -1807,6 +1846,14 @@ static int inquiry_vpd_b0(unsigned char *arr)
 	/* Maximum WRITE SAME Length */
 	put_unaligned_be64(sdebug_write_same_length, &arr[32]);
 
+	if (sdebug_atomic_wr) {
+		put_unaligned_be32(sdebug_atomic_wr_max_length, &arr[40]);
+		put_unaligned_be32(sdebug_atomic_wr_align, &arr[44]);
+		put_unaligned_be32(sdebug_atomic_wr_gran, &arr[48]);
+		put_unaligned_be32(sdebug_atomic_wr_max_length_bndry, &arr[52]);
+		put_unaligned_be32(sdebug_atomic_wr_max_bndry, &arr[56]);
+	}
+
 	return 0x3c; /* Mandatory page length for Logical Block Provisioning */
 }
 
@@ -3304,15 +3351,239 @@ static inline struct sdeb_store_info *devip2sip(struct sdebug_dev_info *devip,
 	return xa_load(per_store_ap, devip->sdbg_host->si_idx);
 }
 
+
+static inline void
+sdeb_read_lock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__acquire(lock);
+	else
+		read_lock(lock);
+}
+
+static inline void
+sdeb_read_unlock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__release(lock);
+	else
+		read_unlock(lock);
+}
+
+static inline void
+sdeb_write_lock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__acquire(lock);
+	else
+		write_lock(lock);
+}
+
+static inline void
+sdeb_write_unlock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__release(lock);
+	else
+		write_unlock(lock);
+}
+
+static inline void
+sdeb_data_read_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_read_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_sector_read_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_read_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_unlock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_unlock(&sip->macc_sector_lck);
+}
+
+/*
+ * Atomic locking:
+ * We simplify the atomic model to allow only 1x atomic write and many non-
+ * atomic reads or writes for all LBAs.
+ *
+ * A RW lock has a similar behaviour:
+ * Only 1x writer and many readers.
+ *
+ * So use a RW lock for per-device read and write locking:
+ * An atomic access grabs the lock as a writer and non-atomic grabs the lock
+ * as a reader.
+ */
+
+static inline void
+sdeb_data_lock(struct sdeb_store_info *sip, bool atomic_write)
+{
+	if (atomic_write)
+		sdeb_data_write_lock(sip);
+	else
+		sdeb_data_read_lock(sip);
+}
+
+static inline void
+sdeb_data_unlock(struct sdeb_store_info *sip, bool atomic_write)
+{
+	if (atomic_write)
+		sdeb_data_write_unlock(sip);
+	else
+		sdeb_data_read_unlock(sip);
+}
+
+/* Allow many reads but only 1x write per sector */
+static inline void
+sdeb_data_sector_lock(struct sdeb_store_info *sip, bool do_write)
+{
+	if (do_write)
+		sdeb_data_sector_write_lock(sip);
+	else
+		sdeb_data_sector_read_lock(sip);
+}
+
+static inline void
+sdeb_data_sector_unlock(struct sdeb_store_info *sip, bool do_write)
+{
+	if (do_write)
+		sdeb_data_sector_write_unlock(sip);
+	else
+		sdeb_data_sector_read_unlock(sip);
+}
+
+static inline void
+sdeb_meta_read_lock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__acquire(&sip->macc_meta_lck);
+		else
+			__acquire(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			read_lock(&sip->macc_meta_lck);
+		else
+			read_lock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_read_unlock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__release(&sip->macc_meta_lck);
+		else
+			__release(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			read_unlock(&sip->macc_meta_lck);
+		else
+			read_unlock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_write_lock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__acquire(&sip->macc_meta_lck);
+		else
+			__acquire(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			write_lock(&sip->macc_meta_lck);
+		else
+			write_lock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_write_unlock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__release(&sip->macc_meta_lck);
+		else
+			__release(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			write_unlock(&sip->macc_meta_lck);
+		else
+			write_unlock(&sdeb_fake_rw_lck);
+	}
+}
+
 /* Returns number of bytes copied or -1 if error. */
 static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
-			    u32 sg_skip, u64 lba, u32 num, bool do_write)
+			    u32 sg_skip, u64 lba, u32 num, bool do_write,
+			    bool atomic_write)
 {
 	int ret;
-	u64 block, rest = 0;
+	u64 block;
 	enum dma_data_direction dir;
 	struct scsi_data_buffer *sdb = &scp->sdb;
 	u8 *fsp;
+	int i;
+
+	/*
+	 * Even though reads are inherently atomic (in this driver), we expect
+	 * the atomic flag only for writes.
+	 */
+	if (!do_write && atomic_write)
+		return -1;
 
 	if (do_write) {
 		dir = DMA_TO_DEVICE;
@@ -3328,21 +3599,26 @@ static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
 	fsp = sip->storep;
 
 	block = do_div(lba, sdebug_store_sectors);
-	if (block + num > sdebug_store_sectors)
-		rest = block + num - sdebug_store_sectors;
 
-	ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
+	/* Only allow 1x atomic write or multiple non-atomic writes at any given time */
+	sdeb_data_lock(sip, atomic_write);
+	for (i = 0; i < num; i++) {
+		/* We shouldn't need to lock for atomic writes, but do it anyway */
+		sdeb_data_sector_lock(sip, do_write);
+		ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
 		   fsp + (block * sdebug_sector_size),
-		   (num - rest) * sdebug_sector_size, sg_skip, do_write);
-	if (ret != (num - rest) * sdebug_sector_size)
-		return ret;
-
-	if (rest) {
-		ret += sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
-			    fsp, rest * sdebug_sector_size,
-			    sg_skip + ((num - rest) * sdebug_sector_size),
-			    do_write);
+		   sdebug_sector_size, sg_skip, do_write);
+		sdeb_data_sector_unlock(sip, do_write);
+		if (ret != sdebug_sector_size) {
+			ret += (i * sdebug_sector_size);
+			break;
+		}
+		sg_skip += sdebug_sector_size;
+		if (++block >= sdebug_store_sectors)
+			block = 0;
 	}
+	if (i == num)
+		ret = num * sdebug_sector_size;
+	sdeb_data_unlock(sip, atomic_write);
 
 	return ret;
 }
@@ -3518,70 +3794,6 @@ static int prot_verify_read(struct scsi_cmnd *scp, sector_t start_sec,
 	return ret;
 }
 
-static inline void
-sdeb_read_lock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__acquire(&sip->macc_lck);
-		else
-			__acquire(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			read_lock(&sip->macc_lck);
-		else
-			read_lock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_read_unlock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__release(&sip->macc_lck);
-		else
-			__release(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			read_unlock(&sip->macc_lck);
-		else
-			read_unlock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_write_lock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__acquire(&sip->macc_lck);
-		else
-			__acquire(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			write_lock(&sip->macc_lck);
-		else
-			write_lock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_write_unlock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__release(&sip->macc_lck);
-		else
-			__release(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			write_unlock(&sip->macc_lck);
-		else
-			write_unlock(&sdeb_fake_rw_lck);
-	}
-}
-
 static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 {
 	bool check_prot;
@@ -3591,6 +3803,7 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	u64 lba;
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;
+	bool meta_data_locked = false;
 
 	switch (cmd[0]) {
 	case READ_16:
@@ -3649,6 +3862,10 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		atomic_set(&sdeb_inject_pending, 0);
 	}
 
+	/*
+	 * When checking device access params, for reads we only check data
+	 * versus what is set at init time, so no need to lock.
+	 */
 	ret = check_device_access_params(scp, lba, num, false);
 	if (ret)
 		return ret;
@@ -3668,29 +3885,33 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 
-	sdeb_read_lock(sip);
+	if (sdebug_dev_is_zoned(devip) ||
+	    (sdebug_dix && scsi_prot_sg_count(scp)))  {
+		sdeb_meta_read_lock(sip);
+		meta_data_locked = true;
+	}
 
 	/* DIX + T10 DIF */
 	if (unlikely(sdebug_dix && scsi_prot_sg_count(scp))) {
 		switch (prot_verify_read(scp, lba, num, ei_lba)) {
 		case 1: /* Guard tag error */
 			if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
 				return check_condition_result;
 			} else if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
 				return illegal_condition_result;
 			}
 			break;
 		case 3: /* Reference tag error */
 			if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
 				return check_condition_result;
 			} else if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
 				return illegal_condition_result;
 			}
@@ -3698,8 +3919,9 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		}
 	}
 
-	ret = do_device_access(sip, scp, 0, lba, num, false);
-	sdeb_read_unlock(sip);
+	ret = do_device_access(sip, scp, 0, lba, num, false, false);
+	if (meta_data_locked)
+		sdeb_meta_read_unlock(sip);
 	if (unlikely(ret == -1))
 		return DID_ERROR << 16;
 
@@ -3888,6 +4110,7 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	u64 lba;
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;
+	bool meta_data_locked = false;
 
 	switch (cmd[0]) {
 	case WRITE_16:
@@ -3941,10 +4164,17 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 				    "to DIF device\n");
 	}
 
-	sdeb_write_lock(sip);
+	if (sdebug_dev_is_zoned(devip) ||
+	    (sdebug_dix && scsi_prot_sg_count(scp)) ||
+	    scsi_debug_lbp())  {
+		sdeb_meta_write_lock(sip);
+		meta_data_locked = true;
+	}
+
 	ret = check_device_access_params(scp, lba, num, true);
 	if (ret) {
-		sdeb_write_unlock(sip);
+		if (meta_data_locked)
+			sdeb_meta_write_unlock(sip);
 		return ret;
 	}
 
@@ -3953,22 +4183,22 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		switch (prot_verify_write(scp, lba, num, ei_lba)) {
 		case 1: /* Guard tag error */
 			if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
 				return illegal_condition_result;
 			} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
 				return check_condition_result;
 			}
 			break;
 		case 3: /* Reference tag error */
 			if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
 				return illegal_condition_result;
 			} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
 				return check_condition_result;
 			}
@@ -3976,13 +4206,16 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		}
 	}
 
-	ret = do_device_access(sip, scp, 0, lba, num, true);
+	ret = do_device_access(sip, scp, 0, lba, num, true, false);
 	if (unlikely(scsi_debug_lbp()))
 		map_region(sip, lba, num);
+
 	/* If ZBC zone then bump its write pointer */
 	if (sdebug_dev_is_zoned(devip))
 		zbc_inc_wp(devip, lba, num);
-	sdeb_write_unlock(sip);
+	if (meta_data_locked)
+		sdeb_meta_write_unlock(sip);
+
 	if (unlikely(-1 == ret))
 		return DID_ERROR << 16;
 	else if (unlikely(sdebug_verbose &&
@@ -4089,7 +4322,8 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 		goto err_out;
 	}
 
-	sdeb_write_lock(sip);
+	/* Just keep it simple and always lock for now */
+	sdeb_meta_write_lock(sip);
 	sg_off = lbdof_blen;
 	/* Spec says Buffer xfer Length field in number of LBs in dout */
 	cum_lb = 0;
@@ -4132,7 +4366,11 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 			}
 		}
 
-		ret = do_device_access(sip, scp, sg_off, lba, num, true);
+		/*
+		 * Write ranges atomically, to keep behaviour as close as
+		 * possible to the pre-atomic writes code.
+		 */
+		ret = do_device_access(sip, scp, sg_off, lba, num, true, true);
 		/* If ZBC zone then bump its write pointer */
 		if (sdebug_dev_is_zoned(devip))
 			zbc_inc_wp(devip, lba, num);
@@ -4171,7 +4409,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 	}
 	ret = 0;
 err_out_unlock:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 err_out:
 	kfree(lrdp);
 	return ret;
@@ -4190,14 +4428,16 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 						scp->device->hostdata, true);
 	u8 *fs1p;
 	u8 *fsp;
+	bool meta_data_locked = false;
 
-	sdeb_write_lock(sip);
+	if (sdebug_dev_is_zoned(devip) || scsi_debug_lbp()) {
+		sdeb_meta_write_lock(sip);
+		meta_data_locked = true;
+	}
 
 	ret = check_device_access_params(scp, lba, num, true);
-	if (ret) {
-		sdeb_write_unlock(sip);
-		return ret;
-	}
+	if (ret)
+		goto out;
 
 	if (unmap && scsi_debug_lbp()) {
 		unmap_region(sip, lba, num);
@@ -4208,6 +4448,7 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 	/* if ndob then zero 1 logical block, else fetch 1 logical block */
 	fsp = sip->storep;
 	fs1p = fsp + (block * lb_size);
+	sdeb_data_write_lock(sip);
 	if (ndob) {
 		memset(fs1p, 0, lb_size);
 		ret = 0;
@@ -4215,8 +4456,8 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 		ret = fetch_to_dev_buffer(scp, fs1p, lb_size);
 
 	if (-1 == ret) {
-		sdeb_write_unlock(sip);
-		return DID_ERROR << 16;
+		ret = DID_ERROR << 16;
+		sdeb_data_write_unlock(sip);
+		goto out;
 	} else if (sdebug_verbose && !ndob && (ret < lb_size))
 		sdev_printk(KERN_INFO, scp->device,
 			    "%s: %s: lb size=%u, IO sent=%d bytes\n",
@@ -4233,10 +4474,12 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 	/* If ZBC zone then bump its write pointer */
 	if (sdebug_dev_is_zoned(devip))
 		zbc_inc_wp(devip, lba, num);
+	sdeb_data_write_unlock(sip);
+	ret = 0;
 out:
-	sdeb_write_unlock(sip);
-
-	return 0;
+	if (meta_data_locked)
+		sdeb_meta_write_unlock(sip);
+	return ret;
 }
 
 static int resp_write_same_10(struct scsi_cmnd *scp,
@@ -4379,25 +4622,30 @@ static int resp_comp_write(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
-
 	ret = do_dout_fetch(scp, dnum, arr);
 	if (ret == -1) {
 		retval = DID_ERROR << 16;
-		goto cleanup;
+		goto cleanup_free;
 	} else if (sdebug_verbose && (ret < (dnum * lb_size)))
 		sdev_printk(KERN_INFO, scp->device, "%s: compare_write: cdb "
 			    "indicated=%u, IO sent=%d bytes\n", my_name,
 			    dnum * lb_size, ret);
+
+	sdeb_data_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 	if (!comp_write_worker(sip, lba, num, arr, false)) {
 		mk_sense_buffer(scp, MISCOMPARE, MISCOMPARE_VERIFY_ASC, 0);
 		retval = check_condition_result;
-		goto cleanup;
+		goto cleanup_unlock;
 	}
+
+	/* Cover sip->map_storep (which map_region() sets) with the data lock */
 	if (scsi_debug_lbp())
 		map_region(sip, lba, num);
-cleanup:
-	sdeb_write_unlock(sip);
+cleanup_unlock:
+	sdeb_meta_write_unlock(sip);
+	sdeb_data_write_unlock(sip);
+cleanup_free:
 	kfree(arr);
 	return retval;
 }
@@ -4441,7 +4689,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	desc = (void *)&buf[8];
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	for (i = 0 ; i < descriptors ; i++) {
 		unsigned long long lba = get_unaligned_be64(&desc[i].lba);
@@ -4457,7 +4705,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	ret = 0;
 
 out:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	kfree(buf);
 
 	return ret;
@@ -4570,12 +4818,13 @@ static int resp_pre_fetch(struct scsi_cmnd *scp,
 		rest = block + nblks - sdebug_store_sectors;
 
 	/* Try to bring the PRE-FETCH range into CPU's cache */
-	sdeb_read_lock(sip);
+	sdeb_data_read_lock(sip);
 	prefetch_range(fsp + (sdebug_sector_size * block),
 		       (nblks - rest) * sdebug_sector_size);
 	if (rest)
 		prefetch_range(fsp, rest * sdebug_sector_size);
-	sdeb_read_unlock(sip);
+
+	sdeb_data_read_unlock(sip);
 fini:
 	if (cmd[1] & 0x2)
 		res = SDEG_RES_IMMED_MASK;
@@ -4734,7 +4983,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 	/* Not changing store, so only need read access */
-	sdeb_read_lock(sip);
+	sdeb_data_read_lock(sip);
 
 	ret = do_dout_fetch(scp, a_num, arr);
 	if (ret == -1) {
@@ -4756,7 +5005,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		goto cleanup;
 	}
 cleanup:
-	sdeb_read_unlock(sip);
+	sdeb_data_read_unlock(sip);
 	kfree(arr);
 	return ret;
 }
@@ -4802,7 +5051,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_read_lock(sip);
+	sdeb_meta_read_lock(sip);
 
 	desc = arr + 64;
 	for (lba = zs_lba; lba < sdebug_capacity;
@@ -4900,11 +5149,70 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 	ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
 
 fini:
-	sdeb_read_unlock(sip);
+	sdeb_meta_read_unlock(sip);
 	kfree(arr);
 	return ret;
 }
 
+static int resp_atomic_write(struct scsi_cmnd *scp,
+			     struct sdebug_dev_info *devip)
+{
+	struct sdeb_store_info *sip;
+	u8 *cmd = scp->cmnd;
+	u16 boundary, len;
+	u64 lba, lba_tmp;
+	int ret;
+
+	if (!scsi_debug_atomic_write()) {
+		mk_sense_invalid_opcode(scp);
+		return check_condition_result;
+	}
+
+	sip = devip2sip(devip, true);
+
+	lba = get_unaligned_be64(cmd + 2);
+	boundary = get_unaligned_be16(cmd + 10);
+	len = get_unaligned_be16(cmd + 12);
+
+	lba_tmp = lba;
+	if (sdebug_atomic_wr_align &&
+	    do_div(lba_tmp, sdebug_atomic_wr_align)) {
+		/* Does not meet alignment requirement */
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+		return check_condition_result;
+	}
+
+	if (sdebug_atomic_wr_gran && len % sdebug_atomic_wr_gran) {
+		/* Does not meet granularity requirement */
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+		return check_condition_result;
+	}
+
+	if (boundary > 0) {
+		if (boundary > sdebug_atomic_wr_max_bndry) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+
+		if (len > sdebug_atomic_wr_max_length_bndry) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+	} else {
+		if (len > sdebug_atomic_wr_max_length) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+	}
+
+	ret = do_device_access(sip, scp, 0, lba, len, true, true);
+	if (unlikely(ret == -1))
+		return DID_ERROR << 16;
+	if (unlikely(ret != len * sdebug_sector_size))
+		return DID_ERROR << 16;
+	return 0;
+}
+
 /* Logic transplanted from tcmu-runner, file_zbc.c */
 static void zbc_open_all(struct sdebug_dev_info *devip)
 {
@@ -4931,8 +5239,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		mk_sense_invalid_opcode(scp);
 		return check_condition_result;
 	}
-
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		/* Check if all closed zones can be open */
@@ -4981,7 +5288,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	zbc_open_zone(devip, zsp, true);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5008,7 +5315,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_close_all(devip);
@@ -5037,7 +5344,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
 
 	zbc_close_zone(devip, zsp);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5080,7 +5387,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_finish_all(devip);
@@ -5109,7 +5416,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
 
 	zbc_finish_zone(devip, zsp, true);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5160,7 +5467,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_rwp_all(devip);
@@ -5188,7 +5495,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	zbc_rwp_zone(devip, zsp);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -6152,6 +6460,7 @@ module_param_named(lbprz, sdebug_lbprz, int, S_IRUGO);
 module_param_named(lbpu, sdebug_lbpu, int, S_IRUGO);
 module_param_named(lbpws, sdebug_lbpws, int, S_IRUGO);
 module_param_named(lbpws10, sdebug_lbpws10, int, S_IRUGO);
+module_param_named(atomic_wr, sdebug_atomic_wr, int, S_IRUGO);
 module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
 module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
 module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
@@ -6186,6 +6495,11 @@ module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO);
 module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO);
 module_param_named(unmap_max_blocks, sdebug_unmap_max_blocks, int, S_IRUGO);
 module_param_named(unmap_max_desc, sdebug_unmap_max_desc, int, S_IRUGO);
+module_param_named(atomic_wr_max_length, sdebug_atomic_wr_max_length, int, S_IRUGO);
+module_param_named(atomic_wr_align, sdebug_atomic_wr_align, int, S_IRUGO);
+module_param_named(atomic_wr_gran, sdebug_atomic_wr_gran, int, S_IRUGO);
+module_param_named(atomic_wr_max_length_bndry, sdebug_atomic_wr_max_length_bndry, int, S_IRUGO);
+module_param_named(atomic_wr_max_bndry, sdebug_atomic_wr_max_bndry, int, S_IRUGO);
 module_param_named(uuid_ctl, sdebug_uuid_ctl, int, S_IRUGO);
 module_param_named(virtual_gb, sdebug_virtual_gb, int, S_IRUGO | S_IWUSR);
 module_param_named(vpd_use_hostno, sdebug_vpd_use_hostno, int,
@@ -6229,6 +6543,7 @@ MODULE_PARM_DESC(lbprz,
 MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)");
 MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)");
 MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)");
+MODULE_PARM_DESC(atomic_wr, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=0)");
 MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
 MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
@@ -6260,6 +6575,11 @@ MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"
 MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)");
 MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)");
 MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)");
+MODULE_PARM_DESC(atomic_wr_max_length, "max # of blocks can be atomically written in one cmd (def=8192)");
+MODULE_PARM_DESC(atomic_wr_align, "minimum alignment of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_gran, "minimum granularity of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_max_length_bndry, "max # of blocks can be atomically written in one cmd with boundary set (def=8192)");
+MODULE_PARM_DESC(atomic_wr_max_bndry, "max # boundaries per atomic write (def=128)");
 MODULE_PARM_DESC(uuid_ctl,
 		 "1->use uuid for lu name, 0->don't, 2->all use same (def=0)");
 MODULE_PARM_DESC(virtual_gb, "virtual gigabyte (GiB) size (def=0 -> use dev_size_mb)");
@@ -7613,7 +7934,9 @@ static int sdebug_add_store(void)
 			map_region(sip, 0, 2);
 	}
 
-	rwlock_init(&sip->macc_lck);
+	rwlock_init(&sip->macc_data_lck);
+	rwlock_init(&sip->macc_meta_lck);
+	rwlock_init(&sip->macc_sector_lck);
 	return (int)n_idx;
 err:
 	sdebug_erase_store((int)n_idx, sip);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v3 14/15] nvme: Support atomic writes
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (12 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 13/15] scsi: scsi_debug: Atomic write support John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-02-13  6:42   ` Christoph Hellwig
  2024-02-14 12:27   ` Nilay Shroff
  2024-01-24 11:38 ` [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically John Garry
  2024-01-29  6:18 ` [PATCH v3 00/15] block atomic writes Christoph Hellwig
  15 siblings, 2 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	Alan Adamson, John Garry

From: Alan Adamson <alan.adamson@oracle.com>

Support reading atomic write registers to fill in request_queue
properties.

Use the following method to calculate the limits:
atomic_write_max_bytes = NAWUPF ?: AWUPF
atomic_write_unit_min = logical_block_size
atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
atomic_write_boundary = NABSPF
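
Expressed as a code sketch (illustrative only; NAWUPF/AWUPF/NABSPF are
0's based values in logical blocks, and flp2() above is
rounddown_pow_of_two()):

	u32 bs = queue_logical_block_size(q);
	u32 units = id->nawupf ? le16_to_cpu(id->nawupf) :
				 ctrl->subsys->awupf;
	u32 atomic_bs = (units + 1) * bs;

	atomic_write_max_bytes = atomic_bs;
	atomic_write_unit_min  = bs;
	atomic_write_unit_max  = rounddown_pow_of_two(atomic_bs);
	atomic_write_boundary  = id->nabspf ?
			(le16_to_cpu(id->nabspf) + 1) * bs : 0;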

Signed-off-by: Alan Adamson <alan.adamson@oracle.com>
#jpg: some rewrite
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 drivers/nvme/host/core.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 85ab0fcf9e88..5045c84f2516 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1911,6 +1911,44 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 	blk_queue_write_cache(q, vwc, vwc);
 }
 
+static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
+		struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
+{
+	unsigned int unit_min = 0, unit_max = 0, boundary = 0, max_bytes = 0;
+	struct request_queue *q = disk->queue;
+
+	if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+		if (le16_to_cpu(id->nabspf))
+			boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
+
+		/*
+		 * The boundary size just needs to be a multiple of unit_max
+		 * (and not necessarily a power-of-2), so this could be relaxed
+		 * in the block layer in future.
+		 * Furthermore, if needed, unit_max could be reduced so that
+		 * the boundary size was compliant - but we don't support that yet.
+		 */
+		if (!boundary || is_power_of_2(boundary)) {
+			max_bytes = atomic_bs;
+			unit_min = bs;
+			unit_max = rounddown_pow_of_two(atomic_bs);
+		} else {
+			dev_notice(ctrl->device, "Unsupported atomic write boundary (%d)\n",
+				boundary);
+			boundary = 0;
+		}
+	} else if (ctrl->subsys->awupf) {
+		max_bytes = atomic_bs;
+		unit_min = bs;
+		unit_max = rounddown_pow_of_two(atomic_bs);
+	}
+
+	blk_queue_atomic_write_max_bytes(q, max_bytes);
+	blk_queue_atomic_write_unit_min_sectors(q, unit_min >> SECTOR_SHIFT);
+	blk_queue_atomic_write_unit_max_sectors(q, unit_max >> SECTOR_SHIFT);
+	blk_queue_atomic_write_boundary_bytes(q, boundary);
+}
+
 static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
 		struct nvme_ns_head *head, struct nvme_id_ns *id)
 {
@@ -1941,6 +1979,8 @@ static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
 			atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
 		else
 			atomic_bs = (1 + ctrl->subsys->awupf) * bs;
+
+		nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);
 	}
 
 	if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (13 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 14/15] nvme: Support atomic writes John Garry
@ 2024-01-24 11:38 ` John Garry
  2024-01-25  0:52   ` Keith Busch
  2024-02-13  6:42   ` Christoph Hellwig
  2024-01-29  6:18 ` [PATCH v3 00/15] block atomic writes Christoph Hellwig
  15 siblings, 2 replies; 59+ messages in thread
From: John Garry @ 2024-01-24 11:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	Alan Adamson, John Garry

From: Alan Adamson <alan.adamson@oracle.com>

Unlike SCSI, NVMe has no dedicated atomic write command which could
error for a command exceeding the controller atomic write limits.

As an insurance policy against the block layer sending requests which
cannot be executed atomically, add a check in the queue path.
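
To illustrate the boundary arithmetic in that check, a standalone
userspace sketch with concrete numbers (the function here is made up for
illustration; the boundary is a power-of-2):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* true if [start, start + len) straddles a 'boundary'-sized window */
static bool crosses_boundary(uint32_t start, uint32_t len, uint32_t boundary)
{
	uint32_t mask = boundary - 1, imask = ~mask;
	uint32_t end = start + len;

	/* different windows, and the end does not sit exactly on an edge */
	return ((start & imask) != (end & imask)) && (end & mask);
}

int main(void)
{
	/* 8KiB at offset 12KiB crosses a 16KiB boundary: rejected */
	printf("%d\n", crosses_boundary(12 << 10, 8 << 10, 16 << 10));
	/* 8KiB at offset 16KiB stays within one window: allowed */
	printf("%d\n", crosses_boundary(16 << 10, 8 << 10, 16 << 10));
	return 0;
}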

Signed-off-by: Alan Adamson <alan.adamson@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 drivers/nvme/host/core.c | 35 +++++++++++++++++++++++++++++++++--
 drivers/nvme/host/nvme.h |  2 ++
 2 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5045c84f2516..6a34a5d92088 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 	if (req->cmd_flags & REQ_RAHEAD)
 		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
 
+	/*
+	 * Ensure that nothing has been sent which cannot be executed
+	 * atomically.
+	 */
+	if (req->cmd_flags & REQ_ATOMIC) {
+		struct nvme_ns_head *head = ns->head;
+		u32 boundary_bytes = head->atomic_boundary;
+
+		if (blk_rq_bytes(req) > ns->head->atomic_max)
+			return BLK_STS_IOERR;
+
+		if (boundary_bytes) {
+			u32 mask = boundary_bytes - 1, imask = ~mask;
+			u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
+			u32 end = start + blk_rq_bytes(req);
+
+			if (blk_rq_bytes(req) > boundary_bytes)
+				return BLK_STS_IOERR;
+
+			if (((start & imask) != (end & imask)) &&
+			    (end & mask)) {
+				return BLK_STS_IOERR;
+			}
+		}
+	}
+
 	cmnd->rw.opcode = op;
 	cmnd->rw.flags = 0;
 	cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
@@ -1912,7 +1938,8 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 }
 
 static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
-		struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
+		struct nvme_ctrl *ctrl, struct nvme_ns_head *head,
+		struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
 {
 	unsigned int unit_min = 0, unit_max = 0, boundary = 0, max_bytes = 0;
 	struct request_queue *q = disk->queue;
@@ -1943,6 +1970,9 @@ static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
 		unit_max = rounddown_pow_of_two(atomic_bs);
 	}
 
+	head->atomic_max = max_bytes;
+	head->atomic_boundary = boundary;
+
 	blk_queue_atomic_write_max_bytes(q, max_bytes);
 	blk_queue_atomic_write_unit_min_sectors(q, unit_min >> SECTOR_SHIFT);
 	blk_queue_atomic_write_unit_max_sectors(q, unit_max >> SECTOR_SHIFT);
@@ -1980,7 +2010,8 @@ static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
 		else
 			atomic_bs = (1 + ctrl->subsys->awupf) * bs;
 
-		nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);
+		nvme_update_atomic_write_disk_info(disk, ctrl, head, id,
+						   bs, atomic_bs);
 	}
 
 	if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 030c80818240..938060c85e6f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -476,6 +476,8 @@ struct nvme_ns_head {
 	struct device		cdev_device;
 
 	struct gendisk		*disk;
+	u32 atomic_max;
+	u32 atomic_boundary;
 #ifdef CONFIG_NVME_MULTIPATH
 	struct bio_list		requeue_list;
 	spinlock_t		requeue_lock;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-24 11:38 ` [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically John Garry
@ 2024-01-25  0:52   ` Keith Busch
  2024-01-25 11:28     ` John Garry
  2024-01-26  3:50     ` Chaitanya Kulkarni
  2024-02-13  6:42   ` Christoph Hellwig
  1 sibling, 2 replies; 59+ messages in thread
From: Keith Busch @ 2024-01-25  0:52 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, hch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme, linux-xfs,
	linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei, ojaswin,
	bvanassche, Alan Adamson

On Wed, Jan 24, 2024 at 11:38:41AM +0000, John Garry wrote:
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 5045c84f2516..6a34a5d92088 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>  	if (req->cmd_flags & REQ_RAHEAD)
>  		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>  
> +	/*
> +	 * Ensure that nothing has been sent which cannot be executed
> +	 * atomically.
> +	 */
> +	if (req->cmd_flags & REQ_ATOMIC) {
> +		struct nvme_ns_head *head = ns->head;
> +		u32 boundary_bytes = head->atomic_boundary;
> +
> +		if (blk_rq_bytes(req) > ns->head->atomic_max)
> +			return BLK_STS_IOERR;
> +
> +		if (boundary_bytes) {
> +			u32 mask = boundary_bytes - 1, imask = ~mask;
> +			u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
> +			u32 end = start + blk_rq_bytes(req);
> +
> +			if (blk_rq_bytes(req) > boundary_bytes)
> +				return BLK_STS_IOERR;
> +
> +			if (((start & imask) != (end & imask)) &&
> +			    (end & mask)) {
> +				return BLK_STS_IOERR;
> +			}
> +		}
> +	}

Aren't these new fields, atomic_max and atomic_boundary, duplicates of
the equivalent queue limits? Let's just use the queue limits instead.

And couldn't we generically validate the constraints are not violated in
submit_bio_noacct() instead of doing that in the low level driver? The
driver assumes all other requests are already sanity checked, so I don't
think we should change the responsibility for that just for this flag.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-25  0:52   ` Keith Busch
@ 2024-01-25 11:28     ` John Garry
  2024-01-29  6:20       ` Christoph Hellwig
  2024-01-26  3:50     ` Chaitanya Kulkarni
  1 sibling, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-25 11:28 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, hch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme, linux-xfs,
	linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei, ojaswin,
	bvanassche, Alan Adamson

On 25/01/2024 00:52, Keith Busch wrote:
> On Wed, Jan 24, 2024 at 11:38:41AM +0000, John Garry wrote:
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 5045c84f2516..6a34a5d92088 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>>   	if (req->cmd_flags & REQ_RAHEAD)
>>   		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>>   
>> +	/*
>> +	 * Ensure that nothing has been sent which cannot be executed
>> +	 * atomically.
>> +	 */
>> +	if (req->cmd_flags & REQ_ATOMIC) {
>> +		struct nvme_ns_head *head = ns->head;
>> +		u32 boundary_bytes = head->atomic_boundary;
>> +
>> +		if (blk_rq_bytes(req) > ns->head->atomic_max)
>> +			return BLK_STS_IOERR;
>> +
>> +		if (boundary_bytes) {
>> +			u32 mask = boundary_bytes - 1, imask = ~mask;
>> +			u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
>> +			u32 end = start + blk_rq_bytes(req);
>> +
>> +			if (blk_rq_bytes(req) > boundary_bytes)
>> +				return BLK_STS_IOERR;
>> +
>> +			if (((start & imask) != (end & imask)) &&
>> +			    (end & mask)) {
>> +				return BLK_STS_IOERR;
>> +			}
>> +		}
>> +	}
> Aren't these new fields, atomic_max and atomic_boundary, duplicates of
> the equivalent queue limits? Let's just use the queue limits instead.

I was making a copy of these limits in the driver out of an abundance of 
caution. I suppose the atomic_max and atomic_boundary values won't be 
modified in the block layer, so I could use them instead.

> 
> And couldn't we generically validate the constraints are not violated in
> submit_bio_noacct() instead of doing that in the low level driver? The
> driver assumes all other requests are already sanity checked, so I don't
> think we should change the responsibility for that just for this flag.

As a safety mechanism, we want to ensure the complete stack is not 
giving us out-of-limits atomic writes.

We have limits checks in XFS iomap and fops.c, but we would also want to 
ensure that the block layer is not doing anything it shouldn't be 
doing after submit_bio_noacct(), like merging atomic write BIOs which 
straddle a boundary or exceed atomic_max (if there were any merging).

The SCSI standard already has provision for error'ing an atomic write 
command which exceeds the target atomic write capabilities, while NVMe 
doesn't.

BTW, Christoph did mention that he would like to see this:
https://lore.kernel.org/linux-nvme/20231109153603.GA2188@lst.de/

Thanks,
John


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-25  0:52   ` Keith Busch
  2024-01-25 11:28     ` John Garry
@ 2024-01-26  3:50     ` Chaitanya Kulkarni
  1 sibling, 0 replies; 59+ messages in thread
From: Chaitanya Kulkarni @ 2024-01-26  3:50 UTC (permalink / raw)
  To: Keith Busch, John Garry
  Cc: axboe, hch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme, linux-xfs,
	linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei, ojaswin,
	bvanassche, Alan Adamson

On 1/24/2024 4:52 PM, Keith Busch wrote:
> On Wed, Jan 24, 2024 at 11:38:41AM +0000, John Garry wrote:
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 5045c84f2516..6a34a5d92088 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>>   	if (req->cmd_flags & REQ_RAHEAD)
>>   		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>>   
>> +	/*
>> +	 * Ensure that nothing has been sent which cannot be executed
>> +	 * atomically.
>> +	 */
>> +	if (req->cmd_flags & REQ_ATOMIC) {
>> +		struct nvme_ns_head *head = ns->head;
>> +		u32 boundary_bytes = head->atomic_boundary;
>> +
>> +		if (blk_rq_bytes(req) > ns->head->atomic_max)
>> +			return BLK_STS_IOERR;
>> +
>> +		if (boundary_bytes) {
>> +			u32 mask = boundary_bytes - 1, imask = ~mask;
>> +			u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
>> +			u32 end = start + blk_rq_bytes(req);
>> +
>> +			if (blk_rq_bytes(req) > boundary_bytes)
>> +				return BLK_STS_IOERR;
>> +
>> +			if (((start & imask) != (end & imask)) &&
>> +			    (end & mask)) {
>> +				return BLK_STS_IOERR;
>> +			}
>> +		}
>> +	}
> 
> Aren't these new fields, atomic_max and atomic_boundary, duplicates of
> the equivalent queue limits? Let's just use the queue limits instead.
> 
> And couldn't we generically validate the constraints are not violated in
> submit_bio_noacct() instead of doing that in the low level driver? The
> driver assumes all other requests are already sanity checked, so I don't
> think we should change the responsibility for that just for this flag.
> 

Does it make sense to move the above code to a helper? Perhaps inline?
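
Something along these lines, perhaps (an illustrative sketch only - the
helper name is made up, and the logic just mirrors the quoted hunk):

	static inline blk_status_t nvme_validate_atomic_write(struct nvme_ns *ns,
			struct request *req)
	{
		struct nvme_ns_head *head = ns->head;
		u32 boundary_bytes = head->atomic_boundary;

		if (blk_rq_bytes(req) > head->atomic_max)
			return BLK_STS_IOERR;

		if (boundary_bytes) {
			u32 mask = boundary_bytes - 1, imask = ~mask;
			u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
			u32 end = start + blk_rq_bytes(req);

			if (blk_rq_bytes(req) > boundary_bytes)
				return BLK_STS_IOERR;

			if (((start & imask) != (end & imask)) && (end & mask))
				return BLK_STS_IOERR;
		}
		return BLK_STS_OK;
	}

with the nvme_setup_rw() hunk reduced to:

	if (req->cmd_flags & REQ_ATOMIC) {
		blk_status_t status = nvme_validate_atomic_write(ns, req);

		if (status)
			return status;
	}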

-ck



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 00/15] block atomic writes
  2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
                   ` (14 preceding siblings ...)
  2024-01-24 11:38 ` [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically John Garry
@ 2024-01-29  6:18 ` Christoph Hellwig
  2024-01-29  9:17   ` John Garry
  2024-02-06 18:44   ` John Garry
  15 siblings, 2 replies; 59+ messages in thread
From: Christoph Hellwig @ 2024-01-29  6:18 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

Do you have a git tree with all patches somewhere?


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-25 11:28     ` John Garry
@ 2024-01-29  6:20       ` Christoph Hellwig
  2024-01-29  9:36         ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-01-29  6:20 UTC (permalink / raw)
  To: John Garry
  Cc: Keith Busch, axboe, hch, sagi, jejb, martin.petersen, djwong,
	viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi,
	ming.lei, ojaswin, bvanassche, Alan Adamson

On Thu, Jan 25, 2024 at 11:28:22AM +0000, John Garry wrote:
> We have limits checks in XFS iomap and fops.c, but we would also want to 
> ensure that the block layer is not doing anything it shouldn't be doing 
> after submit_bio_noacct(), like merging atomic write BIOs which straddle a 
> boundary or exceed atomic_max (if there were any merging).
>
> The SCSI standard already has provision for erroring an atomic write 
> command which exceeds the target atomic write capabilities, while NVMe 
> doesn't.

Can you get Oracle to propose this for NVMe?  It always helps if these
suggestions come from a large buyer of NVMe equipment.

> BTW, Christoph did mention that he would like to see this:
> https://lore.kernel.org/linux-nvme/20231109153603.GA2188@lst.de/

I can probably live with a sufficiently low-level block layer check.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 00/15] block atomic writes
  2024-01-29  6:18 ` [PATCH v3 00/15] block atomic writes Christoph Hellwig
@ 2024-01-29  9:17   ` John Garry
  2024-02-06 18:44   ` John Garry
  1 sibling, 0 replies; 59+ messages in thread
From: John Garry @ 2024-01-29  9:17 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On 29/01/2024 06:18, Christoph Hellwig wrote:
> Do you have a git tree with all patches somewhere?
>

They should apply cleanly on v6.8-rc1, but you can also check 
https://github.com/johnpgarry/linux/commits/atomic-writes-v6.8-v3/ for 
this series. The XFS series is on top and can be found at 
https://github.com/johnpgarry/linux/tree/atomic-writes-v6.8-v3-fs

Cheers,
John




^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-29  6:20       ` Christoph Hellwig
@ 2024-01-29  9:36         ` John Garry
  2024-01-29 14:39           ` Christoph Hellwig
  0 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-01-29  9:36 UTC (permalink / raw)
  To: Christoph Hellwig, martin.petersen
  Cc: Keith Busch, axboe, sagi, jejb, djwong, viro, brauner, dchinner,
	jack, linux-block, linux-kernel, linux-nvme, linux-xfs,
	linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei, ojaswin,
	bvanassche, Alan Adamson

On 29/01/2024 06:20, Christoph Hellwig wrote:
> On Thu, Jan 25, 2024 at 11:28:22AM +0000, John Garry wrote:
>> We have limits checks in XFS iomap and fops.c, but we would also want to
>> ensure that the block layer is not doing anything it shouldn't be doing
>> after submit_bio_noacct(), like merging atomic write BIOs which straddle a
>> boundary or exceed atomic_max (if there were any merging).
>>
>> The SCSI standard already has provision for erroring an atomic write
>> command which exceeds the target atomic write capabilities, while NVMe
>> doesn't.
> 
> Can you get Oracle to propose this for NVMe?  It always helps if these
> suggestions come from a large buyer of NVMe equipment.

I'll let Martin comment on that.

> 
>> BTW, Christoph did mention that he would like to see this:
>> https://lore.kernel.org/linux-nvme/20231109153603.GA2188@lst.de/
> 
> I can probably live with a sufficiently low-level block layer check.

That would probably be in blk_mq_dispatch_rq_list() for block drivers 
with .queue_rq set, but I would need to check for a good place for 
->queue_rqs. I can't imagine that we just want to inefficiently iterate 
over all rqs at the ->queue_rqs call point.
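
For the .queue_rq path the check could be of this rough shape inside the
dispatch loop (illustrative only - queue_atomic_write_max_bytes() assumed
from the earlier block patches):

	rq = list_first_entry(list, struct request, queuelist);
	if ((rq->cmd_flags & REQ_ATOMIC) &&
	    blk_rq_bytes(rq) > queue_atomic_write_max_bytes(rq->q)) {
		/* fail the request rather than issue a torn write */
		list_del_init(&rq->queuelist);
		blk_mq_end_request(rq, BLK_STS_IOERR);
	}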

However, considering the nature of this change, it is not a good sign 
that we/I need to check... I'd be more inclined to leave it as is.

Thanks,
John


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-29  9:36         ` John Garry
@ 2024-01-29 14:39           ` Christoph Hellwig
  0 siblings, 0 replies; 59+ messages in thread
From: Christoph Hellwig @ 2024-01-29 14:39 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, martin.petersen, Keith Busch, axboe, sagi,
	jejb, djwong, viro, brauner, dchinner, jack, linux-block,
	linux-kernel, linux-nvme, linux-xfs, linux-fsdevel, tytso,
	jbongio, linux-scsi, ming.lei, ojaswin, bvanassche, Alan Adamson

On Mon, Jan 29, 2024 at 09:36:38AM +0000, John Garry wrote:
> That would probably be in blk_mq_dispatch_rq_list() for block drivers with 
> .queue_rq set, but I would need to check for a good place for ->queue_rqs. 
> I can't imagine that we just want to inefficiently iterate over all rqs at 
> the ->queue_rqs call point.
>
> However, considering the nature of this change, it is not a good sign that 
> we/I need to check... I'd be more inclined to leave it as is.

Heh.  At least early on having the checks in one place in nvme makes
me feel easier for sure.  If we can easily use the block limits for
the checks we shouldn't have to keep duplicate values in nvme, though.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 00/15] block atomic writes
  2024-01-29  6:18 ` [PATCH v3 00/15] block atomic writes Christoph Hellwig
  2024-01-29  9:17   ` John Garry
@ 2024-02-06 18:44   ` John Garry
  2024-02-10 12:12     ` David Laight
  1 sibling, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-06 18:44 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On 29/01/2024 06:18, Christoph Hellwig wrote:
> Do you have a git tree with all patches somewhere?
> 

Hi Christoph,

Please let me know if you had a chance to look at this series or what 
your plans are.

BTW, about testing, it would be good to know your thoughts on power-fail 
testing.

I have done much testing to ensure that writes are properly issued to 
HW with no undesired splitting/merging, etc. for normal operation. I have 
also tested crashing the kernel, just to see if atomic writes get 
corrupted. This all looks ok.

About PF testing, I have an NVMe M.2 drive, but it supports just 4K 
nawupf. In addition, unfortunately the port on my machine does not allow 
me to power it off, so I need to unplug the power cable to test PF :(

We do also support atomic writes on our SCSI storage servers, but it is 
not practically possible to PF them.

For actual PF testing, I have been using fio in crc64 verify mode with a 
couple of tweaks to support atomic writes.
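
The invocation is roughly of this shape (illustrative only - the
atomic-write flag itself comes from those local fio tweaks):

	fio --name=pf-verify --filename=/dev/nvme0n1 --direct=1 \
	    --rw=randwrite --bs=16k --verify=crc64 --verify_fatal=1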

What I find from limited testing for XFS and bdev atomic writes on that 
NVMe card is that 4K writes are indeed PF-safe, but 16K (an arbitrarily 
large block size which I chose) is not. But I think all cards will be 
4K PF-safe, even if not declared.

Thanks,
John



^ permalink raw reply	[flat|nested] 59+ messages in thread

* RE: [PATCH v3 00/15] block atomic writes
  2024-02-06 18:44   ` John Garry
@ 2024-02-10 12:12     ` David Laight
  0 siblings, 0 replies; 59+ messages in thread
From: David Laight @ 2024-02-10 12:12 UTC (permalink / raw)
  To: 'John Garry', Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

Can someone add a : after block?

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 09/15] block: Add checks to merging of atomic writes
  2024-01-24 11:38 ` [PATCH v3 09/15] block: Add checks to merging of atomic writes John Garry
@ 2024-02-12 10:54   ` Nilay Shroff
  2024-02-12 11:20     ` [PATCH " John Garry
  2024-02-12 12:09     ` John Garry
  0 siblings, 2 replies; 59+ messages in thread
From: Nilay Shroff @ 2024-02-12 10:54 UTC (permalink / raw)
  To: john.g.garry
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro

>+static bool rq_straddles_atomic_write_boundary(struct request *rq,
>+					unsigned int front,
>+					unsigned int back)
>+{
>+	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>+	unsigned int mask, imask;
>+	loff_t start, end;
>+
>+	if (!boundary)
>+		return false;
>+
>+	start = rq->__sector << SECTOR_SHIFT;
>+	end = start + rq->__data_len;
>+
>+	start -= front;
>+	end += back;
>+
>+	/* We're longer than the boundary, so must be crossing it */
>+	if (end - start > boundary)
>+		return true;
>+
>+	mask = boundary - 1;
>+
>+	/* start/end are boundary-aligned, so cannot be crossing */
>+	if (!(start & mask) || !(end & mask))
>+		return false;
>+
>+	imask = ~mask;
>+
>+	/* Top bits are different, so crossed a boundary */
>+	if ((start & imask) != (end & imask))
>+		return true;
>+
>+	return false;
>+}
>+

Shall we ensure here that we don't cross the max limit of atomic writes 
supported by the device? It seems that if the boundary size is not 
advertised by the device (in fact, I have one NVMe drive which has 
boundary size zero, i.e. nabo/nabspf/nawupf are all zero but awupf is 
non-zero) then we (unconditionally) allow merging. However, it may be 
possible that post merging the total size of the request may exceed the 
atomic-write-unit-max-size supported by the device, and if that happens 
then most probably we would catch it very late in the driver code (if 
the device is NVMe). 

So is it a good idea to validate here whether we could potentially exceed 
the atomic-write-max-unit-size supported by the device before we allow 
merging? In case we exceed the atomic-write-max-unit-size post merge, 
then don't allow merging?
 
Thanks,
--Nilay


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 09/15] block: Add checks to merging of atomic writes
  2024-02-12 10:54   ` Nilay Shroff
@ 2024-02-12 11:20     ` John Garry
  2024-02-12 12:01       ` Nilay Shroff
  2024-02-12 12:09     ` John Garry
  1 sibling, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-12 11:20 UTC (permalink / raw)
  To: Nilay Shroff
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro


>> +
>> +	imask = ~mask;
>> +
>> +	/* Top bits are different, so crossed a boundary */
>> +	if ((start & imask) != (end & imask))
>> +		return true;
>> +
>> +	return false;
>> +}
>> +

I'm not sure what is going on with your mail client here.

> Shall we ensure here that we don't cross the max limit of atomic writes supported by
> the device? It seems that if the boundary size is not advertised by the device
> (in fact, I have one NVMe drive which has boundary size zero, i.e. nabo/nabspf/
> nawupf are all zero but awupf is non-zero) then we (unconditionally) allow
> merging. However, it may be possible that post merging the total size of the
> request may exceed the atomic-write-unit-max-size supported by the device, and
> if that happens then most probably we would catch it very late in
> the driver code (if the device is NVMe).
> 
> So is it a good idea to validate here whether we could potentially exceed
> the atomic-write-max-unit-size supported by the device before we allow merging?

Note that we have atomic_write_max_bytes and atomic_write_max_unit_size, 
and they are not always the same thing.

> In case we exceed the atomic-write-max-unit-size post merge, then don't allow
> merging?

We check this elsewhere. I just expanded the normal check for max 
request size to cover atomic writes.

Normally we check that a merged request would not exceed max_sectors 
value, and this max_sectors value can be got from 
blk_queue_get_max_sectors().

So if you check a function like ll_back_merge_fn(), we have a merging 
size check:

	if (blk_rq_sectors(req) + bio_sectors(bio) >
	    blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
		req_set_nomerge(req->q, req);
		return 0;
	}

And here the blk_rq_get_max_sectors() -> blk_queue_get_max_sectors() 
call now also supports atomic writes (see patch #7):

@@ -167,7 +167,16 @@ static inline unsigned get_max_io_size(struct bio *bio,
  {
...

+	if (bio->bi_opf & REQ_ATOMIC)
+		max_sectors = lim->atomic_write_max_sectors;
+	else
+		max_sectors = lim->max_sectors;

Note that we do not allow merging of atomic and non-atomic writes.
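
That check is roughly of this shape (an illustrative sketch - see the
merge patch in this series for the real code):

	/* never merge atomic and non-atomic writes */
	if ((req->cmd_flags & REQ_ATOMIC) != (bio->bi_opf & REQ_ATOMIC))
		return false;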

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 09/15] block: Add checks to merging of atomic writes
  2024-02-12 11:20     ` [PATCH " John Garry
@ 2024-02-12 12:01       ` Nilay Shroff
  0 siblings, 0 replies; 59+ messages in thread
From: Nilay Shroff @ 2024-02-12 12:01 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro



On 2/12/24 16:50, John Garry wrote:
> I'm not sure what is going on with your mail client here.

Sorry for the inconvenience, I will check the settings.

>>
>> So is it a good idea to validate here whether we could potentially exceed
>>
>> the atomic-write-max-unit-size supported by device before we allow merging?
> 
> Note that we have atomic_write_max_bytes and atomic_write_max_unit_size, and they are not always the same thing.
> 
>>
>> In case we exceed the atomic-write-max-unit-size post merge then don't allow
>>
>> merging?
> 
> We check this elsewhere. I just expanded the normal check for max request size to cover atomic writes.
> 
> Normally we check that a merged request would not exceed max_sectors value, and this max_sectors value can be got from blk_queue_get_max_sectors().
> 
> So if you check a function like ll_back_merge_fn(), we have a merging size check:
> 
>     if (blk_rq_sectors(req) + bio_sectors(bio) >
>         blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
>         req_set_nomerge(req->q, req);
>         return 0;
>     }
> 
> And here the blk_rq_get_max_sectors() -> blk_queue_get_max_sectors() call now also supports atomic writes (see patch #7):
OK got it. I think I have missed this part.

> 
> @@ -167,7 +167,16 @@ static inline unsigned get_max_io_size(struct bio *bio,
>  {
> ...
> 
> +    if (bio->bi_opf & REQ_ATOMIC)
> +        max_sectors = lim->atomic_write_max_sectors;
> +    else
> +        max_sectors = lim->max_sectors;
> 
> Note that we do not allow merging of atomic and non-atomic writes.
> 
Yeah 

Thanks,
--Nilay

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 09/15] block: Add checks to merging of atomic writes
  2024-02-12 10:54   ` Nilay Shroff
  2024-02-12 11:20     ` [PATCH " John Garry
@ 2024-02-12 12:09     ` John Garry
  2024-02-13  6:52       ` Nilay Shroff
  1 sibling, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-12 12:09 UTC (permalink / raw)
  To: Nilay Shroff
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro

On 12/02/2024 10:54, Nilay Shroff wrote:
> 
> Shall we ensure here that we don't cross the max limit of atomic writes supported by
> the device? It seems that if the boundary size is not advertised by the device
> (in fact, I have one NVMe drive which has boundary size zero, i.e. nabo/nabspf/
> nawupf are all zero but awupf is non-zero) then we (unconditionally) allow
> merging.

BTW, if you don't mind, can you share awupf value and device model? I 
have been on the search for NVMe devices which support atomic writes 
(with non-zero PF reported value). All I have is a M.2 card which has a 
4KB PF atomic write value.

But if this is private info, then ok.

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 02/15] block: Limit atomic writes according to bio and queue limits
  2024-01-24 11:38 ` [PATCH v3 02/15] block: Limit atomic writes according to bio and queue limits John Garry
@ 2024-02-13  4:33   ` Ritesh Harjani
  2024-02-13  8:05     ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Ritesh Harjani @ 2024-02-13  4:33 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche,
	John Garry

John Garry <john.g.garry@oracle.com> writes:

> We rely on the block layer always being able to send a bio of size
> atomic_write_unit_max without being required to split it due to request
> queue or other bio limits.
>
> A bio may contain min(BIO_MAX_VECS, limits->max_segments) vectors on the
> relevant submission paths for atomic writes and each vector contains at
> least a PAGE_SIZE, apart from the first and last vectors.
>
> Signed-off-by: John Garry <john.g.garry@oracle.com>
> ---
>  block/blk-settings.c | 28 ++++++++++++++++++++++++++--
>  1 file changed, 26 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 11c0361c2313..176f26374abc 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -108,18 +108,42 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
>  }
>  EXPORT_SYMBOL(blk_queue_bounce_limit);
>  
> +
> +/*
> + * Returns max guaranteed sectors which we can fit in a bio. For convenience of
> + * users, rounddown_pow_of_two() the return value.
> + *
> + * We always assume that we can fit in at least PAGE_SIZE in a segment, apart
> + * from first and last segments.
> + */

It took some time to really understand what is special about the first
and the last vector. Looks like what we are discussing here is the
I/O covering a partial page, i.e. the starting offset and the end
boundary might not cover the whole page. 

It still isn't very clear why we need to consider
queue_logical_block_size(q) and not PAGE_SIZE for those 2 vectors
(1. given that atomic write starting offset and length have alignment
restrictions? 
2. Or maybe there are devices with starting offset alignment
as low as the sector size?)

But either way, my point is it would be good to have a comment above
this function to help understand what is going on here. 


> +static unsigned int blk_queue_max_guaranteed_bio_sectors(
> +					struct queue_limits *limits,
> +					struct request_queue *q)
> +{
> +	unsigned int max_segments = min(BIO_MAX_VECS, limits->max_segments);
> +	unsigned int length;
> +
> +	length = min(max_segments, 2) * queue_logical_block_size(q);
> +	if (max_segments > 2)
> +		length += (max_segments - 2) * PAGE_SIZE;
> +
> +	return rounddown_pow_of_two(length >> SECTOR_SHIFT);
> +}
> +
>  static void blk_atomic_writes_update_limits(struct request_queue *q)
>  {
>  	struct queue_limits *limits = &q->limits;
>  	unsigned int max_hw_sectors =
>  		rounddown_pow_of_two(limits->max_hw_sectors);
> +	unsigned int unit_limit = min(max_hw_sectors,
> +		blk_queue_max_guaranteed_bio_sectors(limits, q));
>  
>  	limits->atomic_write_max_sectors =
>  		min(limits->atomic_write_hw_max_sectors, max_hw_sectors);
>  	limits->atomic_write_unit_min_sectors =
> -		min(limits->atomic_write_hw_unit_min_sectors, max_hw_sectors);
> +		min(limits->atomic_write_hw_unit_min_sectors, unit_limit);
>  	limits->atomic_write_unit_max_sectors =
> -		min(limits->atomic_write_hw_unit_max_sectors, max_hw_sectors);
> +		min(limits->atomic_write_hw_unit_max_sectors, unit_limit);
>  }
>  
>  /**
> -- 
> 2.31.1

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 01/15] block: Add atomic write operations to request_queue limits
  2024-01-24 11:38 ` [PATCH v3 01/15] block: Add atomic write operations to request_queue limits John Garry
@ 2024-02-13  6:22   ` Christoph Hellwig
  0 siblings, 0 replies; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-13  6:22 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche, Himanshu Madhani

On Wed, Jan 24, 2024 at 11:38:27AM +0000, John Garry wrote:
> From: Himanshu Madhani <himanshu.madhani@oracle.com>
> 
> Add the following limits:
> - atomic_write_boundary_bytes
> - atomic_write_max_bytes
> - atomic_write_unit_max_bytes
> - atomic_write_unit_min_bytes
> 
> All atomic write limits are initialised to 0 to indicate no atomic write
> support. Stacked devices are just not supported either for now.

Can you merge this with the next patch?  Splitting adding limits vs
actually using them seems rather confusing.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 06/15] block: Pass blk_queue_get_max_sectors() a request pointer
  2024-01-24 11:38 ` [PATCH v3 06/15] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
@ 2024-02-13  6:23   ` Christoph Hellwig
  2024-02-13  8:15     ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-13  6:23 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

Can you move pure prep patches like this one to the beginning of the
series?

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 05/15] block: Add REQ_ATOMIC flag
  2024-01-24 11:38 ` [PATCH v3 05/15] block: Add REQ_ATOMIC flag John Garry
@ 2024-02-13  6:24   ` Christoph Hellwig
  0 siblings, 0 replies; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-13  6:24 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche, Himanshu Madhani

On Wed, Jan 24, 2024 at 11:38:31AM +0000, John Garry wrote:
> From: Himanshu Madhani <himanshu.madhani@oracle.com>
> 
> Add flag REQ_ATOMIC, meaning an atomic operation. This should only be
> used in conjunction with REQ_OP_WRITE.
> 
> We will not add a special "request atomic write" operation, so as to
> avoid maintenance effort for an operation which is almost the same as
> REQ_OP_WRITE.

I'd also merge this into the main atomic writes block layer patch..


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors
  2024-01-24 11:38 ` [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors John Garry
@ 2024-02-13  6:26   ` Christoph Hellwig
  2024-02-13  8:15     ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-13  6:26 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On Wed, Jan 24, 2024 at 11:38:33AM +0000, John Garry wrote:
> Currently an IO size is limited to the request_queue limits max_sectors.
> Limit the size for an atomic write to queue limit atomic_write_max_sectors
> value.

Same here.  Please have one patch that actually adds useful atomic write
support to the block layer.  That doesn't include fs stuff like
IOCB_ATOMIC or the block file operation support, but to have a reviewable
chunk I'd really like to see the full block-layer support for the limits,
enforcing them, the merge prevention in a single commit with an extensive
commit log explaining the semantics.  That allows a useful review without
looking at the full tree, and also will help with people reading history
in the future.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 11/15] scsi: sd: Support reading atomic write properties from block limits VPD
  2024-01-24 11:38 ` [PATCH v3 11/15] scsi: sd: Support reading atomic write properties from block limits VPD John Garry
@ 2024-02-13  6:31   ` Christoph Hellwig
  2024-02-13  8:16     ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-13  6:31 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On Wed, Jan 24, 2024 at 11:38:37AM +0000, John Garry wrote:
> Also update block layer request queue sysfs properties.
> 
> See sbc4r22 section 6.6.4 - Block limits VPD page.

Not the most useful commit log..

Can you merge this with the next patch and write a detailed commit
log how atomic writes map to SBC features and what limitations
Linux atomic writes on Linux have?


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 14/15] nvme: Support atomic writes
  2024-01-24 11:38 ` [PATCH v3 14/15] nvme: Support atomic writes John Garry
@ 2024-02-13  6:42   ` Christoph Hellwig
  2024-02-13 14:21     ` John Garry
  2024-02-14 12:27   ` Nilay Shroff
  1 sibling, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-13  6:42 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche, Alan Adamson

On Wed, Jan 24, 2024 at 11:38:40AM +0000, John Garry wrote:
> From: Alan Adamson <alan.adamson@oracle.com>
> 
> Support reading atomic write registers to fill in request_queue
> properties.
> 
> Use following method to calculate limits:
> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
> atomic_write_unit_min = logical_block_size
> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
> atomic_write_boundary = NABSPF

Can you expand this to actually be a real commit log with full
sentences, expanding the NVME field name acronyms and reference
the relevant Sections and Figures in a specific version of the
NVMe specification?

Also some implementation comments:

NVMe has a particularly nasty NABO field in Identify Namespace, which
offsets the boundary. We probably need to reject atomic writes or
severely limit them if this field is set.

Please also read through TP4098(a) and look at the MAM field.  As far
as I can tell the patch as-is assumes it always is set to 1.

> +static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
> +		struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)

Please avoid the overly long line here.

> +		nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);

.. and here.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-01-24 11:38 ` [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically John Garry
  2024-01-25  0:52   ` Keith Busch
@ 2024-02-13  6:42   ` Christoph Hellwig
  2024-02-13 14:07     ` John Garry
  1 sibling, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-13  6:42 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche, Alan Adamson

If we don't end up doing the checks in the block layer:

> +	/*
> +	 * Ensure that nothing has been sent which cannot be executed
> +	 * atomically.
> +	 */
> +	if (req->cmd_flags & REQ_ATOMIC) {
> +		struct nvme_ns_head *head = ns->head;
> +		u32 boundary_bytes = head->atomic_boundary;

... please split the checks into a helper.  And merge them into the
previous patch.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 09/15] block: Add checks to merging of atomic writes
  2024-02-12 12:09     ` John Garry
@ 2024-02-13  6:52       ` Nilay Shroff
  0 siblings, 0 replies; 59+ messages in thread
From: Nilay Shroff @ 2024-02-13  6:52 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro



On 2/12/24 17:39, John Garry wrote:
> On 12/02/2024 10:54, Nilay Shroff wrote:
>>
>> Shall we ensure here that we don't cross max limit of atomic write supported by
>>
>> device? It seems that if the boundary size is not advertized by the device
>>
>> (in fact, I have one NVMe drive which has boundary size zero i.e. nabo/nabspf/
>>
>> nawupf are all zero but awupf is non-zero) then we (unconditionally) allow
>>
>> merging.
> 
> BTW, if you don't mind, can you share awupf value and device model? I have been on the search for NVMe devices which support atomic writes (with non-zero PF reported value). All I have is a M.2 card which has a 4KB PF atomic write value.
> 
> But if this is private info, then ok.
> 
> Thanks,
> John

Yeah sure. Below are the details about NVMe:

# lspci 
0040:01:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0025 (rev 01)

# nvme id-ctrl /dev/nvme0 -H 
NVME Identify Controller:
vid       : 0x1e0f
ssvid     : 0x1014
sn        : Z130A00LTGZ8        
mn        : 800GB NVMe Gen4 U.2 SSD                 
fr        : REV.C9S2
[...]
awun      : 65535
awupf     : 63
[...]

# nvme id-ns /dev/nvme0n1 -H 
NVME Identify Namespace 1:
nsze    : 0x18ffff3
ncap    : 0x18ffff3
nuse    : 0
nsfeat  : 0x14
  [4:4] : 0x1	NPWG, NPWA, NPDG, NPDA, and NOWS are Supported
  [3:3] : 0	NGUID and EUI64 fields if non-zero, Reused
  [2:2] : 0x1	Deallocated or Unwritten Logical Block error Supported
  [1:1] : 0	Namespace uses AWUN, AWUPF, and ACWU
  [0:0] : 0	Thin Provisioning Not Supported

[...]

nawun   : 0
nawupf  : 0
nacwu   : 0
nabsn   : 0
nabo    : 0
nabspf  : 0

[...]

LBA Format  0 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best (in use)
LBA Format  1 : Metadata Size: 8   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  2 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0 Best 
LBA Format  3 : Metadata Size: 8   bytes - Data Size: 512 bytes - Relative Performance: 0 Best 
LBA Format  4 : Metadata Size: 64  bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  5 : Metadata Size: 8   bytes - Data Size: 512 bytes - Relative Performance: 0 Best 


As shown above, I am using a KIOXIA NVMe drive. This NVMe has one namespace created (nvme0n1). 
The nsfeat field reports that this namespace uses AWUN, AWUPF, and ACWU. The awupf for this NVMe is 63. 
As awupf is a 0's-based value, it's actually 64. The configured LBA format for the namespace (in use) has a 
4096-byte data size, and so that means that this NVMe supports writing 262144 (64*4096) bytes of data 
atomically across a power failure. Further, please note that on this NVMe we have nawupf/nabspf/nabo all set 
to zero. 

Let me know if you need any other details. And BTW, if you want, I could help you with anything 
you'd like to test on this NVMe. 

Thanks,
--Nilay

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 02/15] block: Limit atomic writes according to bio and queue limits
  2024-02-13  4:33   ` Ritesh Harjani
@ 2024-02-13  8:05     ` John Garry
  0 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-02-13  8:05 UTC (permalink / raw)
  To: Ritesh Harjani (IBM),
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-xfs, linux-fsdevel,
	tytso, jbongio, linux-scsi, ming.lei, ojaswin, bvanassche

On 13/02/2024 04:33, Ritesh Harjani (IBM) wrote:
>> diff --git a/block/blk-settings.c b/block/blk-settings.c
>> index 11c0361c2313..176f26374abc 100644
>> --- a/block/blk-settings.c
>> +++ b/block/blk-settings.c
>> @@ -108,18 +108,42 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
>>   }
>>   EXPORT_SYMBOL(blk_queue_bounce_limit);
>>   
>> +
>> +/*
>> + * Returns max guaranteed sectors which we can fit in a bio. For convenience of
>> + * users, rounddown_pow_of_two() the return value.
>> + *
>> + * We always assume that we can fit in at least PAGE_SIZE in a segment, apart
>> + * from first and last segments.
>> + */
> It took sometime to really understand what is special about the first
> and the last vector. Looks like what we are discussing here is the
> I/O covering a partial page, i.e. the starting offset and the end
> boundary might not cover the whole page.
> 
> It still isn't very clear that why do we need to consider
> queue_logical_block_size(q) and not the PAGE_SIZE for those 2 vectors
> (1. given atomic writes starting offset and length has alignment
> restrictions?
We are using the direct IO alignment restriction, that is, the iovecs 
need to be bdev logical block size aligned - please see 
bdev_iter_is_aligned().
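
For reference, that helper is roughly (a simplified sketch of the
mainline block/fops.c code):

	static bool bdev_iter_is_aligned(struct block_device *bdev,
					 struct iov_iter *iter)
	{
		return iov_iter_is_aligned(iter, bdev_dma_alignment(bdev),
					   bdev_logical_block_size(bdev) - 1);
	}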

We are also supporting only a single iovec currently. As such, the middle 
bvecs will always contain at least PAGE_SIZE, and the first/last must 
contain at least LBS of data.
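
As a worked example (illustrative numbers only): with max_segments = 128,
512-byte logical blocks and 4K pages, the guaranteed length is
2 * 512 + 126 * 4096 = 517120 bytes, i.e. 1010 sectors, which
rounddown_pow_of_two() brings down to 512 sectors (256K) for the unit
limit.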

Note that we will want to support atomic writes in future for buffered 
IO, but it would be sensible to keep this direct IO alignment 
restriction there as well.

Let me know if this needs to be made clearer in the code/commit message.

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors
  2024-02-13  6:26   ` Christoph Hellwig
@ 2024-02-13  8:15     ` John Garry
  2024-02-14  7:26       ` Christoph Hellwig
  0 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-13  8:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On 13/02/2024 06:26, Christoph Hellwig wrote:
> On Wed, Jan 24, 2024 at 11:38:33AM +0000, John Garry wrote:
>> Currently an IO size is limited to the request_queue limits max_sectors.
>> Limit the size for an atomic write to queue limit atomic_write_max_sectors
>> value.
> 
> Same here.  Please have one patch that actually adds useful atomic write
> support to the block layer.  That doesn't include fs stuff like
> IOCB_ATOMIC or the block file operation support, but to have a reviewable
> chunk I'd really like to see the full block-layer support for the limits,
> enforcing them, the merge prevention in a single commit with an extensive
> commit log explaining the semantics.  That allows a useful review without
> looking at the full tree, and also will help with people reading history
> in the future.

Fine, so essentially merge 1, 2, 5, 7, 8, 9

BTW, I was also going to add this function which ensures that partitions 
are properly aligned:

bool bdev_can_atomic_write(struct block_device *bdev)
{
	struct request_queue *bd_queue = bdev->bd_queue;
	struct queue_limits *limits = &bd_queue->limits;

	if (!limits->atomic_write_unit_min_sectors)
		return false;

	if (bdev_is_partition(bdev)) {
		unsigned int granularity =
			max(limits->atomic_write_unit_min_sectors,
			    limits->atomic_write_hw_boundary_sectors);

		if (bdev->bd_start_sect % granularity)
			return false;
	}

	return true;
}

I'm not sure if that would be better in the fops.c patch (or not added).

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 06/15] block: Pass blk_queue_get_max_sectors() a request pointer
  2024-02-13  6:23   ` Christoph Hellwig
@ 2024-02-13  8:15     ` John Garry
  0 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-02-13  8:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On 13/02/2024 06:23, Christoph Hellwig wrote:
> Looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> Can you move pure prep patches like this one to the beginning of the
> series?

ok

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 11/15] scsi: sd: Support reading atomic write properties from block limits VPD
  2024-02-13  6:31   ` Christoph Hellwig
@ 2024-02-13  8:16     ` John Garry
  0 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-02-13  8:16 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On 13/02/2024 06:31, Christoph Hellwig wrote:
> On Wed, Jan 24, 2024 at 11:38:37AM +0000, John Garry wrote:
>> Also update block layer request queue sysfs properties.
>>
>> See sbc4r22 section 6.6.4 - Block limits VPD page.
> Not the most useful commit log..
> 
> Can you merge this with the next patch and write a detailed commit
> log how atomic writes map to SBC features and what limitations
> Linux atomic writes on Linux have?


Will do

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 10/15] block: Add fops atomic write support
  2024-01-24 11:38 ` [PATCH v3 10/15] block: Add fops atomic write support John Garry
@ 2024-02-13  9:36   ` Nilay Shroff
  2024-02-13  9:58     ` [PATCH " John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Nilay Shroff @ 2024-02-13  9:36 UTC (permalink / raw)
  To: john.g.garry
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro

>+static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
>+				      struct iov_iter *iter)
>+{
>+	struct request_queue *q = bdev_get_queue(bdev);
>+	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
>+	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
>+
>+	if (!iter_is_ubuf(iter))
>+		return false;
>+	if (iov_iter_count(iter) & (min_bytes - 1))
>+		return false;
>+	if (!is_power_of_2(iov_iter_count(iter)))
>+		return false;
>+	if (pos & (iov_iter_count(iter) - 1))
>+		return false;
>+	if (iov_iter_count(iter) > max_bytes)
>+		return false;
>+	return true;
>+}

Here do we need to also validate that the IO doesn't straddle 
the atomic boundary limit (if it's non-zero)? We do check that the IO 
doesn't straddle the atomic boundary limit, but that happens very 
late in the IO code path, either during blk-merge or in the NVMe driver 
code.

Thanks,
--Nilay


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 10/15] block: Add fops atomic write support
  2024-02-13  9:36   ` Nilay Shroff
@ 2024-02-13  9:58     ` John Garry
  2024-02-13 11:08       ` Nilay Shroff
  0 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-13  9:58 UTC (permalink / raw)
  To: Nilay Shroff
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro

On 13/02/2024 09:36, Nilay Shroff wrote:
>> +static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
>> +				      struct iov_iter *iter)
>> +{
>> +	struct request_queue *q = bdev_get_queue(bdev);
>> +	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
>> +	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
>> +
>> +	if (!iter_is_ubuf(iter))
>> +		return false;
>> +	if (iov_iter_count(iter) & (min_bytes - 1))
>> +		return false;
>> +	if (!is_power_of_2(iov_iter_count(iter)))
>> +		return false;
>> +	if (pos & (iov_iter_count(iter) - 1))
>> +		return false;
>> +	if (iov_iter_count(iter) > max_bytes)
>> +		return false;
>> +	return true;
>> +}
> 
> Here do we need to also validate that the IO doesn't straddle
> the atomic boundary limit (if it's non-zero)? We do check that the IO
> doesn't straddle the atomic boundary limit, but that happens very
> late in the IO code path, either during blk-merge or in the NVMe driver
> code.

It's relied upon that atomic_write_unit_max is <= atomic_write_boundary 
and both are a power-of-2. Please see the NVMe patch, in which this is 
checked. Indeed, it would not make sense if atomic_write_unit_max > 
atomic_write_boundary (when non-zero).

So if the write is naturally aligned and its size is <= 
atomic_write_unit_max, then it cannot be straddling a boundary.
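
To spell out the reasoning (a standalone sketch, not code from the
series):

	/* true if [pos, pos + len) crosses a multiple of boundary (power-of-2) */
	static bool crosses_boundary(u64 pos, u32 len, u32 boundary)
	{
		return (pos & ~(u64)(boundary - 1)) !=
		       ((pos + len - 1) & ~(u64)(boundary - 1));
	}

With pos naturally aligned (pos % len == 0), len <= atomic_write_unit_max 
<= boundary, and everything a power-of-2, pos % boundary can be at most 
boundary - len, so pos + len - 1 never reaches the next boundary multiple 
and crosses_boundary() is always false.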

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 10/15] block: Add fops atomic write support
  2024-02-13  9:58     ` [PATCH " John Garry
@ 2024-02-13 11:08       ` Nilay Shroff
  2024-02-13 11:52         ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Nilay Shroff @ 2024-02-13 11:08 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro



On 2/13/24 15:28, John Garry wrote:
> On 13/02/2024 09:36, Nilay Shroff wrote:
>>> +static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
>>> +                      struct iov_iter *iter)
>>> +{
>>> +    struct request_queue *q = bdev_get_queue(bdev);
>>> +    unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
>>> +    unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
>>> +
>>> +    if (!iter_is_ubuf(iter))
>>> +        return false;
>>> +    if (iov_iter_count(iter) & (min_bytes - 1))
>>> +        return false;
>>> +    if (!is_power_of_2(iov_iter_count(iter)))
>>> +        return false;
>>> +    if (pos & (iov_iter_count(iter) - 1))
>>> +        return false;
>>> +    if (iov_iter_count(iter) > max_bytes)
>>> +        return false;
>>> +    return true;
>>> +}
>>
>> Here do we need to also validate that the IO doesn't straddle
>> the atomic boundary limit (if it's non-zero)? We do check that the IO
>> doesn't straddle the atomic boundary limit, but that happens very
>> late in the IO code path, either during blk-merge or in the NVMe driver
>> code.
> 
> It's relied upon that atomic_write_unit_max is <= atomic_write_boundary and both are a power-of-2. Please see the NVMe patch, in which this is checked. Indeed, it would not make sense if atomic_write_unit_max > atomic_write_boundary (when non-zero).
> 
> So if the write is naturally aligned and its size is <= atomic_write_unit_max, then it cannot be straddling a boundary.

Ok, fine, but in case the device doesn't support a namespace atomic boundary size (i.e. NABSPF is zero), then do we still need 
to restrict IO which crosses the atomic boundary? 

I am quoting this from the NVMe spec (Command Set Specification, revision 1.0a, Section 2.1.4.3): 
"To ensure backwards compatibility, the values reported for AWUN, AWUPF, and ACWU shall be set such that 
they are supported even if a write crosses an atomic boundary. If a controller does not 
guarantee atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and ACWU to 0h (1 LBA)." 

Thanks,
--Nilay


  


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 10/15] block: Add fops atomic write support
  2024-02-13 11:08       ` Nilay Shroff
@ 2024-02-13 11:52         ` John Garry
  2024-02-14  9:38           ` Nilay Shroff
  0 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-13 11:52 UTC (permalink / raw)
  To: Nilay Shroff
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro

On 13/02/2024 11:08, Nilay Shroff wrote:
>> It's relied upon that atomic_write_unit_max is <= atomic_write_boundary and both are a power-of-2. Please see the NVMe patch, in which this is checked. Indeed, it would not make sense if atomic_write_unit_max > atomic_write_boundary (when non-zero).
>>
>> So if the write is naturally aligned and its size is <= atomic_write_unit_max, then it cannot be straddling a boundary.
> Ok, fine, but in case the device doesn't support a namespace atomic boundary size (i.e. NABSPF is zero), then do we still need
> to restrict IO which crosses the atomic boundary?

Is there a boundary if NABSPF is zero?

> 
> I am quoting this from the NVMe spec (Command Set Specification, revision 1.0a, Section 2.1.4.3):
> "To ensure backwards compatibility, the values reported for AWUN, AWUPF, and ACWU shall be set such that
> they are supported even if a write crosses an atomic boundary. If a controller does not
> guarantee atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and ACWU to 0h (1 LBA)."

How about responding to the NVMe patch in this series, asking this question?

I have my idea on how the boundary is determined, but I think that the 
spec could be made clearer.

Thanks,
John




^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically
  2024-02-13  6:42   ` Christoph Hellwig
@ 2024-02-13 14:07     ` John Garry
  0 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-02-13 14:07 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche, Alan Adamson

On 13/02/2024 06:42, Christoph Hellwig wrote:
> If we don't end up doing the checks in the block layer:
> 
>> +	/*
>> +	 * Ensure that nothing has been sent which cannot be executed
>> +	 * atomically.
>> +	 */
>> +	if (req->cmd_flags & REQ_ATOMIC) {
>> +		struct nvme_ns_head *head = ns->head;
>> +		u32 boundary_bytes = head->atomic_boundary;
> 
> ... please split the checks into a helper. 

ok

> And merge them into the
> previous patch.

Fine, if you prefer that.

> 


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 14/15] nvme: Support atomic writes
  2024-02-13  6:42   ` Christoph Hellwig
@ 2024-02-13 14:21     ` John Garry
  2024-02-14  8:00       ` Christoph Hellwig
  0 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-13 14:21 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche, Alan Adamson

On 13/02/2024 06:42, Christoph Hellwig wrote:
> On Wed, Jan 24, 2024 at 11:38:40AM +0000, John Garry wrote:
>> From: Alan Adamson <alan.adamson@oracle.com>
>>
>> Support reading atomic write registers to fill in request_queue
>> properties.
>>
>> Use following method to calculate limits:
>> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
>> atomic_write_unit_min = logical_block_size
>> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
>> atomic_write_boundary = NABSPF
> 
> Can you expand this to actually be a real commit log with full
> sentences, expanding the NVME field name acronyms and reference
> the relevant Sections and Figures in a specific version of the
> NVMe specification?

ok

> 
> Also some implementation comments:
> 
> NVMe has a particularly nasty NABO field in Identify Namespace, which
> offsets the boundary. We probably need to reject atomic writes or
> severly limit them if this field is set.

ok, and initially we'll just not support atomic writes for NABO != 0

> 
> Please also read through TP4098(a) and look at the MAM field.

It's not public, AFAIK.

And I don't think a feature which allows us to straddle boundaries is 
too interesting really.

>  As far
> as I can tell the patch as-is assumes it always is set to 1.
> 
>> +static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
>> +		struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
> 
> Please avoid the overly long line here.
> 
>> +		nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);
> 
> .. and here.

ok, but I think that this will get shorter anyway.

> 

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors
  2024-02-13  8:15     ` John Garry
@ 2024-02-14  7:26       ` Christoph Hellwig
  2024-02-14  9:24         ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-14  7:26 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, axboe, kbusch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi,
	ming.lei, ojaswin, bvanassche

On Tue, Feb 13, 2024 at 08:15:08AM +0000, John Garry wrote:
> I'm not sure if that would be better in the fops.c patch (or not added)

We'll need the partition check.  If you want to get fancy you could
also add the atomic boundary offset thing there, as partitions would
make devices with that "feature" useful again, although I'd prefer to
only deal with that if the need actually arises.

The right place is in the core infrastructure; the bdev patch is just
a user of the block infrastructure.  bdevs really are just another
file system and a consumer of the block layer APIs.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 14/15] nvme: Support atomic writes
  2024-02-13 14:21     ` John Garry
@ 2024-02-14  8:00       ` Christoph Hellwig
  2024-02-14  9:21         ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2024-02-14  8:00 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, axboe, kbusch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi,
	ming.lei, ojaswin, bvanassche, Alan Adamson

On Tue, Feb 13, 2024 at 02:21:25PM +0000, John Garry wrote:
>> Please also read through TP4098(a) and look at the MAM field.
>
> It's not public, AFAIK.

Oracle is a member, so you can take a look at it easily.  If we need
it for Linux I can also work with the NVMe Board to release it.

> And I don't think a feature which allows us to straddle boundaries is too 
> interesting really.

Without MAM=1 NVMe can't support atomic writes larger than
AWUPF/NAWUPF, which is typically set to the indirection table
size.  You're leaving a lot of potential unused with that.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 14/15] nvme: Support atomic writes
  2024-02-14  8:00       ` Christoph Hellwig
@ 2024-02-14  9:21         ` John Garry
  0 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-02-14  9:21 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche, Alan Adamson

On 14/02/2024 08:00, Christoph Hellwig wrote:
> On Tue, Feb 13, 2024 at 02:21:25PM +0000, John Garry wrote:
>>> Please also read through TP4098(a) and look at the MAM field.
>>
>> It's not public, AFAIK.
> 
> Oracle is a member, so you can take a look at it easily.  If we need
> it for Linux I can also work with the NVMe Board to release it.

What I really meant was that I prefer not to quote private TPs in the 
public domain. I have the doc.

> 
>> And I don't think a feature which allows us to straddle boundaries is too
>> interesting really.
> 
> Without MAM=1 NVMe can't support atomic writes larger than
> AWUPF/NAWUPF, which is typically set to the indirection table
> size.  You're leaving a lot of potential unused with that.
> 

atomic_write_unit_max would always be dictated by the min of the boundary 
and AWUPF/NAWUPF. With MAM=1, we could increase atomic_write_max_bytes - 
but does it really help us? I mean, atomic_write_max_bytes only comes into 
play for merging - do we get much merging for NVMe transports? I am not 
sure.

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors
  2024-02-14  7:26       ` Christoph Hellwig
@ 2024-02-14  9:24         ` John Garry
  0 siblings, 0 replies; 59+ messages in thread
From: John Garry @ 2024-02-14  9:24 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-xfs, linux-fsdevel, tytso, jbongio, linux-scsi, ming.lei,
	ojaswin, bvanassche

On 14/02/2024 07:26, Christoph Hellwig wrote:
> On Tue, Feb 13, 2024 at 08:15:08AM +0000, John Garry wrote:
>> I'm not sure if that would be better in the fops.c patch (or not added)
> 
> We'll need the partition check.  If you want to get fancy you could
> also add the atomic boundary offset thing there, as partitions would
> make devices with that "feature" useful again, although I'd prefer to
> only deal with that if the need actually arises.

Yeah, that is my general philosophy about possible weird HW.

> 
> The right place is in the core infrastructure; the bdev patch is just
> a user of the block infrastructure.  bdevs really are just another
> file system and a consumer of the block layer APIs.

ok, I'll try to find a good place for it.

Thanks,
John

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 10/15] block: Add fops atomic write support
  2024-02-13 11:52         ` John Garry
@ 2024-02-14  9:38           ` Nilay Shroff
  2024-02-14 11:29             ` John Garry
  0 siblings, 1 reply; 59+ messages in thread
From: Nilay Shroff @ 2024-02-14  9:38 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro



On 2/13/24 17:22, John Garry wrote:
> On 13/02/2024 11:08, Nilay Shroff wrote:
>>> We rely on atomic_write_unit_max being <= atomic_write_boundary and both being a power-of-2. Please see the NVMe patch, in which this is checked. Indeed, it would not make sense if atomic_write_unit_max > atomic_write_boundary (when non-zero).
>>>
>>> So if the write is naturally aligned and its size is <= atomic_write_unit_max, then it cannot be straddling a boundary.
>> OK, fine, but if the device doesn't support a namespace atomic boundary size (i.e. NABSPF is zero), do we still need
>> to restrict IO which crosses the atomic boundary?
> 
> Is there a boundary if NABSPF is zero?
If NABSPF is zero then there's no boundary, and so we may not need to worry about IO crossing a boundary.

Even though the atomic boundary is not defined, this function doesn't allow an atomic write to cross atomic_write_unit_max_bytes.
For instance, if AWUPF is 63 and an IO starts an atomic write at logical block #32 with a length of
64 logical blocks, then it's not allowed. However, if the same IO starts at logical block #0, it is allowed.
So my point is: can this restriction be avoided when the atomic boundary is zero (i.e. not defined)?

Also, it seems that the restrictions implemented for an atomic write to succeed are very strict. For example, an atomic write can't
succeed if an IO starts at logical block #8 with a length of 16 logical blocks.
In this particular case, the IO is well within the atomic boundary (if it's defined) and the atomic size limit, so why do we NOT want to
allow it? Is it intentional? I think the spec doesn't mention such a limitation.

> 
>>
>> I am quoting this from the NVMe spec (Command Set Specification, revision 1.0a, Section 2.1.4.3):
>> "To ensure backwards compatibility, the values reported for AWUN, AWUPF, and ACWU shall be set such that
>> they are supported even if a write crosses an atomic boundary. If a controller does not
>> guarantee atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and ACWU to 0h (1 LBA)."
> 
> How about respond to the NVMe patch in this series, asking this question?
> 
Yes, I will send this query to the NVMe patch in this series.

Thanks,
--Nilay

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 10/15] block: Add fops atomic write support
  2024-02-14  9:38           ` Nilay Shroff
@ 2024-02-14 11:29             ` John Garry
  2024-02-14 11:47               ` Nilay Shroff
  0 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-14 11:29 UTC (permalink / raw)
  To: Nilay Shroff
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro

On 14/02/2024 09:38, Nilay Shroff wrote:
> 
> 
> On 2/13/24 17:22, John Garry wrote:
>> On 13/02/2024 11:08, Nilay Shroff wrote:
>>>> We rely on atomic_write_unit_max being <= atomic_write_boundary and both being a power-of-2. Please see the NVMe patch, in which this is checked. Indeed, it would not make sense if atomic_write_unit_max > atomic_write_boundary (when non-zero).
>>>>
>>>> So if the write is naturally aligned and its size is <= atomic_write_unit_max, then it cannot be straddling a boundary.
>>> OK, fine, but if the device doesn't support a namespace atomic boundary size (i.e. NABSPF is zero), do we still need
>>> to restrict IO which crosses the atomic boundary?
>>
>> Is there a boundary if NABSPF is zero?
> If NABSPF is zero then there's no boundary, and so we may not need to worry about IO crossing a boundary.
> 
> Even though the atomic boundary is not defined, this function doesn't allow an atomic write to cross atomic_write_unit_max_bytes.
> For instance, if AWUPF is 63 and an IO starts an atomic write at logical block #32 with a length of

When you say "IO", you need to be clearer. Do you mean a write from 
userspace or a merged atomic write?

If userspace issues an atomic write which is 64 blocks at offset 32, 
then it will be rejected.

It will be rejected as it is not naturally aligned, e.g. a 64-block 
write can only be at offset 0, 64, 128, ...
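
A minimal sketch of that natural-alignment rule (the helper name is an
illustrative assumption, not the series' actual check; len is in bytes and
is already a power-of-2 within the unit min/max limits):

/* assumes linux/log2.h */
static bool atomic_write_naturally_aligned(loff_t pos, unsigned int len)
{
        /* a power-of-2 sized write must start at a multiple of its length */
        return is_power_of_2(len) && !(pos & (len - 1));
}

With 512b blocks, a 64-block (32KiB) write at block offset 32 (16KiB) fails
this check, while the same write at offset 0 or 64 passes - matching the
0, 64, 128, ... rule above.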

> 64 logical blocks, then it's not allowed.
>  However, if the same IO starts at logical block #0, it is allowed.
> So my point is: can this restriction be avoided when the atomic boundary is zero (i.e. not defined)?

We want a consistent set of rules for userspace to follow, whether the 
atomic boundary is zero or non-zero.

Currently the atomic boundary only comes into play for merging writes, 
i.e. we cannot merge a write in which the resultant IO straddles a boundary.
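
For reference, a straddle check along those lines could look like this (a
sketch assuming a power-of-2 boundary in bytes, not necessarily the exact
helper in the series):

static bool straddles_boundary(loff_t start, unsigned int len,
                               unsigned int boundary)
{
        loff_t mask;

        /* a zero boundary means none is reported, so nothing to straddle */
        if (!boundary)
                return false;

        /* straddling means first and last bytes fall in different windows */
        mask = ~((loff_t)boundary - 1);
        return (start & mask) != ((start + len - 1) & mask);
}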

> 
> Also, it seems that the restrictions implemented for an atomic write to succeed are very strict. For example, an atomic write can't
> succeed if an IO starts at logical block #8 with a length of 16 logical blocks.
> In this particular case, the IO is well within the atomic boundary (if it's defined) and the atomic size limit, so why do we NOT want to
> allow it? Is it intentional? I think the spec doesn't mention such a limitation.

According to the NVMe spec, this is ok. However we don't want the user 
to have to deal with things like NVMe boundaries. Indeed, for FSes, we 
do not have a direct linear map from FS blocks to physical blocks, so it 
would be impossible for the user to know about a boundary condition in 
this context.

We are trying to formulate rules which work for the somewhat orthogonal 
HW features of both SCSI and NVMe for both block devices and FSes, while 
also dealing with alignment concerns of extent-based FSes, like XFS.

> 
>>
>>>
>>> I am quoting this from the NVMe spec (Command Set Specification, revision 1.0a, Section 2.1.4.3):
>>> "To ensure backwards compatibility, the values reported for AWUN, AWUPF, and ACWU shall be set such that
>>> they are supported even if a write crosses an atomic boundary. If a controller does not
>>> guarantee atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and ACWU to 0h (1 LBA)."
>>
>> How about respond to the NVMe patch in this series, asking this question?
>>
> Yes I will send this query to the NVMe patch in this series.

Thanks,
John


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 10/15] block: Add fops atomic write support
  2024-02-14 11:29             ` John Garry
@ 2024-02-14 11:47               ` Nilay Shroff
  0 siblings, 0 replies; 59+ messages in thread
From: Nilay Shroff @ 2024-02-14 11:47 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, brauner, bvanassche, dchinner, djwong, hch, jack, jbongio,
	jejb, kbusch, linux-block, linux-fsdevel, linux-kernel,
	linux-nvme, linux-scsi, linux-xfs, martin.petersen, ming.lei,
	ojaswin, sagi, tytso, viro



On 2/14/24 16:59, John Garry wrote:
> On 14/02/2024 09:38, Nilay Shroff wrote:
>>
>>
>> On 2/13/24 17:22, John Garry wrote:
>>> On 13/02/2024 11:08, Nilay Shroff wrote:
>>>>> We rely on atomic_write_unit_max being <= atomic_write_boundary and both being a power-of-2. Please see the NVMe patch, in which this is checked. Indeed, it would not make sense if atomic_write_unit_max > atomic_write_boundary (when non-zero).
>>>>>
>>>>> So if the write is naturally aligned and its size is <= atomic_write_unit_max, then it cannot be straddling a boundary.
>>>> OK, fine, but if the device doesn't support a namespace atomic boundary size (i.e. NABSPF is zero), do we still need
>>>> to restrict IO which crosses the atomic boundary?
>>>
>>> Is there a boundary if NABSPF is zero?
>> If NABSPF is zero then there's no boundary, and so we may not need to worry about IO crossing a boundary.
>>
>> Even though the atomic boundary is not defined, this function doesn't allow an atomic write to cross atomic_write_unit_max_bytes.
>> For instance, if AWUPF is 63 and an IO starts an atomic write at logical block #32 with a length of
> 
> When you say "IO", you need to be clearer. Do you mean a write from userspace or a merged atomic write?
Yes, I meant a write from userspace. Sorry for the confusion here.
> 
> If userspace issues an atomic write which is 64 blocks at offset 32, then it will be rejected.
> 
> It will be rejected as it is not naturally aligned, e.g. a 64-block write can only be at offset 0, 64, 128, ...
So it means that even though the HW may support an atomic write crossing a natural alignment boundary, the kernel would still reject it.
> 
>> 64 logical blocks, then it's not allowed.
>>  However, if the same IO starts at logical block #0, it is allowed.
>> So my point is: can this restriction be avoided when the atomic boundary is zero (i.e. not defined)?
> 
> We want a consistent set of rules for userspace to follow, whether the atomic boundary is zero or non-zero.
> 
> Currently the atomic boundary only comes into play for merging writes, i.e. we cannot merge a write in which the resultant IO straddles a boundary.
> 
>>
>> Also, it seems that the restrictions implemented for an atomic write to succeed are very strict. For example, an atomic write can't
>> succeed if an IO starts at logical block #8 with a length of 16 logical blocks.
>> In this particular case, the IO is well within the atomic boundary (if it's defined) and the atomic size limit, so why do we NOT want to
>> allow it? Is it intentional? I think the spec doesn't mention such a limitation.
> 
> According to the NVMe spec, this is ok. However we don't want the user to have to deal with things like NVMe boundaries. Indeed, for FSes, we do not have a direct linear map from FS blocks to physical blocks, so it would be impossible for the user to know about a boundary condition in this context.
> 
> We are trying to formulate rules which work for the somewhat orthogonal HW features of both SCSI and NVMe for both block devices and FSes, while also dealing with alignment concerns of extent-based FSes, like XFS.
Hmm OK, thanks for that explanation. 

Thanks,
--Nilay

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 14/15] nvme: Support atomic writes
  2024-01-24 11:38 ` [PATCH v3 14/15] nvme: Support atomic writes John Garry
  2024-02-13  6:42   ` Christoph Hellwig
@ 2024-02-14 12:27   ` Nilay Shroff
  2024-02-14 13:02     ` John Garry
  1 sibling, 1 reply; 59+ messages in thread
From: Nilay Shroff @ 2024-02-14 12:27 UTC (permalink / raw)
  To: john.g.garry
  Cc: alan.adamson, axboe, brauner, bvanassche, dchinner, djwong, hch,
	jack, jbongio, jejb, kbusch, linux-block, linux-fsdevel,
	linux-kernel, linux-nvme, linux-scsi, linux-xfs, martin.petersen,
	ming.lei, ojaswin, sagi, tytso, viro

>Support reading atomic write registers to fill in request_queue
>properties.

>Use the following method to calculate limits:
>atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
>atomic_write_unit_min = logical_block_size
>atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
>atomic_write_boundary = NABSPF
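
(A minimal sketch of the calculation quoted above, assuming flp2() means
"round down to a power-of-2"; the struct and field names are illustrative
stand-ins, not the series' actual request_queue limits:)

struct atomic_limits_sketch {
        u32 atomic_write_max_bytes;
        u32 atomic_write_unit_min;
        u32 atomic_write_unit_max;
        u32 atomic_write_boundary;
};

/* inputs assumed already converted from 0's-based LBA counts to bytes */
static void fill_atomic_limits_sketch(struct atomic_limits_sketch *l,
                                      u32 nawupf_bytes, u32 awupf_bytes,
                                      u32 nabspf_bytes, u32 lbs_bytes)
{
        u32 unit_max = rounddown_pow_of_two(nawupf_bytes ?: awupf_bytes);

        l->atomic_write_max_bytes = unit_max;
        l->atomic_write_unit_min  = lbs_bytes;    /* logical block size */
        l->atomic_write_unit_max  = unit_max;
        l->atomic_write_boundary  = nabspf_bytes; /* 0 => no boundary */
}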

If the device doesn't support a namespace atomic boundary size (i.e. NABSPF 
is zero), then we should allow the merge when merging atomic block IO.

For example, while front/back merging atomic block IO, we check whether the 
boundary is defined or not. If the boundary is not defined (i.e. it's zero), 
then we simply reject the merging attempt (as implemented in 
rq_straddles_atomic_write_boundary()).

I am quoting this from the NVMe spec (Command Set Specification, revision 1.0a, 
Section 2.1.4.3): "To ensure backwards compatibility, the values reported for 
AWUN, AWUPF, and ACWU shall be set such that they are supported even if a 
write crosses an atomic boundary. If a controller does not guarantee 
atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and 
ACWU to 0h (1 LBA)."

Thanks,
--Nilay


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 14/15] nvme: Support atomic writes
  2024-02-14 12:27   ` Nilay Shroff
@ 2024-02-14 13:02     ` John Garry
  2024-02-14 16:45       ` Nilay Shroff
  0 siblings, 1 reply; 59+ messages in thread
From: John Garry @ 2024-02-14 13:02 UTC (permalink / raw)
  To: Nilay Shroff
  Cc: alan.adamson, axboe, brauner, bvanassche, dchinner, djwong, hch,
	jack, jbongio, jejb, kbusch, linux-block, linux-fsdevel,
	linux-kernel, linux-nvme, linux-scsi, linux-xfs, martin.petersen,
	ming.lei, ojaswin, sagi, tytso, viro

On 14/02/2024 12:27, Nilay Shroff wrote:
>> Support reading atomic write registers to fill in request_queue
> 
>> properties.
> 
> 
> 
>> Use the following method to calculate limits:
> 
>> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
> 

You still need to fix that mail client to not add extra blank lines.

>> atomic_write_unit_min = logical_block_size
> 
>> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
> 
>> atomic_write_boundary = NABSPF
> 
> 
> 
>> If the device doesn't support a namespace atomic boundary size (i.e. NABSPF
> 
>> is zero), then we should allow the merge when merging atomic block IO.
> 
>   
> 
>> For example, while front/back merging atomic block IO, we check whether the
> 
>> boundary is defined or not. If the boundary is not defined (i.e. it's zero),
> 
>> then we simply reject the merging attempt (as implemented in
> 
>> rq_straddles_atomic_write_boundary()).

Are you sure about that? In rq_straddles_atomic_write_boundary(), if 
boundary == 0, then we return false, i.e. there is no boundary, so we 
can never be crossing it.

static bool rq_straddles_atomic_write_boundary(struct request *rq,
                                               unsigned int front,
                                               unsigned int back)
{
	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
	unsigned int mask, imask;
	loff_t start, end;

	if (!boundary)
		return false;

	...
}

And then will not reject a merge for that reason, like:

int ll_back_merge_fn(struct request *req, struct bio *bio,
                     unsigned int nr_segs)
{
	...

	if (req->cmd_flags & REQ_ATOMIC) {
		if (rq_straddles_atomic_write_boundary(req,
			0, bio->bi_iter.bi_size)) {
			return 0;
		}
	}

	return ll_new_hw_segment(req, bio, nr_segs);
}


> 
> 
> 
> I am quoting this from the NVMe spec (Command Set Specification, revision 1.0a,
> 
> Section 2.1.4.3): "To ensure backwards compatibility, the values reported for
> 
> AWUN, AWUPF, and ACWU shall be set such that they are supported even if a
> 
> write crosses an atomic boundary. If a controller does not guarantee
> 
> atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and
> 
> ACWU to 0h (1 LBA)."
> 
> 
> 

Thanks,
John


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v3 14/15] nvme: Support atomic writes
  2024-02-14 13:02     ` John Garry
@ 2024-02-14 16:45       ` Nilay Shroff
  0 siblings, 0 replies; 59+ messages in thread
From: Nilay Shroff @ 2024-02-14 16:45 UTC (permalink / raw)
  To: John Garry
  Cc: alan.adamson, axboe, brauner, bvanassche, dchinner, djwong, hch,
	jack, jbongio, jejb, kbusch, linux-block, linux-fsdevel,
	linux-kernel, linux-nvme, linux-scsi, linux-xfs, martin.petersen,
	ming.lei, ojaswin, sagi, tytso, viro



On 2/14/24 18:32, John Garry wrote:
> On 14/02/2024 12:27, Nilay Shroff wrote:
>>
>>
>>
>>> Use the following method to calculate limits:
>>
>>> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
>>
> 
> You still need to fix that mail client to not add extra blank lines.
Yes, I am working on it. I hope it's solved now. 
> 
>>> atomic_write_unit_min = logical_block_size
>>
>>> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
>>
>>> atomic_write_boundary = NABSPF
>>
>>
>>
>> If the device doesn't support a namespace atomic boundary size (i.e. NABSPF
>>
>> is zero), then we should allow the merge when merging atomic block IO.
>>
>>  
>> For example, while front/back merging atomic block IO, we check whether the
>>
>> boundary is defined or not. If the boundary is not defined (i.e. it's zero),
>>
>> then we simply reject the merging attempt (as implemented in
>>
>> rq_straddles_atomic_write_boundary()).
> 
> Are you sure about that? In rq_straddles_atomic_write_boundary(), if boundary == 0, then we return false, i.e. there is no boundary, so we can never be crossing it.
> 
> static bool rq_straddles_atomic_write_boundary(struct request *rq,
>                                                unsigned int front,
>                                                unsigned int back)
> {
>     unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>     unsigned int mask, imask;
>     loff_t start, end;
> 
>     if (!boundary)
>         return false;
> 
>     ...
> }
> 
> And then will not reject a merge for that reason, like:
> 
> int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
> {
>     ...
> 
>     if (req->cmd_flags & REQ_ATOMIC) {
>         if (rq_straddles_atomic_write_boundary(req,
>             0, bio->bi_iter.bi_size)) {
>             return 0;
>         }
>     }
> 
>     return ll_new_hw_segment(req, bio, nr_segs);
> }
> 
> 
Aargh, you are right. I see that if rq_straddles_atomic_write_boundary() returns true then we avoid the merge; otherwise the merge is attempted. My bad...

Thanks,
--Nilay

^ permalink raw reply	[flat|nested] 59+ messages in thread

end of thread, other threads:[~2024-02-14 16:46 UTC | newest]

Thread overview: 59+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-24 11:38 [PATCH v3 00/15] block atomic writes John Garry
2024-01-24 11:38 ` [PATCH v3 01/15] block: Add atomic write operations to request_queue limits John Garry
2024-02-13  6:22   ` Christoph Hellwig
2024-01-24 11:38 ` [PATCH v3 02/15] block: Limit atomic writes according to bio and queue limits John Garry
2024-02-13  4:33   ` Ritesh Harjani
2024-02-13  8:05     ` John Garry
2024-01-24 11:38 ` [PATCH v3 03/15] fs/bdev: Add atomic write support info to statx John Garry
2024-01-24 11:38 ` [PATCH v3 04/15] fs: Add RWF_ATOMIC and IOCB_ATOMIC flags for atomic write support John Garry
2024-01-24 11:38 ` [PATCH v3 05/15] block: Add REQ_ATOMIC flag John Garry
2024-02-13  6:24   ` Christoph Hellwig
2024-01-24 11:38 ` [PATCH v3 06/15] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
2024-02-13  6:23   ` Christoph Hellwig
2024-02-13  8:15     ` John Garry
2024-01-24 11:38 ` [PATCH v3 07/15] block: Limit atomic write IO size according to atomic_write_max_sectors John Garry
2024-02-13  6:26   ` Christoph Hellwig
2024-02-13  8:15     ` John Garry
2024-02-14  7:26       ` Christoph Hellwig
2024-02-14  9:24         ` John Garry
2024-01-24 11:38 ` [PATCH v3 08/15] block: Error an attempt to split an atomic write bio John Garry
2024-01-24 11:38 ` [PATCH v3 09/15] block: Add checks to merging of atomic writes John Garry
2024-02-12 10:54   ` Nilay Shroff
2024-02-12 11:20     ` [PATCH " John Garry
2024-02-12 12:01       ` Nilay Shroff
2024-02-12 12:09     ` John Garry
2024-02-13  6:52       ` Nilay Shroff
2024-01-24 11:38 ` [PATCH v3 10/15] block: Add fops atomic write support John Garry
2024-02-13  9:36   ` Nilay Shroff
2024-02-13  9:58     ` [PATCH " John Garry
2024-02-13 11:08       ` Nilay Shroff
2024-02-13 11:52         ` John Garry
2024-02-14  9:38           ` Nilay Shroff
2024-02-14 11:29             ` John Garry
2024-02-14 11:47               ` Nilay Shroff
2024-01-24 11:38 ` [PATCH v3 11/15] scsi: sd: Support reading atomic write properties from block limits VPD John Garry
2024-02-13  6:31   ` Christoph Hellwig
2024-02-13  8:16     ` John Garry
2024-01-24 11:38 ` [PATCH v3 12/15] scsi: sd: Add WRITE_ATOMIC_16 support John Garry
2024-01-24 11:38 ` [PATCH v3 13/15] scsi: scsi_debug: Atomic write support John Garry
2024-01-24 11:38 ` [PATCH v3 14/15] nvme: Support atomic writes John Garry
2024-02-13  6:42   ` Christoph Hellwig
2024-02-13 14:21     ` John Garry
2024-02-14  8:00       ` Christoph Hellwig
2024-02-14  9:21         ` John Garry
2024-02-14 12:27   ` Nilay Shroff
2024-02-14 13:02     ` John Garry
2024-02-14 16:45       ` Nilay Shroff
2024-01-24 11:38 ` [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically John Garry
2024-01-25  0:52   ` Keith Busch
2024-01-25 11:28     ` John Garry
2024-01-29  6:20       ` Christoph Hellwig
2024-01-29  9:36         ` John Garry
2024-01-29 14:39           ` Christoph Hellwig
2024-01-26  3:50     ` Chaitanya Kulkarni
2024-02-13  6:42   ` Christoph Hellwig
2024-02-13 14:07     ` John Garry
2024-01-29  6:18 ` [PATCH v3 00/15] block atomic writes Christoph Hellwig
2024-01-29  9:17   ` John Garry
2024-02-06 18:44   ` John Garry
2024-02-10 12:12     ` David Laight
