* [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support
@ 2020-04-28 13:11 Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 01/15] nvme: introduce namespace features flag Max Gurtovoy
                   ` (14 more replies)
  0 siblings, 15 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

Hello Sagi, Christoph, Keith, Martin, James and Co

This patchset adds metadata (T10-PI) support for the NVMeoF/RDMA host
side and target side, using the RDMA signature verbs API. The set starts
with a few preparation commits to the NVMe host core layer, continues
with the NVMeoF/RDMA host implementation and preparations to the NVMe
target core layer, and ends with the NVMeoF/RDMA target implementation.

In V6 I mainly updated the host side configuration and removed the
nvme-cli patch. In the initial mode, we'll use T10-PI/metadata for
capable devices and act according to the target controller and namespace
configuration. I also took James's suggestion of making nvme_ns_has_pi
accessible and implemented Christoph's suggestions to simplify the code
in the NVMe host core. The code is simpler and more readable now, and
hopefully we can continue with the review and merge it soon.

Configuration:
Host:
 - same as before

Target:
 - echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
 - echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable

The code was tested using Mellanox ConnectX-4/ConnectX-5 HCAs.
This series applies cleanly on top of the nvme_5.8 branch.

Changes from v5:
 - rebased over nvme_5.8
 - configure RDMA controllers for T10-PI by default (if capable)
 - removed RW api patch for exporting single MR length (will add it in the future if needed)
 - Implemented Christoph's suggestions for NVMe host core
 - fail enabling target port in case of PI capability mismatch (from Christoph)
 
Changes from v4:
 - removed get_mdts patches (merged)
 - added enum nvme_ns_features instead of defines (patch 1/17)
 - rename pi/prot to md (patches 2/17 + 6/17 + 8/17 + 9/17 + 10/17)
 - another rebase

Changes from v3:
 - Added Reviewed-by signatures
 - New RDMA/rw patch (Patch 17/19)
 - Add mdts setting op for controllers (Patches 14/19, 18/19)
 - Rename NVME_NS_DIX_SUPPORTED to NVME_NS_MD_HOST_SUPPORTED and
   NVME_NS_DIF_SUPPORTED to NVME_NS_MD_CTRL_SUPPORTED (Patch 01/19)
 - Split "nvme: Introduce namespace features flag" patch (patch 02/19)
 - Rename nvmet_rdma_set_diff_domain to nvmet_rdma_set_sig_domain
   and nvme_rdma_set_diff_domain to nvme_rdma_set_sig_domain
   (Patches 08/19, 19/19)
 - Remove ns parameter from nvme_rdma_set_sig_domain/nvmet_rdma_set_sig_domain
   functions (patch 08/19, 19/19)
 - Rebase over nvme-5.7 branch

Changes from v2:
 - Convert the virtual start sector (which passed to bip_set_seed function)
   to be in integrity interval units (Patch 14/15)
 - Clarify some commit messages

Changes from v1:
 - Added Reviewed-by signatures
 - Added namespace features flag (Patch 01/15)
 - Remove nvme_ns_has_pi function (Patch 01/15)
 - Added has_pi field to struct nvme_request (Patch 01/15)
 - Subject change for patch 02/15
 - Fix comment for PCI metadata (Patch 03/15)
 - Rebase over "nvme: Avoid preallocating big SGL for data" patchset
 - Introduce NVME_INLINE_PROT_SG_CNT flag (Patch 05/15)
 - Introduce nvme_rdma_sgl structure (Patch 06/15)
 - Remove first_sgl pointer from struct nvme_rdma_request (Patch 06/15)
 - Split nvme-rdma patches (Patches 06/15, 07/15)
 - Rename is_protected to use_pi (Patch 07/15)
 - Refactor nvme_rdma_get_max_fr_pages function (Patch 07/15)
 - Added ifdef CONFIG_BLK_DEV_INTEGRITY (Patches 07/15, 09/15, 13/15,
   14/15, 15/15)
 - Added port configfs pi_enable (Patch 14/15)

Israel Rukshin (9):
  nvme: introduce NVME_INLINE_MD_SG_CNT
  nvme-rdma: introduce nvme_rdma_sgl structure
  nvmet: add metadata characteristics for a namespace
  nvmet: rename nvmet_rw_len to nvmet_rw_data_len
  nvmet: rename nvmet_check_data_len to nvmet_check_transfer_len
  nvme: add Metadata Capabilities enumerations
  nvmet: add metadata/T10-PI support
  nvmet: add metadata support for block devices
  nvmet-rdma: add metadata/T10-PI support

James Smart (1):
  nvme: make nvme_ns_has_pi accessible to transports

Max Gurtovoy (5):
  nvme: introduce namespace features flag
  nvme: introduce NVME_NS_METADATA_SUPPORTED flag
  nvme: enforce extended LBA format for fabrics metadata
  nvme: introduce max_integrity_segments ctrl attribute
  nvme-rdma: add metadata/T10-PI support

 drivers/nvme/host/core.c          |  83 ++++++---
 drivers/nvme/host/nvme.h          |  18 +-
 drivers/nvme/host/pci.c           |   6 +
 drivers/nvme/host/rdma.c          | 358 ++++++++++++++++++++++++++++++++------
 drivers/nvme/target/admin-cmd.c   |  40 +++--
 drivers/nvme/target/configfs.c    |  61 +++++++
 drivers/nvme/target/core.c        | 115 +++++++++---
 drivers/nvme/target/discovery.c   |   8 +-
 drivers/nvme/target/fabrics-cmd.c |  16 +-
 drivers/nvme/target/io-cmd-bdev.c | 116 +++++++++++-
 drivers/nvme/target/io-cmd-file.c |   6 +-
 drivers/nvme/target/nvmet.h       |  27 ++-
 drivers/nvme/target/rdma.c        | 234 +++++++++++++++++++++++--
 include/linux/nvme.h              |   6 +
 14 files changed, 947 insertions(+), 147 deletions(-)

-- 
1.8.3.1



* [PATCH 01/15] nvme: introduce namespace features flag
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 13:20   ` Christoph Hellwig
  2020-05-01 14:24   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 02/15] nvme: introduce NVME_NS_METADATA_SUPPORTED flag Max Gurtovoy
                   ` (13 subsequent siblings)
  14 siblings, 2 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

Replace the specific ext boolean (which indicates an extended LBA
format) with a feature bit in the new namespace features flag. This is
preparation for adding more namespace features (such as metadata
specific features).
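
For illustration, code that used to test ns->ext is expected to test
the new bitmap instead (sketch only; nvme_ns_is_ext() is a hypothetical
helper and not something added by this patch):

	/* was: if (ns->ext) ... */
	static inline bool nvme_ns_is_ext(struct nvme_ns *ns)
	{
		return ns->features & NVME_NS_EXT_LBAS;
	}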

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Israel Rukshin <israelr@mellanox.com>
---
 drivers/nvme/host/core.c | 8 +++++---
 drivers/nvme/host/nvme.h | 6 +++++-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index dfb064b4..02130d3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1302,7 +1302,7 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
 	meta_len = (io.nblocks + 1) * ns->ms;
 	metadata = nvme_to_user_ptr(io.metadata);
 
-	if (ns->ext) {
+	if (ns->features & NVME_NS_EXT_LBAS) {
 		length += meta_len;
 		meta_len = 0;
 	} else if (meta_len) {
@@ -1880,7 +1880,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	blk_queue_io_min(disk->queue, phys_bs);
 	blk_queue_io_opt(disk->queue, io_opt);
 
-	if (ns->ms && !ns->ext &&
+	if (ns->ms && !(ns->features & NVME_NS_EXT_LBAS) &&
 	    (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
 		nvme_init_integrity(disk, ns->ms, ns->pi_type);
 	if ((ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk)) ||
@@ -1919,8 +1919,10 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
 	else
 		iob = nvme_lba_to_sect(ns, le16_to_cpu(id->noiob));
 
+	ns->features = 0;
 	ns->ms = le16_to_cpu(id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ms);
-	ns->ext = ns->ms && (id->flbas & NVME_NS_FLBAS_META_EXT);
+	if (ns->ms && (id->flbas & NVME_NS_FLBAS_META_EXT))
+		ns->features |= NVME_NS_EXT_LBAS;
 	/* the PI implementation requires metadata equal t10 pi tuple size */
 	if (ns->ms == sizeof(struct t10_pi_tuple))
 		ns->pi_type = id->dps & NVME_NS_DPS_PI_MASK;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f3ab177..110577c7 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -364,6 +364,10 @@ struct nvme_ns_head {
 #endif
 };
 
+enum nvme_ns_features {
+	NVME_NS_EXT_LBAS = 1 << 0, /* support extended LBA format */
+};
+
 struct nvme_ns {
 	struct list_head list;
 
@@ -383,8 +387,8 @@ struct nvme_ns {
 	u16 ms;
 	u16 sgs;
 	u32 sws;
-	bool ext;
 	u8 pi_type;
+	unsigned long features;
 	unsigned long flags;
 #define NVME_NS_REMOVING	0
 #define NVME_NS_DEAD     	1
-- 
1.8.3.1



* [PATCH 02/15] nvme: introduce NVME_NS_METADATA_SUPPORTED flag
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 01/15] nvme: introduce namespace features flag Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 13:20   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 03/15] nvme: make nvme_ns_has_pi accessible to transports Max Gurtovoy
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

This is preparation for adding metadata support for fabrics
controllers. The new flag indicates that the NVMe namespace supports
receiving metadata that was originally generated by the host's block
layer.
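
For illustration (sketch only, not part of this patch), a consumer of
the flag gates blk-integrity registration on metadata support rather
than on the LBA format:

	if (ns->ms && (ns->features & NVME_NS_METADATA_SUPPORTED))
		nvme_init_integrity(disk, ns->ms, ns->pi_type);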

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Israel Rukshin <israelr@mellanox.com>
---
 drivers/nvme/host/core.c | 41 ++++++++++++++++++++++++++++++++++-------
 drivers/nvme/host/nvme.h |  1 +
 2 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 02130d3..2f419de 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1880,13 +1880,27 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	blk_queue_io_min(disk->queue, phys_bs);
 	blk_queue_io_opt(disk->queue, io_opt);
 
-	if (ns->ms && !(ns->features & NVME_NS_EXT_LBAS) &&
-	    (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
-		nvme_init_integrity(disk, ns->ms, ns->pi_type);
-	if ((ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk)) ||
-	    ns->lba_shift > PAGE_SHIFT)
+	/*
+	 * The block layer can't support LBA sizes larger than the page size
+	 * yet, so catch this early and don't allow block I/O.
+	 */
+	if (ns->lba_shift > PAGE_SHIFT)
 		capacity = 0;
 
+	/*
+	 * Register a metadata profile for PI, or the plain non-integrity NVMe
+	 * metadata masquerading as Type 0 if supported, otherwise reject block
+	 * I/O to namespaces with metadata except when the namespace supports
+	 * PI, as it can strip/insert in that case.
+	 */
+	if (ns->ms) {
+		if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
+		    (ns->features & NVME_NS_METADATA_SUPPORTED))
+			nvme_init_integrity(disk, ns->ms, ns->pi_type);
+		else if (!nvme_ns_has_pi(ns))
+			capacity = 0;
+	}
+
 	set_capacity_revalidate_and_notify(disk, capacity, false);
 
 	nvme_config_discard(disk, ns);
@@ -1921,14 +1935,27 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
 
 	ns->features = 0;
 	ns->ms = le16_to_cpu(id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ms);
-	if (ns->ms && (id->flbas & NVME_NS_FLBAS_META_EXT))
-		ns->features |= NVME_NS_EXT_LBAS;
 	/* the PI implementation requires metadata equal t10 pi tuple size */
 	if (ns->ms == sizeof(struct t10_pi_tuple))
 		ns->pi_type = id->dps & NVME_NS_DPS_PI_MASK;
 	else
 		ns->pi_type = 0;
 
+	if (ns->ms) {
+		if (id->flbas & NVME_NS_FLBAS_META_EXT)
+			ns->features |= NVME_NS_EXT_LBAS;
+
+		/*
+		 * For PCI, Extended logical block will be generated by the
+		 * controller. Non-extended format can be generated by the
+		 * block layer.
+		 */
+		if (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED) {
+			if (!(ns->features & NVME_NS_EXT_LBAS))
+				ns->features |= NVME_NS_METADATA_SUPPORTED;
+		}
+	}
+
 	if (iob)
 		blk_queue_chunk_sectors(ns->queue, rounddown_pow_of_two(iob));
 	nvme_update_disk_info(disk, ns, id);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 110577c7..0dda145 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -366,6 +366,7 @@ struct nvme_ns_head {
 
 enum nvme_ns_features {
 	NVME_NS_EXT_LBAS = 1 << 0, /* support extended LBA format */
+	NVME_NS_METADATA_SUPPORTED = 1 << 1, /* support getting generated md */
 };
 
 struct nvme_ns {
-- 
1.8.3.1



* [PATCH 03/15] nvme: make nvme_ns_has_pi accessible to transports
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 01/15] nvme: introduce namespace features flag Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 02/15] nvme: introduce NVME_NS_METADATA_SUPPORTED flag Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 13:20   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata Max Gurtovoy
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: James Smart <jsmart2021@gmail.com>

Move the nvme_ns_has_pi() inline from core.c to the nvme.h header.
This allows use by the transports.
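
For example, a transport can now key per-I/O PI handling off the
namespace format; the actual user is the nvme-rdma metadata patch later
in this series:

	req->use_md = queue->pi_support && nvme_ns_has_pi(ns);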

Signed-off-by: James Smart <jsmart2021@gmail.com>
[maxg: added a comment for nvme_ns_has_pi()]
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Israel Rukshin <israelr@mellanox.com>
---
 drivers/nvme/host/core.c | 6 ------
 drivers/nvme/host/nvme.h | 7 +++++++
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2f419de..40aa2be 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -19,7 +19,6 @@
 #include <linux/pr.h>
 #include <linux/ptrace.h>
 #include <linux/nvme_ioctl.h>
-#include <linux/t10-pi.h>
 #include <linux/pm_qos.h>
 #include <asm/unaligned.h>
 
@@ -204,11 +203,6 @@ static void nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl)
 	nvme_put_ctrl(ctrl);
 }
 
-static inline bool nvme_ns_has_pi(struct nvme_ns *ns)
-{
-	return ns->pi_type && ns->ms == sizeof(struct t10_pi_tuple);
-}
-
 static blk_status_t nvme_error_status(u16 status)
 {
 	switch (status & 0x7ff) {
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 0dda145..0f5fa85 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -16,6 +16,7 @@
 #include <linux/fault-inject.h>
 #include <linux/rcupdate.h>
 #include <linux/wait.h>
+#include <linux/t10-pi.h>
 
 #include <trace/events/block.h>
 
@@ -399,6 +400,12 @@ struct nvme_ns {
 
 };
 
+/* NVMe ns supports metadata actions by the controller (generate/strip) */
+static inline bool nvme_ns_has_pi(struct nvme_ns *ns)
+{
+	return ns->pi_type && ns->ms == sizeof(struct t10_pi_tuple);
+}
+
 struct nvme_ctrl_ops {
 	const char *name;
 	struct module *module;
-- 
1.8.3.1



* [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (2 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 03/15] nvme: make nvme_ns_has_pi accessible to transports Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 13:21   ` Christoph Hellwig
  2020-05-01 13:41   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 05/15] nvme: introduce max_integrity_segments ctrl attribute Max Gurtovoy
                   ` (10 subsequent siblings)
  14 siblings, 2 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

An extended LBA is a larger LBA that is created when metadata associated
with the LBA is transferred contiguously with the LBA data (i.e.
interleaved). The metadata may either be transferred as part of the LBA
(creating an extended LBA) or as a separate contiguous buffer of data.
According to the NVMeoF spec, a fabrics controller supports only the
extended LBA format, so fail revalidation in case of a spec violation.
Also add a flag that indicates capable transports and controllers, as
preparation for allowing end-to-end protection information for fabrics
controllers.
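
A condensed sketch of the resulting policy (simplified from the hunk
below, flag propagation and error handling elided):

	if (id->flbas & NVME_NS_FLBAS_META_EXT) {
		/* interleaved metadata: the only layout allowed for fabrics */
		ns->features |= NVME_NS_EXT_LBAS;
	} else if (ctrl->ops->flags & NVME_F_FABRICS) {
		/* a separate metadata buffer is a spec violation on the wire */
		return -EINVAL;
	}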

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
---
 drivers/nvme/host/core.c | 41 +++++++++++++++++++++++++++--------------
 drivers/nvme/host/nvme.h |  1 +
 2 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 40aa2be..dd59de0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1908,9 +1908,10 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	blk_mq_unfreeze_queue(disk->queue);
 }
 
-static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
+static int __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
 {
 	struct nvme_ns *ns = disk->private_data;
+	struct nvme_ctrl *ctrl = ns->ctrl;
 	u32 iob;
 
 	/*
@@ -1921,9 +1922,9 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
 	if (ns->lba_shift == 0)
 		ns->lba_shift = 9;
 
-	if ((ns->ctrl->quirks & NVME_QUIRK_STRIPE_SIZE) &&
-	    is_power_of_2(ns->ctrl->max_hw_sectors))
-		iob = ns->ctrl->max_hw_sectors;
+	if ((ctrl->quirks & NVME_QUIRK_STRIPE_SIZE) &&
+	    is_power_of_2(ctrl->max_hw_sectors))
+		iob = ctrl->max_hw_sectors;
 	else
 		iob = nvme_lba_to_sect(ns, le16_to_cpu(id->noiob));
 
@@ -1936,16 +1937,24 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
 		ns->pi_type = 0;
 
 	if (ns->ms) {
-		if (id->flbas & NVME_NS_FLBAS_META_EXT)
-			ns->features |= NVME_NS_EXT_LBAS;
-
 		/*
-		 * For PCI, Extended logical block will be generated by the
-		 * controller. Non-extended format can be generated by the
-		 * block layer.
+		 * For PCIe only the separate metadata pointer is supported,
+		 * as the block layer supplies metadata in a separate bio_vec
+		 * chain. For Fabrics, only metadata as part of extended data
+		 * LBA is supported on the wire per the Fabrics specification,
+		 * but the HBA/HCA will do the remapping from the separate
+		 * metadata buffers for us.
 		 */
-		if (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED) {
-			if (!(ns->features & NVME_NS_EXT_LBAS))
+		if (id->flbas & NVME_NS_FLBAS_META_EXT) {
+			ns->features |= NVME_NS_EXT_LBAS;
+			if ((ctrl->ops->flags & NVME_F_FABRICS) &&
+			    (ctrl->ops->flags & NVME_F_METADATA_SUPPORTED) &&
+			    ctrl->pi_capable)
+				ns->features |= NVME_NS_METADATA_SUPPORTED;
+		} else {
+			if (WARN_ON_ONCE(ctrl->ops->flags & NVME_F_FABRICS))
+				return -EINVAL;
+			if (ctrl->ops->flags & NVME_F_METADATA_SUPPORTED)
 				ns->features |= NVME_NS_METADATA_SUPPORTED;
 		}
 	}
@@ -1960,6 +1969,7 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
 		revalidate_disk(ns->head->disk);
 	}
 #endif
+	return 0;
 }
 
 static int nvme_revalidate_disk(struct gendisk *disk)
@@ -1995,7 +2005,7 @@ static int nvme_revalidate_disk(struct gendisk *disk)
 		goto free_id;
 	}
 
-	__nvme_revalidate_disk(disk, id);
+	ret = __nvme_revalidate_disk(disk, id);
 free_id:
 	kfree(id);
 out:
@@ -3649,7 +3659,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	memcpy(disk->disk_name, disk_name, DISK_NAME_LEN);
 	ns->disk = disk;
 
-	__nvme_revalidate_disk(disk, id);
+	if (__nvme_revalidate_disk(disk, id))
+		goto out_free_disk;
 
 	if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) {
 		ret = nvme_nvm_register(ns, disk_name, node);
@@ -3674,6 +3685,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	return;
  out_put_disk:
 	put_disk(ns->disk);
+ out_free_disk:
+	del_gendisk(ns->disk);
  out_unlink_ns:
 	mutex_lock(&ctrl->subsys->lock);
 	list_del_rcu(&ns->siblings);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 0f5fa85..f60ca01 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -291,6 +291,7 @@ struct nvme_ctrl {
 	u16 icdoff;
 	u16 maxcmd;
 	int nr_reconnects;
+	bool pi_capable;
 	struct nvmf_ctrl_options *opts;
 
 	struct page *discard_page;
-- 
1.8.3.1



* [PATCH 05/15] nvme: introduce max_integrity_segments ctrl attribute
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (3 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 06/15] nvme: introduce NVME_INLINE_MD_SG_CNT Max Gurtovoy
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

This patch doesn't change any logic, and is needed as preparation for
adding PI support for fabrics drivers that use an extended LBA format
for metadata and support more than one integrity segment.
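
For reference (taken from the pci hunk below and the nvme-rdma patch
later in this series), each transport advertises its own limit and the
core passes it through to blk_queue_max_integrity_segments():

	dev->ctrl.max_integrity_segments = 1;			/* pci */
	ctrl->ctrl.max_integrity_segments = ctrl->max_fr_pages;	/* rdma */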

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c | 11 +++++++----
 drivers/nvme/host/nvme.h |  1 +
 drivers/nvme/host/pci.c  |  6 ++++++
 3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index dd59de0..b437655 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1688,7 +1688,8 @@ static int nvme_getgeo(struct block_device *bdev, struct hd_geometry *geo)
 }
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type)
+static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
+				u32 max_integrity_segments)
 {
 	struct blk_integrity integrity;
 
@@ -1711,10 +1712,11 @@ static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type)
 	}
 	integrity.tuple_size = ms;
 	blk_integrity_register(disk, &integrity);
-	blk_queue_max_integrity_segments(disk->queue, 1);
+	blk_queue_max_integrity_segments(disk->queue, max_integrity_segments);
 }
 #else
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type)
+static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
+				u32 max_integrity_segments)
 {
 }
 #endif /* CONFIG_BLK_DEV_INTEGRITY */
@@ -1890,7 +1892,8 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	if (ns->ms) {
 		if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
 		    (ns->features & NVME_NS_METADATA_SUPPORTED))
-			nvme_init_integrity(disk, ns->ms, ns->pi_type);
+			nvme_init_integrity(disk, ns->ms, ns->pi_type,
+					    ns->ctrl->max_integrity_segments);
 		else if (!nvme_ns_has_pi(ns))
 			capacity = 0;
 	}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f60ca01..a9c3d82 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -229,6 +229,7 @@ struct nvme_ctrl {
 	u32 page_size;
 	u32 max_hw_sectors;
 	u32 max_segments;
+	u32 max_integrity_segments;
 	u16 crdt[3];
 	u16 oncs;
 	u16 oacs;
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4e79e41..fad9e78 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2553,6 +2553,12 @@ static void nvme_reset_work(struct work_struct *work)
 		goto out;
 	}
 
+	/*
+	 * We do not support an SGL for metadata (yet), so we are limited to a
+	 * single integrity segment for the separate metadata pointer.
+	 */
+	dev->ctrl.max_integrity_segments = 1;
+
 	result = nvme_init_identify(&dev->ctrl);
 	if (result)
 		goto out;
-- 
1.8.3.1



* [PATCH 06/15] nvme: introduce NVME_INLINE_MD_SG_CNT
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (4 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 05/15] nvme: introduce max_integrity_segments ctrl attribute Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 07/15] nvme-rdma: introduce nvme_rdma_sgl structure Max Gurtovoy
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

The SGL size of metadata is usually small, so a single inline sg should
cover most cases. The macro will be used to preallocate a single SGL
entry for metadata. The preallocation of small inline SGLs depends on
the SG_CHAIN capability, so if the architecture doesn't support
SG_CHAIN, fall back to runtime allocation of the SGL. This patch is
preparation for adding metadata (T10-PI) over fabrics support.
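
A sketch of the intended use (the actual call site is added in the
nvme-rdma metadata patch later in this series):

	ret = sg_alloc_table_chained(&req->md_sgl->sg_table,
			blk_rq_count_integrity_sg(rq->q, rq->bio),
			req->md_sgl->sg_table.sgl, NVME_INLINE_MD_SG_CNT);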

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/nvme.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index a9c3d82..f13c483 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -31,8 +31,10 @@
 
 #ifdef CONFIG_ARCH_NO_SG_CHAIN
 #define  NVME_INLINE_SG_CNT  0
+#define  NVME_INLINE_MD_SG_CNT  0
 #else
 #define  NVME_INLINE_SG_CNT  2
+#define  NVME_INLINE_MD_SG_CNT  1
 #endif
 
 extern struct workqueue_struct *nvme_wq;
-- 
1.8.3.1



* [PATCH 07/15] nvme-rdma: introduce nvme_rdma_sgl structure
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (5 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 06/15] nvme: introduce NVME_INLINE_MD_SG_CNT Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 08/15] nvme-rdma: add metadata/T10-PI support Max Gurtovoy
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

Remove the first_sgl pointer from struct nvme_rdma_request and use
pointer arithmetic instead. The inline scatterlist, if it exists, will
be located right after the nvme_rdma_request. This patch is needed as
preparation for adding PI support.
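
The resulting per-request layout and the way the inline data SGL is now
reached (see the nvme_rdma_map_data() hunk below):

	/* [ struct nvme_rdma_request | inline data scatterlist ... ] */
	req->data_sgl.sg_table.sgl = (struct scatterlist *)(req + 1);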

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/rdma.c | 41 ++++++++++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index cac8a93..4086874 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -48,6 +48,11 @@ struct nvme_rdma_qe {
 	u64			dma;
 };
 
+struct nvme_rdma_sgl {
+	int			nents;
+	struct sg_table		sg_table;
+};
+
 struct nvme_rdma_queue;
 struct nvme_rdma_request {
 	struct nvme_request	req;
@@ -58,12 +63,10 @@ struct nvme_rdma_request {
 	refcount_t		ref;
 	struct ib_sge		sge[1 + NVME_RDMA_MAX_INLINE_SEGMENTS];
 	u32			num_sge;
-	int			nents;
 	struct ib_reg_wr	reg_wr;
 	struct ib_cqe		reg_cqe;
 	struct nvme_rdma_queue  *queue;
-	struct sg_table		sg_table;
-	struct scatterlist	first_sgl[];
+	struct nvme_rdma_sgl	data_sgl;
 };
 
 enum nvme_rdma_queue_flags {
@@ -1158,8 +1161,9 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 		req->mr = NULL;
 	}
 
-	ib_dma_unmap_sg(ibdev, req->sg_table.sgl, req->nents, rq_dma_dir(rq));
-	sg_free_table_chained(&req->sg_table, NVME_INLINE_SG_CNT);
+	ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
+			rq_dma_dir(rq));
+	sg_free_table_chained(&req->data_sgl.sg_table, NVME_INLINE_SG_CNT);
 }
 
 static int nvme_rdma_set_sg_null(struct nvme_command *c)
@@ -1178,7 +1182,7 @@ static int nvme_rdma_map_sg_inline(struct nvme_rdma_queue *queue,
 		int count)
 {
 	struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
-	struct scatterlist *sgl = req->sg_table.sgl;
+	struct scatterlist *sgl = req->data_sgl.sg_table.sgl;
 	struct ib_sge *sge = &req->sge[1];
 	u32 len = 0;
 	int i;
@@ -1203,8 +1207,8 @@ static int nvme_rdma_map_sg_single(struct nvme_rdma_queue *queue,
 {
 	struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
 
-	sg->addr = cpu_to_le64(sg_dma_address(req->sg_table.sgl));
-	put_unaligned_le24(sg_dma_len(req->sg_table.sgl), sg->length);
+	sg->addr = cpu_to_le64(sg_dma_address(req->data_sgl.sg_table.sgl));
+	put_unaligned_le24(sg_dma_len(req->data_sgl.sg_table.sgl), sg->length);
 	put_unaligned_le32(queue->device->pd->unsafe_global_rkey, sg->key);
 	sg->type = NVME_KEY_SGL_FMT_DATA_DESC << 4;
 	return 0;
@@ -1225,7 +1229,8 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
 	 * Align the MR to a 4K page size to match the ctrl page size and
 	 * the block virtual boundary.
 	 */
-	nr = ib_map_mr_sg(req->mr, req->sg_table.sgl, count, NULL, SZ_4K);
+	nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL,
+			  SZ_4K);
 	if (unlikely(nr < count)) {
 		ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
 		req->mr = NULL;
@@ -1272,17 +1277,18 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 	if (!blk_rq_nr_phys_segments(rq))
 		return nvme_rdma_set_sg_null(c);
 
-	req->sg_table.sgl = req->first_sgl;
-	ret = sg_alloc_table_chained(&req->sg_table,
-			blk_rq_nr_phys_segments(rq), req->sg_table.sgl,
+	req->data_sgl.sg_table.sgl = (struct scatterlist *)(req + 1);
+	ret = sg_alloc_table_chained(&req->data_sgl.sg_table,
+			blk_rq_nr_phys_segments(rq), req->data_sgl.sg_table.sgl,
 			NVME_INLINE_SG_CNT);
 	if (ret)
 		return -ENOMEM;
 
-	req->nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);
+	req->data_sgl.nents = blk_rq_map_sg(rq->q, rq,
+					    req->data_sgl.sg_table.sgl);
 
-	count = ib_dma_map_sg(ibdev, req->sg_table.sgl, req->nents,
-			      rq_dma_dir(rq));
+	count = ib_dma_map_sg(ibdev, req->data_sgl.sg_table.sgl,
+			      req->data_sgl.nents, rq_dma_dir(rq));
 	if (unlikely(count <= 0)) {
 		ret = -EIO;
 		goto out_free_table;
@@ -1311,9 +1317,10 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 	return 0;
 
 out_unmap_sg:
-	ib_dma_unmap_sg(ibdev, req->sg_table.sgl, req->nents, rq_dma_dir(rq));
+	ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
+			rq_dma_dir(rq));
 out_free_table:
-	sg_free_table_chained(&req->sg_table, NVME_INLINE_SG_CNT);
+	sg_free_table_chained(&req->data_sgl.sg_table, NVME_INLINE_SG_CNT);
 	return ret;
 }
 
-- 
1.8.3.1



* [PATCH 08/15] nvme-rdma: add metadata/T10-PI support
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (6 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 07/15] nvme-rdma: introduce nvme_rdma_sgl structure Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 14:26   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 09/15] nvmet: add metadata characteristics for a namespace Max Gurtovoy
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

For capable HCAs (e.g. ConnectX-5/ConnectX-6) this will allow end-to-end
protection information passthrough and validation for the NVMe over RDMA
transport. Metadata offload support was implemented over the new RDMA
signature verbs API and is enabled per capable controller.
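
A condensed sketch of the PI data path added below (non-PI path and
error handling omitted): a single IB_WR_REG_MR_INTEGRITY work request
registers the data SGL and the metadata SGL under one signature MR, and
the signature status is checked on completion via ib_check_mr_status():

	req->mr = ib_mr_pool_get(queue->qp, &queue->qp->sig_mrs);
	nr = ib_map_mr_sg_pi(req->mr, req->data_sgl.sg_table.sgl, count, 0,
			     req->md_sgl->sg_table.sgl, pi_count, 0, SZ_4K);
	wr->wr.opcode = IB_WR_REG_MR_INTEGRITY;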

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
---
 drivers/nvme/host/rdma.c | 321 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 285 insertions(+), 36 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4086874..e94ef95 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -67,6 +67,9 @@ struct nvme_rdma_request {
 	struct ib_cqe		reg_cqe;
 	struct nvme_rdma_queue  *queue;
 	struct nvme_rdma_sgl	data_sgl;
+	/* Metadata (T10-PI) support */
+	struct nvme_rdma_sgl	*md_sgl;
+	bool			use_md;
 };
 
 enum nvme_rdma_queue_flags {
@@ -88,6 +91,7 @@ struct nvme_rdma_queue {
 	struct rdma_cm_id	*cm_id;
 	int			cm_error;
 	struct completion	cm_done;
+	bool			pi_support;
 };
 
 struct nvme_rdma_ctrl {
@@ -264,6 +268,8 @@ static int nvme_rdma_create_qp(struct nvme_rdma_queue *queue, const int factor)
 	init_attr.qp_type = IB_QPT_RC;
 	init_attr.send_cq = queue->ib_cq;
 	init_attr.recv_cq = queue->ib_cq;
+	if (queue->pi_support)
+		init_attr.create_flags |= IB_QP_CREATE_INTEGRITY_EN;
 
 	ret = rdma_create_qp(queue->cm_id, dev->pd, &init_attr);
 
@@ -279,6 +285,18 @@ static void nvme_rdma_exit_request(struct blk_mq_tag_set *set,
 	kfree(req->sqe.data);
 }
 
+static unsigned int nvme_rdma_cmd_size(bool has_md)
+{
+	/*
+	 * nvme rdma command consists of struct nvme_rdma_request, data SGL,
+	 * PI nvme_rdma_sgl struct and then PI SGL.
+	 */
+	return sizeof(struct nvme_rdma_request) +
+	       sizeof(struct scatterlist) * NVME_INLINE_SG_CNT +
+	       has_md * (sizeof(struct nvme_rdma_sgl) +
+			 sizeof(struct scatterlist) * NVME_INLINE_MD_SG_CNT);
+}
+
 static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
 		struct request *rq, unsigned int hctx_idx,
 		unsigned int numa_node)
@@ -293,6 +311,10 @@ static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
 	if (!req->sqe.data)
 		return -ENOMEM;
 
+	/* metadata nvme_rdma_sgl struct is located after command's data SGL */
+	if (queue->pi_support)
+		req->md_sgl = (void *)nvme_req(rq) + nvme_rdma_cmd_size(false);
+
 	req->queue = queue;
 
 	return 0;
@@ -403,6 +425,8 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
 	dev = queue->device;
 	ibdev = dev->dev;
 
+	if (queue->pi_support)
+		ib_mr_pool_destroy(queue->qp, &queue->qp->sig_mrs);
 	ib_mr_pool_destroy(queue->qp, &queue->qp->rdma_mrs);
 
 	/*
@@ -419,10 +443,16 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
 	nvme_rdma_dev_put(dev);
 }
 
-static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev)
+static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev, bool pi_support)
 {
-	return min_t(u32, NVME_RDMA_MAX_SEGMENTS,
-		     ibdev->attrs.max_fast_reg_page_list_len - 1);
+	u32 max_page_list_len;
+
+	if (pi_support)
+		max_page_list_len = ibdev->attrs.max_pi_fast_reg_page_list_len;
+	else
+		max_page_list_len = ibdev->attrs.max_fast_reg_page_list_len;
+
+	return min_t(u32, NVME_RDMA_MAX_SEGMENTS, max_page_list_len - 1);
 }
 
 static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
@@ -479,7 +509,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 	 * misaligned we'll end up using two entries for a single data page,
 	 * so one additional entry is required.
 	 */
-	pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev) + 1;
+	pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev, queue->pi_support) + 1;
 	ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs,
 			      queue->queue_size,
 			      IB_MR_TYPE_MEM_REG,
@@ -491,10 +521,24 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 		goto out_destroy_ring;
 	}
 
+	if (queue->pi_support) {
+		ret = ib_mr_pool_init(queue->qp, &queue->qp->sig_mrs,
+				      queue->queue_size, IB_MR_TYPE_INTEGRITY,
+				      pages_per_mr, pages_per_mr);
+		if (ret) {
+			dev_err(queue->ctrl->ctrl.device,
+				"failed to initialize PI MR pool sized %d for QID %d\n",
+				queue->queue_size, idx);
+			goto out_destroy_mr_pool;
+		}
+	}
+
 	set_bit(NVME_RDMA_Q_TR_READY, &queue->flags);
 
 	return 0;
 
+out_destroy_mr_pool:
+	ib_mr_pool_destroy(queue->qp, &queue->qp->rdma_mrs);
 out_destroy_ring:
 	nvme_rdma_free_ring(ibdev, queue->rsp_ring, queue->queue_size,
 			    sizeof(struct nvme_completion), DMA_FROM_DEVICE);
@@ -516,6 +560,7 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 
 	queue = &ctrl->queues[idx];
 	queue->ctrl = ctrl;
+	queue->pi_support = idx && ctrl->ctrl.pi_capable;
 	init_completion(&queue->cm_done);
 
 	if (idx > 0)
@@ -725,8 +770,7 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
 		set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
 		set->reserved_tags = 2; /* connect + keep-alive */
 		set->numa_node = nctrl->numa_node;
-		set->cmd_size = sizeof(struct nvme_rdma_request) +
-			NVME_INLINE_SG_CNT * sizeof(struct scatterlist);
+		set->cmd_size = nvme_rdma_cmd_size(false);
 		set->driver_data = ctrl;
 		set->nr_hw_queues = 1;
 		set->timeout = ADMIN_TIMEOUT;
@@ -739,8 +783,7 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
 		set->reserved_tags = 1; /* fabric connect */
 		set->numa_node = nctrl->numa_node;
 		set->flags = BLK_MQ_F_SHOULD_MERGE;
-		set->cmd_size = sizeof(struct nvme_rdma_request) +
-			NVME_INLINE_SG_CNT * sizeof(struct scatterlist);
+		set->cmd_size = nvme_rdma_cmd_size(nctrl->pi_capable);
 		set->driver_data = ctrl;
 		set->nr_hw_queues = nctrl->queue_count - 1;
 		set->timeout = NVME_IO_TIMEOUT;
@@ -782,7 +825,15 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	ctrl->device = ctrl->queues[0].device;
 	ctrl->ctrl.numa_node = dev_to_node(ctrl->device->dev->dma_device);
 
-	ctrl->max_fr_pages = nvme_rdma_get_max_fr_pages(ctrl->device->dev);
+	/* T10-PI support */
+	if (ctrl->device->dev->attrs.device_cap_flags &
+	    IB_DEVICE_INTEGRITY_HANDOVER)
+		ctrl->ctrl.pi_capable = true;
+	else
+		ctrl->ctrl.pi_capable = false;
+
+	ctrl->max_fr_pages = nvme_rdma_get_max_fr_pages(ctrl->device->dev,
+							ctrl->ctrl.pi_capable);
 
 	/*
 	 * Bind the async event SQE DMA mapping to the admin queue lifetime.
@@ -824,6 +875,8 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 
 	ctrl->ctrl.max_segments = ctrl->max_fr_pages;
 	ctrl->ctrl.max_hw_sectors = ctrl->max_fr_pages << (ilog2(SZ_4K) - 9);
+	if (ctrl->ctrl.pi_capable)
+		ctrl->ctrl.max_integrity_segments = ctrl->max_fr_pages;
 
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 
@@ -1146,6 +1199,17 @@ static int nvme_rdma_inv_rkey(struct nvme_rdma_queue *queue,
 	return ib_post_send(queue->qp, &wr, NULL);
 }
 
+static void nvme_rdma_mr_pool_put(struct ib_qp *qp,
+		struct nvme_rdma_request *req)
+{
+	if (req->use_md)
+		ib_mr_pool_put(qp, &qp->sig_mrs, req->mr);
+	else
+		ib_mr_pool_put(qp, &qp->rdma_mrs, req->mr);
+
+	req->mr = NULL;
+}
+
 static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 		struct request *rq)
 {
@@ -1156,11 +1220,16 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 	if (!blk_rq_nr_phys_segments(rq))
 		return;
 
-	if (req->mr) {
-		ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
-		req->mr = NULL;
+	if (blk_integrity_rq(rq)) {
+		ib_dma_unmap_sg(ibdev, req->md_sgl->sg_table.sgl,
+				req->md_sgl->nents, rq_dma_dir(rq));
+		sg_free_table_chained(&req->md_sgl->sg_table,
+				      NVME_INLINE_MD_SG_CNT);
 	}
 
+	if (req->mr)
+		nvme_rdma_mr_pool_put(queue->qp, req);
+
 	ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
 			rq_dma_dir(rq));
 	sg_free_table_chained(&req->data_sgl.sg_table, NVME_INLINE_SG_CNT);
@@ -1214,34 +1283,102 @@ static int nvme_rdma_map_sg_single(struct nvme_rdma_queue *queue,
 	return 0;
 }
 
-static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
-		struct nvme_rdma_request *req, struct nvme_command *c,
-		int count)
+static void nvme_rdma_set_sig_domain(struct blk_integrity *bi,
+		struct nvme_command *cmd, struct ib_sig_domain *domain,
+		u16 control)
+{
+	domain->sig_type = IB_SIG_TYPE_T10_DIF;
+	domain->sig.dif.bg_type = IB_T10DIF_CRC;
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	domain->sig.dif.pi_interval = 1 << bi->interval_exp;
+#endif
+	domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
+	domain->sig.dif.apptag_check_mask = 0xffff;
+	domain->sig.dif.app_escape = true;
+	domain->sig.dif.ref_escape = true;
+	if (control & NVME_RW_PRINFO_PRCHK_REF)
+		domain->sig.dif.ref_remap = true;
+}
+
+static void nvme_rdma_set_sig_attrs(struct blk_integrity *bi,
+		struct nvme_command *cmd, struct ib_sig_attrs *sig_attrs)
+{
+	u16 control = le16_to_cpu(cmd->rw.control);
+
+	memset(sig_attrs, 0, sizeof(*sig_attrs));
+	if (control & NVME_RW_PRINFO_PRACT) {
+		/* for WRITE_INSERT/READ_STRIP no memory domain */
+		sig_attrs->mem.sig_type = IB_SIG_TYPE_NONE;
+		nvme_rdma_set_sig_domain(bi, cmd, &sig_attrs->wire, control);
+		/* Clear the PRACT bit since HCA will generate/verify the PI */
+		control &= ~NVME_RW_PRINFO_PRACT;
+		cmd->rw.control = cpu_to_le16(control);
+	} else {
+		/* for WRITE_PASS/READ_PASS both wire/memory domains exist */
+		nvme_rdma_set_sig_domain(bi, cmd, &sig_attrs->wire, control);
+		nvme_rdma_set_sig_domain(bi, cmd, &sig_attrs->mem, control);
+	}
+}
+
+static void nvme_rdma_set_prot_checks(struct nvme_command *cmd, u8 *mask)
+{
+	*mask = 0;
+	if (le16_to_cpu(cmd->rw.control) & NVME_RW_PRINFO_PRCHK_REF)
+		*mask |= IB_SIG_CHECK_REFTAG;
+	if (le16_to_cpu(cmd->rw.control) & NVME_RW_PRINFO_PRCHK_GUARD)
+		*mask |= IB_SIG_CHECK_GUARD;
+}
+
+static void nvme_rdma_sig_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	if (unlikely(wc->status != IB_WC_SUCCESS))
+		nvme_rdma_wr_error(cq, wc, "SIG");
+}
+
+static void nvme_rdma_set_pi_wr(struct nvme_rdma_request *req,
+				struct nvme_command *c)
 {
+	struct ib_sig_attrs *sig_attrs = req->mr->sig_attrs;
+	struct ib_reg_wr *wr = &req->reg_wr;
+	struct request *rq = blk_mq_rq_from_pdu(req);
+	struct bio *bio = rq->bio;
 	struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
-	int nr;
 
-	req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
-	if (WARN_ON_ONCE(!req->mr))
-		return -EAGAIN;
+	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+		return;
 
-	/*
-	 * Align the MR to a 4K page size to match the ctrl page size and
-	 * the block virtual boundary.
-	 */
-	nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL,
-			  SZ_4K);
-	if (unlikely(nr < count)) {
-		ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
-		req->mr = NULL;
-		if (nr < 0)
-			return nr;
-		return -EINVAL;
-	}
+	nvme_rdma_set_sig_attrs(blk_get_integrity(bio->bi_disk), c, sig_attrs);
+
+	nvme_rdma_set_prot_checks(c, &sig_attrs->check_mask);
 
 	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
 
+	req->reg_cqe.done = nvme_rdma_sig_done;
+
+	memset(wr, 0, sizeof(*wr));
+	wr->wr.opcode = IB_WR_REG_MR_INTEGRITY;
+	wr->wr.wr_cqe = &req->reg_cqe;
+	wr->wr.num_sge = 0;
+	wr->wr.send_flags = 0;
+	wr->mr = req->mr;
+	wr->key = req->mr->rkey;
+	wr->access = IB_ACCESS_LOCAL_WRITE |
+		     IB_ACCESS_REMOTE_READ |
+		     IB_ACCESS_REMOTE_WRITE;
+
+	sg->addr = cpu_to_le64(req->mr->iova);
+	put_unaligned_le24(req->mr->length, sg->length);
+	put_unaligned_le32(req->mr->rkey, sg->key);
+	sg->type = NVME_KEY_SGL_FMT_DATA_DESC << 4;
+}
+
+static void nvme_rdma_set_reg_wr(struct nvme_rdma_request *req,
+				 struct nvme_command *c)
+{
+	struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
+
 	req->reg_cqe.done = nvme_rdma_memreg_done;
+
 	memset(&req->reg_wr, 0, sizeof(req->reg_wr));
 	req->reg_wr.wr.opcode = IB_WR_REG_MR;
 	req->reg_wr.wr.wr_cqe = &req->reg_cqe;
@@ -1257,8 +1394,48 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
 	put_unaligned_le32(req->mr->rkey, sg->key);
 	sg->type = (NVME_KEY_SGL_FMT_DATA_DESC << 4) |
 			NVME_SGL_FMT_INVALIDATE;
+}
+
+static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
+		struct nvme_rdma_request *req, struct nvme_command *c,
+		int count, int pi_count)
+{
+	struct nvme_rdma_sgl *sgl = &req->data_sgl;
+	int nr;
+
+	if (req->use_md) {
+		req->mr = ib_mr_pool_get(queue->qp, &queue->qp->sig_mrs);
+		if (WARN_ON_ONCE(!req->mr))
+			return -EAGAIN;
+
+		nr = ib_map_mr_sg_pi(req->mr, sgl->sg_table.sgl, count, 0,
+				     req->md_sgl->sg_table.sgl, pi_count, 0,
+				     SZ_4K);
+		if (unlikely(nr))
+			goto mr_put;
+
+		nvme_rdma_set_pi_wr(req, c);
+	} else {
+		req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
+		if (WARN_ON_ONCE(!req->mr))
+			return -EAGAIN;
+		/*
+		 * Align the MR to a 4K page size to match the ctrl page size
+		 * and the block virtual boundary.
+		 */
+		nr = ib_map_mr_sg(req->mr, sgl->sg_table.sgl, count, 0, SZ_4K);
+		if (unlikely(nr < count))
+			goto mr_put;
+
+		nvme_rdma_set_reg_wr(req, c);
+	}
 
 	return 0;
+mr_put:
+	nvme_rdma_mr_pool_put(queue->qp, req);
+	if (nr < 0)
+		return nr;
+	return -EINVAL;
 }
 
 static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
@@ -1267,6 +1444,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
+	int pi_count = 0;
 	int count, ret;
 
 	req->num_sge = 1;
@@ -1294,7 +1472,29 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 		goto out_free_table;
 	}
 
-	if (count <= dev->num_inline_segments) {
+	if (blk_integrity_rq(rq)) {
+		req->md_sgl->sg_table.sgl =
+			(struct scatterlist *)(req->md_sgl + 1);
+		ret = sg_alloc_table_chained(&req->md_sgl->sg_table,
+				blk_rq_count_integrity_sg(rq->q, rq->bio),
+				req->md_sgl->sg_table.sgl,
+				NVME_INLINE_MD_SG_CNT);
+		if (unlikely(ret)) {
+			ret = -ENOMEM;
+			goto out_unmap_sg;
+		}
+
+		req->md_sgl->nents = blk_rq_map_integrity_sg(rq->q, rq->bio,
+					req->md_sgl->sg_table.sgl);
+		pi_count = ib_dma_map_sg(ibdev, req->md_sgl->sg_table.sgl,
+					 req->md_sgl->nents, rq_dma_dir(rq));
+		if (unlikely(pi_count <= 0)) {
+			ret = -EIO;
+			goto out_free_pi_table;
+		}
+	}
+
+	if (count <= dev->num_inline_segments && !req->use_md) {
 		if (rq_data_dir(rq) == WRITE && nvme_rdma_queue_idx(queue) &&
 		    queue->ctrl->use_inline_data &&
 		    blk_rq_payload_bytes(rq) <=
@@ -1309,13 +1509,21 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 		}
 	}
 
-	ret = nvme_rdma_map_sg_fr(queue, req, c, count);
+	ret = nvme_rdma_map_sg_fr(queue, req, c, count, pi_count);
 out:
 	if (unlikely(ret))
-		goto out_unmap_sg;
+		goto out_unmap_pi_sg;
 
 	return 0;
 
+out_unmap_pi_sg:
+	if (blk_integrity_rq(rq))
+		ib_dma_unmap_sg(ibdev, req->md_sgl->sg_table.sgl,
+				req->md_sgl->nents, rq_dma_dir(rq));
+out_free_pi_table:
+	if (blk_integrity_rq(rq))
+		sg_free_table_chained(&req->md_sgl->sg_table,
+				      NVME_INLINE_MD_SG_CNT);
 out_unmap_sg:
 	ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
 			rq_dma_dir(rq));
@@ -1768,6 +1976,10 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	blk_mq_start_request(rq);
 
+	if (c->common.opcode == nvme_cmd_write ||
+	    c->common.opcode == nvme_cmd_read)
+		req->use_md = queue->pi_support && nvme_ns_has_pi(ns);
+
 	err = nvme_rdma_map_data(queue, rq, c);
 	if (unlikely(err < 0)) {
 		dev_err(queue->ctrl->ctrl.device,
@@ -1808,12 +2020,49 @@ static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx)
 	return ib_process_cq_direct(queue->ib_cq, -1);
 }
 
+static void nvme_rdma_check_pi_status(struct nvme_rdma_request *req)
+{
+	struct request *rq = blk_mq_rq_from_pdu(req);
+	struct ib_mr_status mr_status;
+	int ret;
+
+	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+		return;
+
+	ret = ib_check_mr_status(req->mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
+	if (ret) {
+		pr_err("ib_check_mr_status failed, ret %d\n", ret);
+		nvme_req(rq)->status = NVME_SC_INVALID_PI;
+		return;
+	}
+
+	if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
+		switch (mr_status.sig_err.err_type) {
+		case IB_SIG_BAD_GUARD:
+			nvme_req(rq)->status = NVME_SC_GUARD_CHECK;
+			break;
+		case IB_SIG_BAD_REFTAG:
+			nvme_req(rq)->status = NVME_SC_REFTAG_CHECK;
+			break;
+		case IB_SIG_BAD_APPTAG:
+			nvme_req(rq)->status = NVME_SC_APPTAG_CHECK;
+			break;
+		}
+		pr_err("PI error found type %d expected 0x%x vs actual 0x%x\n",
+		       mr_status.sig_err.err_type, mr_status.sig_err.expected,
+		       mr_status.sig_err.actual);
+	}
+}
+
 static void nvme_rdma_complete_rq(struct request *rq)
 {
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_queue *queue = req->queue;
 	struct ib_device *ibdev = queue->device->dev;
 
+	if (req->use_md)
+		nvme_rdma_check_pi_status(req);
+
 	nvme_rdma_unmap_data(queue, rq);
 	ib_dma_unmap_single(ibdev, req->sqe.dma, sizeof(struct nvme_command),
 			    DMA_TO_DEVICE);
@@ -1933,7 +2182,7 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
 static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
 	.name			= "rdma",
 	.module			= THIS_MODULE,
-	.flags			= NVME_F_FABRICS,
+	.flags			= NVME_F_FABRICS | NVME_F_METADATA_SUPPORTED,
 	.reg_read32		= nvmf_reg_read32,
 	.reg_read64		= nvmf_reg_read64,
 	.reg_write32		= nvmf_reg_write32,
-- 
1.8.3.1



* [PATCH 09/15] nvmet: add metadata characteristics for a namespace
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (7 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 08/15] nvme-rdma: add metadata/T10-PI support Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 15:50   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 10/15] nvmet: rename nvmet_rw_len to nvmet_rw_data_len Max Gurtovoy
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

Fill these namespace fields from the block device's integrity format,
as preparation for adding metadata (T10-PI) over fabrics support with
block device backends.
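
These fields are expected to be reflected back to the host through the
Identify Namespace data, roughly along these lines (hypothetical
sketch, not part of this patch):

	id->lbaf[0].ms = cpu_to_le16(ns->metadata_size);
	id->dps = ns->pi_type;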

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
---
 drivers/nvme/target/io-cmd-bdev.c | 27 +++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h       |  3 +++
 2 files changed, 30 insertions(+)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 0427e04..b13d7b5 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -47,6 +47,28 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id)
 	id->nows = to0based(ql->io_opt / ql->logical_block_size);
 }
 
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static void nvmet_bdev_ns_enable_integrity(struct nvmet_ns *ns)
+{
+	struct blk_integrity *bi = bdev_get_integrity(ns->bdev);
+
+	if (bi) {
+		ns->metadata_size = bi->tuple_size;
+		if (bi->profile == &t10_pi_type1_crc)
+			ns->pi_type = NVME_NS_DPS_PI_TYPE1;
+		else if (bi->profile == &t10_pi_type3_crc)
+			ns->pi_type = NVME_NS_DPS_PI_TYPE3;
+		else
+			/* Unsupported metadata type */
+			ns->metadata_size = 0;
+	}
+}
+#else
+static void nvmet_bdev_ns_enable_integrity(struct nvmet_ns *ns)
+{
+}
+#endif /* CONFIG_BLK_DEV_INTEGRITY */
+
 int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
 {
 	int ret;
@@ -64,6 +86,11 @@ int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
 	}
 	ns->size = i_size_read(ns->bdev->bd_inode);
 	ns->blksize_shift = blksize_bits(bdev_logical_block_size(ns->bdev));
+
+	ns->pi_type = 0;
+	ns->metadata_size = 0;
+	nvmet_bdev_ns_enable_integrity(ns);
+
 	return 0;
 }
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 3d981eb..b9fb57b 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -19,6 +19,7 @@
 #include <linux/rcupdate.h>
 #include <linux/blkdev.h>
 #include <linux/radix-tree.h>
+#include <linux/t10-pi.h>
 
 #define NVMET_ASYNC_EVENTS		4
 #define NVMET_ERROR_LOG_SLOTS		128
@@ -77,6 +78,8 @@ struct nvmet_ns {
 
 	int			use_p2pmem;
 	struct pci_dev		*p2p_dev;
+	int			pi_type;
+	int			metadata_size;
 };
 
 static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
-- 
1.8.3.1



* [PATCH 10/15] nvmet: rename nvmet_rw_len to nvmet_rw_data_len
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (8 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 09/15] nvmet: add metadata characteristics for a namespace Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 11/15] nvmet: rename nvmet_check_data_len to nvmet_check_transfer_len Max Gurtovoy
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

The function doesn't add the metadata length (only the data length is
calculated). This is preparation for adding metadata (T10-PI) support.
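
For contrast, a metadata-aware helper would look roughly like this
(hypothetical sketch built on the ns->metadata_size field introduced
earlier in this series, not part of this patch):

	static inline u32 nvmet_rw_metadata_len(struct nvmet_req *req)
	{
		return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
				req->ns->metadata_size;
	}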

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
---
 drivers/nvme/target/io-cmd-bdev.c | 2 +-
 drivers/nvme/target/io-cmd-file.c | 2 +-
 drivers/nvme/target/nvmet.h       | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index b13d7b5..05df646d 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -183,7 +183,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	sector_t sector;
 	int op, i;
 
-	if (!nvmet_check_data_len(req, nvmet_rw_len(req)))
+	if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
 		return;
 
 	if (!req->sg_cnt) {
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 8ece9cab..df6abfa 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -242,7 +242,7 @@ static void nvmet_file_execute_rw(struct nvmet_req *req)
 {
 	ssize_t nr_bvec = req->sg_cnt;
 
-	if (!nvmet_check_data_len(req, nvmet_rw_len(req)))
+	if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
 		return;
 
 	if (!req->sg_cnt || !nr_bvec) {
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index b9fb57b..0ce9404 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -504,7 +504,7 @@ void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
 void nvmet_bdev_ns_revalidate(struct nvmet_ns *ns);
 int nvmet_file_ns_revalidate(struct nvmet_ns *ns);
 
-static inline u32 nvmet_rw_len(struct nvmet_req *req)
+static inline u32 nvmet_rw_data_len(struct nvmet_req *req)
 {
 	return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) <<
 			req->ns->blksize_shift;
-- 
1.8.3.1



* [PATCH 11/15] nvmet: rename nvmet_check_data_len to nvmet_check_transfer_len
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (9 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 10/15] nvmet: rename nvmet_rw_len to nvmet_rw_data_len Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 12/15] nvme: add Metadata Capabilities enumerations Max Gurtovoy
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

The function checks the transfer length rather than just the data
length, since in some cases the transfer length also includes the
metadata length. This is a preparation for adding metadata (T10-PI)
support.
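
Later patches in this series pass the full transfer length (data plus
interleaved metadata) for read/write commands. A condensed sketch of the
intended call sites, using the req->md_len field that is added later in
the series:

	/* commands that carry no data still pass an explicit zero */
	if (!nvmet_check_transfer_len(req, 0))
		return;

	/* R/W commands on a PI-enabled namespace pass data + metadata */
	if (!nvmet_check_transfer_len(req,
				      nvmet_rw_data_len(req) + req->md_len))
		return;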

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
---
 drivers/nvme/target/admin-cmd.c   | 14 +++++++-------
 drivers/nvme/target/core.c        |  6 +++---
 drivers/nvme/target/discovery.c   |  8 ++++----
 drivers/nvme/target/fabrics-cmd.c |  8 ++++----
 drivers/nvme/target/io-cmd-bdev.c |  6 +++---
 drivers/nvme/target/io-cmd-file.c |  6 +++---
 drivers/nvme/target/nvmet.h       |  2 +-
 7 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 4c79aa8..905aba8 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -295,7 +295,7 @@ static void nvmet_execute_get_log_page_ana(struct nvmet_req *req)
 
 static void nvmet_execute_get_log_page(struct nvmet_req *req)
 {
-	if (!nvmet_check_data_len(req, nvmet_get_log_page_len(req->cmd)))
+	if (!nvmet_check_transfer_len(req, nvmet_get_log_page_len(req->cmd)))
 		return;
 
 	switch (req->cmd->get_log_page.lid) {
@@ -630,7 +630,7 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req)
 
 static void nvmet_execute_identify(struct nvmet_req *req)
 {
-	if (!nvmet_check_data_len(req, NVME_IDENTIFY_DATA_SIZE))
+	if (!nvmet_check_transfer_len(req, NVME_IDENTIFY_DATA_SIZE))
 		return;
 
 	switch (req->cmd->identify.cns) {
@@ -659,7 +659,7 @@ static void nvmet_execute_identify(struct nvmet_req *req)
  */
 static void nvmet_execute_abort(struct nvmet_req *req)
 {
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 	nvmet_set_result(req, 1);
 	nvmet_req_complete(req, 0);
@@ -748,7 +748,7 @@ static void nvmet_execute_set_features(struct nvmet_req *req)
 	u16 nsqr;
 	u16 ncqr;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	switch (cdw10 & 0xff) {
@@ -820,7 +820,7 @@ static void nvmet_execute_get_features(struct nvmet_req *req)
 	u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
 	u16 status = 0;
 
-	if (!nvmet_check_data_len(req, nvmet_feat_data_len(req, cdw10)))
+	if (!nvmet_check_transfer_len(req, nvmet_feat_data_len(req, cdw10)))
 		return;
 
 	switch (cdw10 & 0xff) {
@@ -887,7 +887,7 @@ void nvmet_execute_async_event(struct nvmet_req *req)
 {
 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	mutex_lock(&ctrl->lock);
@@ -906,7 +906,7 @@ void nvmet_execute_keep_alive(struct nvmet_req *req)
 {
 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	pr_debug("ctrl %d update keep-alive timer for %d secs\n",
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index b685f99..50dfc60 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -936,9 +936,9 @@ void nvmet_req_uninit(struct nvmet_req *req)
 }
 EXPORT_SYMBOL_GPL(nvmet_req_uninit);
 
-bool nvmet_check_data_len(struct nvmet_req *req, size_t data_len)
+bool nvmet_check_transfer_len(struct nvmet_req *req, size_t len)
 {
-	if (unlikely(data_len != req->transfer_len)) {
+	if (unlikely(len != req->transfer_len)) {
 		req->error_loc = offsetof(struct nvme_common_command, dptr);
 		nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR);
 		return false;
@@ -946,7 +946,7 @@ bool nvmet_check_data_len(struct nvmet_req *req, size_t data_len)
 
 	return true;
 }
-EXPORT_SYMBOL_GPL(nvmet_check_data_len);
+EXPORT_SYMBOL_GPL(nvmet_check_transfer_len);
 
 bool nvmet_check_data_len_lte(struct nvmet_req *req, size_t data_len)
 {
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index 0c2274b..40cf0b6 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -171,7 +171,7 @@ static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
 	u16 status = 0;
 	void *buffer;
 
-	if (!nvmet_check_data_len(req, data_len))
+	if (!nvmet_check_transfer_len(req, data_len))
 		return;
 
 	if (req->cmd->get_log_page.lid != NVME_LOG_DISC) {
@@ -244,7 +244,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
 	const char model[] = "Linux";
 	u16 status = 0;
 
-	if (!nvmet_check_data_len(req, NVME_IDENTIFY_DATA_SIZE))
+	if (!nvmet_check_transfer_len(req, NVME_IDENTIFY_DATA_SIZE))
 		return;
 
 	if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
@@ -298,7 +298,7 @@ static void nvmet_execute_disc_set_features(struct nvmet_req *req)
 	u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
 	u16 stat;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	switch (cdw10 & 0xff) {
@@ -324,7 +324,7 @@ static void nvmet_execute_disc_get_features(struct nvmet_req *req)
 	u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
 	u16 stat = 0;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	switch (cdw10 & 0xff) {
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index feef15c..52a6f70 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -12,7 +12,7 @@ static void nvmet_execute_prop_set(struct nvmet_req *req)
 	u64 val = le64_to_cpu(req->cmd->prop_set.value);
 	u16 status = 0;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	if (req->cmd->prop_set.attrib & 1) {
@@ -41,7 +41,7 @@ static void nvmet_execute_prop_get(struct nvmet_req *req)
 	u16 status = 0;
 	u64 val = 0;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	if (req->cmd->prop_get.attrib & 1) {
@@ -156,7 +156,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 	struct nvmet_ctrl *ctrl = NULL;
 	u16 status = 0;
 
-	if (!nvmet_check_data_len(req, sizeof(struct nvmf_connect_data)))
+	if (!nvmet_check_transfer_len(req, sizeof(struct nvmf_connect_data)))
 		return;
 
 	d = kmalloc(sizeof(*d), GFP_KERNEL);
@@ -223,7 +223,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
 	u16 qid = le16_to_cpu(c->qid);
 	u16 status = 0;
 
-	if (!nvmet_check_data_len(req, sizeof(struct nvmf_connect_data)))
+	if (!nvmet_check_transfer_len(req, sizeof(struct nvmf_connect_data)))
 		return;
 
 	d = kmalloc(sizeof(*d), GFP_KERNEL);
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 05df646d..d265cf5 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -183,7 +183,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	sector_t sector;
 	int op, i;
 
-	if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
+	if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
 		return;
 
 	if (!req->sg_cnt) {
@@ -244,7 +244,7 @@ static void nvmet_bdev_execute_flush(struct nvmet_req *req)
 {
 	struct bio *bio = &req->b.inline_bio;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
@@ -336,7 +336,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	sector_t nr_sector;
 	int ret;
 
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
 	sector = le64_to_cpu(write_zeroes->slba) <<
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index df6abfa..b8a2e06 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -242,7 +242,7 @@ static void nvmet_file_execute_rw(struct nvmet_req *req)
 {
 	ssize_t nr_bvec = req->sg_cnt;
 
-	if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
+	if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
 		return;
 
 	if (!req->sg_cnt || !nr_bvec) {
@@ -286,7 +286,7 @@ static void nvmet_file_flush_work(struct work_struct *w)
 
 static void nvmet_file_execute_flush(struct nvmet_req *req)
 {
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 	INIT_WORK(&req->f.work, nvmet_file_flush_work);
 	schedule_work(&req->f.work);
@@ -376,7 +376,7 @@ static void nvmet_file_write_zeroes_work(struct work_struct *w)
 
 static void nvmet_file_execute_write_zeroes(struct nvmet_req *req)
 {
-	if (!nvmet_check_data_len(req, 0))
+	if (!nvmet_check_transfer_len(req, 0))
 		return;
 	INIT_WORK(&req->f.work, nvmet_file_write_zeroes_work);
 	schedule_work(&req->f.work);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 0ce9404..ceedaaf 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -387,7 +387,7 @@ static inline bool nvmet_aen_bit_disabled(struct nvmet_ctrl *ctrl, u32 bn)
 bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 		struct nvmet_sq *sq, const struct nvmet_fabrics_ops *ops);
 void nvmet_req_uninit(struct nvmet_req *req);
-bool nvmet_check_data_len(struct nvmet_req *req, size_t data_len);
+bool nvmet_check_transfer_len(struct nvmet_req *req, size_t len);
 bool nvmet_check_data_len_lte(struct nvmet_req *req, size_t data_len);
 void nvmet_req_complete(struct nvmet_req *req, u16 status);
 int nvmet_req_alloc_sgl(struct nvmet_req *req);
-- 
1.8.3.1



* [PATCH 12/15] nvme: add Metadata Capabilities enumerations
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (10 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 11/15] nvmet: rename nvmet_check_data_len to nvmet_check_transfer_len Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 15:53   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 13/15] nvmet: add metadata/T10-PI support Max Gurtovoy
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

The enumerations will be used by the target to expose the namespace
metadata format.
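
These values map to the Metadata Capabilities (MC) field of the Identify
Namespace data structure. A condensed sketch of how the target uses them
later in the series (only the extended-LBA layout is advertised):

	if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
		/* metadata is interleaved with the data (extended LBA) */
		id->mc = NVME_MC_EXTENDED_LBA;
		id->flbas = NVME_NS_FLBAS_META_EXT;
		id->lbaf[0].ms = ns->metadata_size;
	}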

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 include/linux/nvme.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 3d5189f..57c1b6b 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -412,6 +412,12 @@ enum {
 	NVME_NS_DPS_PI_TYPE3	= 3,
 };
 
+/* Identify Namespace Metadata Capabilities (MC): */
+enum {
+	NVME_MC_EXTENDED_LBA	= (1 << 0),
+	NVME_MC_METADATA_PTR	= (1 << 1),
+};
+
 struct nvme_ns_id_desc {
 	__u8 nidt;
 	__u8 nidl;
-- 
1.8.3.1



* [PATCH 13/15] nvmet: add metadata/T10-PI support
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (11 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 12/15] nvme: add Metadata Capabilities enumerations Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 15:58   ` Christoph Hellwig
  2020-05-01 16:19   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 14/15] nvmet: add metadata support for block devices Max Gurtovoy
  2020-04-28 13:11 ` [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support Max Gurtovoy
  14 siblings, 2 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

Expose the namespace metadata format when PI is enabled. The user needs
to enable the capability per subsystem and per port. The other metadata
properties are taken from the namespace/bdev.

Usage example:
echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable
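
Both knobs have to be set for a controller to come up PI-enabled; a
minimal sketch of the inheritance done in the admin connect path below:

	/* effective PI support = port param_pi_enable && subsys attr_pi_enable */
	if (ctrl->port->pi_enable)
		ctrl->pi_support = ctrl->subsys->pi_support;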

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/admin-cmd.c   | 26 ++++++++++++++---
 drivers/nvme/target/configfs.c    | 61 +++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/core.c        | 19 +++++++++---
 drivers/nvme/target/fabrics-cmd.c |  8 +++--
 drivers/nvme/target/nvmet.h       | 17 +++++++++++
 5 files changed, 121 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 905aba8..9c8a00d 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -341,6 +341,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 {
 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
 	struct nvme_id_ctrl *id;
+	u32 cmd_capsule_size;
 	u16 status = 0;
 
 	id = kzalloc(sizeof(*id), GFP_KERNEL);
@@ -433,9 +434,15 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 
 	strlcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
 
-	/* Max command capsule size is sqe + single page of in-capsule data */
-	id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
-				  req->port->inline_data_size) / 16);
+	/*
+	 * Max command capsule size is sqe + in-capsule data size.
+	 * Disable in-capsule data for Metadata capable controllers.
+	 */
+	cmd_capsule_size = sizeof(struct nvme_command);
+	if (!ctrl->pi_support)
+		cmd_capsule_size += req->port->inline_data_size;
+	id->ioccsz = cpu_to_le32(cmd_capsule_size / 16);
+
 	/* Max response capsule size is cqe */
 	id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
 
@@ -465,6 +472,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 
 static void nvmet_execute_identify_ns(struct nvmet_req *req)
 {
+	struct nvmet_ctrl *ctrl = req->sq->ctrl;
 	struct nvmet_ns *ns;
 	struct nvme_id_ns *id;
 	u16 status = 0;
@@ -482,7 +490,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
 	}
 
 	/* return an all zeroed buffer if we can't find an active namespace */
-	ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
+	ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
 	if (!ns)
 		goto done;
 
@@ -526,6 +534,16 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
 
 	id->lbaf[0].ds = ns->blksize_shift;
 
+	if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
+		id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST |
+			  NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 |
+			  NVME_NS_DPC_PI_TYPE3;
+		id->mc = NVME_MC_EXTENDED_LBA;
+		id->dps = ns->pi_type;
+		id->flbas = NVME_NS_FLBAS_META_EXT;
+		id->lbaf[0].ms = ns->metadata_size;
+	}
+
 	if (ns->readonly)
 		id->nsattr |= (1 << 0);
 	nvmet_put_namespace(ns);
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 58cabd7..4593430 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -248,6 +248,36 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
 
 CONFIGFS_ATTR(nvmet_, param_inline_data_size);
 
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
+		char *page)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", port->pi_enable);
+}
+
+static ssize_t nvmet_param_pi_enable_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	bool val;
+
+	if (strtobool(page, &val))
+		return -EINVAL;
+
+	if (port->enabled) {
+		pr_err("Disable port before setting pi_enable value.\n");
+		return -EACCES;
+	}
+
+	port->pi_enable = val;
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_pi_enable);
+#endif
+
 static ssize_t nvmet_addr_trtype_show(struct config_item *item,
 		char *page)
 {
@@ -987,6 +1017,31 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_model);
 
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_subsys_attr_pi_enable_show(struct config_item *item,
+						char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%d\n", to_subsys(item)->pi_support);
+}
+
+static ssize_t nvmet_subsys_attr_pi_enable_store(struct config_item *item,
+						 const char *page, size_t count)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+	bool pi_enable;
+
+	if (strtobool(page, &pi_enable))
+		return -EINVAL;
+
+	down_write(&nvmet_config_sem);
+	subsys->pi_support = pi_enable;
+	up_write(&nvmet_config_sem);
+
+	return count;
+}
+CONFIGFS_ATTR(nvmet_subsys_, attr_pi_enable);
+#endif
+
 static struct configfs_attribute *nvmet_subsys_attrs[] = {
 	&nvmet_subsys_attr_attr_allow_any_host,
 	&nvmet_subsys_attr_attr_version,
@@ -994,6 +1049,9 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
 	&nvmet_subsys_attr_attr_cntlid_min,
 	&nvmet_subsys_attr_attr_cntlid_max,
 	&nvmet_subsys_attr_attr_model,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	&nvmet_subsys_attr_attr_pi_enable,
+#endif
 	NULL,
 };
 
@@ -1297,6 +1355,9 @@ static void nvmet_port_release(struct config_item *item)
 	&nvmet_attr_addr_trsvcid,
 	&nvmet_attr_addr_trtype,
 	&nvmet_attr_param_inline_data_size,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	&nvmet_attr_param_pi_enable,
+#endif
 	NULL,
 };
 
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 50dfc60..a57ab0c 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -322,12 +322,19 @@ int nvmet_enable_port(struct nvmet_port *port)
 	if (!try_module_get(ops->owner))
 		return -EINVAL;
 
-	ret = ops->add_port(port);
-	if (ret) {
-		module_put(ops->owner);
-		return ret;
+	/*
+	 * If the user requested PI support and the transport isn't pi capable,
+	 * don't enable the port.
+	 */
+	if (port->pi_enable && !ops->metadata_support) {
+		ret = -EINVAL;
+		goto out_put;
 	}
 
+	ret = ops->add_port(port);
+	if (ret)
+		goto out_put;
+
 	/* If the transport didn't set inline_data_size, then disable it. */
 	if (port->inline_data_size < 0)
 		port->inline_data_size = 0;
@@ -335,6 +342,10 @@ int nvmet_enable_port(struct nvmet_port *port)
 	port->enabled = true;
 	port->tr_ops = ops;
 	return 0;
+
+out_put:
+	module_put(ops->owner);
+	return ret;
 }
 
 void nvmet_disable_port(struct nvmet_port *port)
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index 52a6f70..3abb85c 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -197,6 +197,9 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 		goto out;
 	}
 
+	if (ctrl->port->pi_enable)
+		ctrl->pi_support = ctrl->subsys->pi_support;
+
 	uuid_copy(&ctrl->hostid, &d->hostid);
 
 	status = nvmet_install_queue(ctrl, req);
@@ -205,8 +208,9 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 		goto out;
 	}
 
-	pr_info("creating controller %d for subsystem %s for NQN %s.\n",
-		ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn);
+	pr_info("creating controller %d for subsystem %s for NQN %s%s.\n",
+		ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn,
+		ctrl->pi_support ? " T10-PI is enabled" : "");
 	req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
 
 out:
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index ceedaaf..ccdf820 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -145,6 +145,7 @@ struct nvmet_port {
 	bool				enabled;
 	int				inline_data_size;
 	const struct nvmet_fabrics_ops	*tr_ops;
+	bool				pi_enable;
 };
 
 static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
@@ -204,6 +205,7 @@ struct nvmet_ctrl {
 	spinlock_t		error_lock;
 	u64			err_counter;
 	struct nvme_error_slot	slots[NVMET_ERROR_LOG_SLOTS];
+	bool			pi_support;
 };
 
 struct nvmet_subsys_model {
@@ -233,6 +235,7 @@ struct nvmet_subsys {
 	u64			ver;
 	u64			serial;
 	char			*subsysnqn;
+	bool			pi_support;
 
 	struct config_group	group;
 
@@ -284,6 +287,7 @@ struct nvmet_fabrics_ops {
 	unsigned int type;
 	unsigned int msdbd;
 	bool has_keyed_sgls : 1;
+	bool metadata_support : 1;
 	void (*queue_response)(struct nvmet_req *req);
 	int (*add_port)(struct nvmet_port *port);
 	void (*remove_port)(struct nvmet_port *port);
@@ -510,6 +514,19 @@ static inline u32 nvmet_rw_data_len(struct nvmet_req *req)
 			req->ns->blksize_shift;
 }
 
+static inline u32 nvmet_rw_md_len(struct nvmet_req *req)
+{
+	return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
+			req->ns->metadata_size;
+}
+
+static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
+{
+	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+		return false;
+	return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
+}
+
 static inline u32 nvmet_dsm_len(struct nvmet_req *req)
 {
 	return (le32_to_cpu(req->cmd->dsm.nr) + 1) *
-- 
1.8.3.1



* [PATCH 14/15] nvmet: add metadata support for block devices
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (12 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 13/15] nvmet: add metadata/T10-PI support Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 16:25   ` Christoph Hellwig
  2020-04-28 13:11 ` [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support Max Gurtovoy
  14 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

Allocate the metadata SGL buffers, set the metadata fields for the
request, and then build the bio integrity payload for the metadata from
the protection SG list.
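
In short, the flow added by this patch is (condensed from the hunks
below; declarations and error handling omitted):

	/* command parsing: remember how much metadata the command carries */
	if (req->sq->ctrl->pi_support && nvmet_ns_has_pi(req->ns))
		req->md_len = nvmet_rw_md_len(req);

	/* SGL allocation: a second scatterlist backs the metadata buffer */
	req->md_sg = sgl_alloc(req->md_len, GFP_KERNEL, &req->md_sg_cnt);

	/* bio submission: attach the metadata pages as an integrity payload */
	bip = bio_integrity_alloc(bio, GFP_NOIO, req->md_sg_cnt);
	bio_integrity_add_page(bio, miter->page, len, offset_in_page(miter->addr));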

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/core.c        | 90 +++++++++++++++++++++++++++++++--------
 drivers/nvme/target/io-cmd-bdev.c | 85 +++++++++++++++++++++++++++++++++++-
 drivers/nvme/target/nvmet.h       |  3 ++
 3 files changed, 159 insertions(+), 19 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index a57ab0c..c1456ce 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -884,8 +884,11 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 	req->sq = sq;
 	req->ops = ops;
 	req->sg = NULL;
+	req->md_sg = NULL;
 	req->sg_cnt = 0;
+	req->md_sg_cnt = 0;
 	req->transfer_len = 0;
+	req->md_len = 0;
 	req->cqe->status = 0;
 	req->cqe->sq_head = 0;
 	req->ns = NULL;
@@ -970,9 +973,67 @@ bool nvmet_check_data_len_lte(struct nvmet_req *req, size_t data_len)
 	return true;
 }
 
+void nvmet_req_free_p2pmem_sgls(struct nvmet_req *req)
+{
+	pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
+	if (req->md_sg)
+		pci_p2pmem_free_sgl(req->p2p_dev, req->md_sg);
+}
+
+static int nvmet_req_alloc_p2pmem_sgls(struct nvmet_req *req, int data_len,
+		struct pci_dev *p2p_dev)
+{
+	req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt, data_len);
+	if (!req->sg)
+		goto out_err;
+
+	if (req->md_len) {
+		req->md_sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->md_sg_cnt,
+						  req->md_len);
+		if (!req->md_sg)
+			goto out_free_sg;
+	}
+	req->p2p_dev = p2p_dev;
+	return 0;
+
+out_free_sg:
+	pci_p2pmem_free_sgl(p2p_dev, req->sg);
+out_err:
+	return -ENOMEM;
+}
+
+void nvmet_req_free_mem_sgls(struct nvmet_req *req)
+{
+	sgl_free(req->sg);
+	if (req->md_sg)
+		sgl_free(req->md_sg);
+}
+
+static int nvmet_req_alloc_mem_sgls(struct nvmet_req *req, int data_len)
+{
+	req->sg = sgl_alloc(data_len, GFP_KERNEL, &req->sg_cnt);
+	if (unlikely(!req->sg))
+		goto out;
+
+	if (req->md_len) {
+		req->md_sg = sgl_alloc(req->md_len, GFP_KERNEL,
+				       &req->md_sg_cnt);
+		if (unlikely(!req->md_sg))
+			goto out_free;
+	}
+
+	return 0;
+
+out_free:
+	sgl_free(req->sg);
+out:
+	return -ENOMEM;
+}
+
 int nvmet_req_alloc_sgl(struct nvmet_req *req)
 {
 	struct pci_dev *p2p_dev = NULL;
+	int data_len = req->transfer_len - req->md_len;
 
 	if (IS_ENABLED(CONFIG_PCI_P2PDMA)) {
 		if (req->sq->ctrl && req->ns)
@@ -981,37 +1042,32 @@ int nvmet_req_alloc_sgl(struct nvmet_req *req)
 
 		req->p2p_dev = NULL;
 		if (req->sq->qid && p2p_dev) {
-			req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt,
-						       req->transfer_len);
-			if (req->sg) {
-				req->p2p_dev = p2p_dev;
+			int ret = nvmet_req_alloc_p2pmem_sgls(req, data_len,
+							      p2p_dev);
+			if (!ret)
 				return 0;
-			}
 		}
-
-		/*
-		 * If no P2P memory was available we fallback to using
-		 * regular memory
-		 */
 	}
 
-	req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
-	if (unlikely(!req->sg))
-		return -ENOMEM;
-
-	return 0;
+	/*
+	 * If no P2P memory was available/enabled we fallback to using regular
+	 * memory.
+	 */
+	return nvmet_req_alloc_mem_sgls(req, data_len);
 }
 EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl);
 
 void nvmet_req_free_sgl(struct nvmet_req *req)
 {
 	if (req->p2p_dev)
-		pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
+		nvmet_req_free_p2pmem_sgls(req);
 	else
-		sgl_free(req->sg);
+		nvmet_req_free_mem_sgls(req);
 
 	req->sg = NULL;
+	req->md_sg = NULL;
 	req->sg_cnt = 0;
+	req->md_sg_cnt = 0;
 }
 EXPORT_SYMBOL_GPL(nvmet_req_free_sgl);
 
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index d265cf5..4243156 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -174,6 +174,61 @@ static void nvmet_bio_done(struct bio *bio)
 		bio_put(bio);
 }
 
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static int nvmet_bdev_alloc_bip(struct nvmet_req *req, struct bio *bio,
+				struct sg_mapping_iter *miter)
+{
+	struct blk_integrity *bi;
+	struct bio_integrity_payload *bip;
+	struct block_device *bdev = req->ns->bdev;
+	int rc;
+	size_t resid, len;
+
+	bi = bdev_get_integrity(bdev);
+	if (unlikely(!bi)) {
+		pr_err("Unable to locate bio_integrity\n");
+		return -ENODEV;
+	}
+
+	bip = bio_integrity_alloc(bio, GFP_NOIO,
+			min_t(unsigned int, req->md_sg_cnt, BIO_MAX_PAGES));
+	if (IS_ERR(bip)) {
+		pr_err("Unable to allocate bio_integrity_payload\n");
+		return PTR_ERR(bip);
+	}
+
+	bip->bip_iter.bi_size = bio_integrity_bytes(bi, bio_sectors(bio));
+	/* virtual start sector must be in integrity interval units */
+	bip_set_seed(bip, bio->bi_iter.bi_sector >>
+		     (bi->interval_exp - SECTOR_SHIFT));
+
+	resid = bip->bip_iter.bi_size;
+	while (resid > 0 && sg_miter_next(miter)) {
+		len = min_t(size_t, miter->length, resid);
+		rc = bio_integrity_add_page(bio, miter->page, len,
+					    offset_in_page(miter->addr));
+		if (unlikely(rc != len)) {
+			pr_err("bio_integrity_add_page() failed; %d\n", rc);
+			sg_miter_stop(miter);
+			return -ENOMEM;
+		}
+
+		resid -= len;
+		if (len < miter->length)
+			miter->consumed -= miter->length - len;
+	}
+	sg_miter_stop(miter);
+
+	return 0;
+}
+#else
+static int nvmet_bdev_alloc_bip(struct nvmet_req *req, struct bio *bio,
+				struct sg_mapping_iter *miter)
+{
+	return -EINVAL;
+}
+#endif /* CONFIG_BLK_DEV_INTEGRITY */
+
 static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 {
 	int sg_cnt = req->sg_cnt;
@@ -181,9 +236,11 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	struct scatterlist *sg;
 	struct blk_plug plug;
 	sector_t sector;
-	int op, i;
+	int op, i, rc;
+	struct sg_mapping_iter prot_miter;
 
-	if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
+	if (!nvmet_check_transfer_len(req,
+				      nvmet_rw_data_len(req) + req->md_len))
 		return;
 
 	if (!req->sg_cnt) {
@@ -218,11 +275,25 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	bio->bi_opf = op;
 
 	blk_start_plug(&plug);
+	if (req->md_len)
+		sg_miter_start(&prot_miter, req->md_sg, req->md_sg_cnt,
+			       op == REQ_OP_READ ? SG_MITER_FROM_SG :
+						   SG_MITER_TO_SG);
+
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
 				!= sg->length) {
 			struct bio *prev = bio;
 
+			if (req->md_len) {
+				rc = nvmet_bdev_alloc_bip(req, bio,
+							  &prot_miter);
+				if (unlikely(rc)) {
+					bio_io_error(bio);
+					return;
+				}
+			}
+
 			bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
 			bio_set_dev(bio, req->ns->bdev);
 			bio->bi_iter.bi_sector = sector;
@@ -236,6 +307,14 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 		sg_cnt--;
 	}
 
+	if (req->md_len) {
+		rc = nvmet_bdev_alloc_bip(req, bio, &prot_miter);
+		if (unlikely(rc)) {
+			bio_io_error(bio);
+			return;
+		}
+	}
+
 	submit_bio(bio);
 	blk_finish_plug(&plug);
 }
@@ -363,6 +442,8 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_read:
 	case nvme_cmd_write:
 		req->execute = nvmet_bdev_execute_rw;
+		if (req->sq->ctrl->pi_support && nvmet_ns_has_pi(req->ns))
+			req->md_len = nvmet_rw_md_len(req);
 		return 0;
 	case nvme_cmd_flush:
 		req->execute = nvmet_bdev_execute_flush;
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index ccdf820..7069520 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -309,6 +309,7 @@ struct nvmet_req {
 	struct nvmet_cq		*cq;
 	struct nvmet_ns		*ns;
 	struct scatterlist	*sg;
+	struct scatterlist	*md_sg;
 	struct bio_vec		inline_bvec[NVMET_MAX_INLINE_BIOVEC];
 	union {
 		struct {
@@ -322,8 +323,10 @@ struct nvmet_req {
 		} f;
 	};
 	int			sg_cnt;
+	int			md_sg_cnt;
 	/* data length as parsed from the SGL descriptor: */
 	size_t			transfer_len;
+	size_t			md_len;
 
 	struct nvmet_port	*port;
 
-- 
1.8.3.1



* [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support
  2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
                   ` (13 preceding siblings ...)
  2020-04-28 13:11 ` [PATCH 14/15] nvmet: add metadata support for block devices Max Gurtovoy
@ 2020-04-28 13:11 ` Max Gurtovoy
  2020-05-01 16:48   ` Christoph Hellwig
  14 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-04-28 13:11 UTC (permalink / raw)
  To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
  Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc

From: Israel Rukshin <israelr@mellanox.com>

For capable HCAs (e.g. ConnectX-5/ConnectX-6) this allows end-to-end
protection information passthrough and validation for the NVMe over RDMA
transport. Metadata support is implemented using the new RDMA signature
verbs API.
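
One user-visible consequence is a smaller MDTS for PI-enabled controllers:
with the 4KB minimum page size assumed by the driver, get_mdts() reports
2^8 * 4KB = 1MB for regular controllers but only 2^5 * 4KB = 128KB when
T10-PI is enabled (see NVMET_RDMA_MAX_METADATA_MDTS below).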

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/rdma.c | 234 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 215 insertions(+), 19 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 7a90b10..f4a4721 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -33,6 +33,7 @@
 
 /* Assume mpsmin == device_page_size == 4KB */
 #define NVMET_RDMA_MAX_MDTS			8
+#define NVMET_RDMA_MAX_METADATA_MDTS		5
 
 struct nvmet_rdma_srq;
 
@@ -60,6 +61,7 @@ struct nvmet_rdma_rsp {
 	struct nvmet_rdma_queue	*queue;
 
 	struct ib_cqe		read_cqe;
+	struct ib_cqe		write_cqe;
 	struct rdma_rw_ctx	rw;
 
 	struct nvmet_req	req;
@@ -161,6 +163,7 @@ struct nvmet_rdma_device {
 static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc);
+static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_qp_event(struct ib_event *event, void *priv);
 static void nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue);
 static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
@@ -423,6 +426,9 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 
 	/* Data In / RDMA READ */
 	r->read_cqe.done = nvmet_rdma_read_data_done;
+	/* Data Out / RDMA WRITE */
+	r->write_cqe.done = nvmet_rdma_write_data_done;
+
 	return 0;
 
 out_free_rsp:
@@ -532,6 +538,129 @@ static void nvmet_rdma_process_wr_wait_list(struct nvmet_rdma_queue *queue)
 	spin_unlock(&queue->rsp_wr_wait_lock);
 }
 
+static u16 nvmet_rdma_check_pi_status(struct ib_mr *sig_mr)
+{
+	struct ib_mr_status mr_status;
+	int ret;
+	u16 status = 0;
+
+	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+		return 0;
+
+	ret = ib_check_mr_status(sig_mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
+	if (ret) {
+		pr_err("ib_check_mr_status failed, ret %d\n", ret);
+		return NVME_SC_INVALID_PI;
+	}
+
+	if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
+		switch (mr_status.sig_err.err_type) {
+		case IB_SIG_BAD_GUARD:
+			status = NVME_SC_GUARD_CHECK;
+			break;
+		case IB_SIG_BAD_REFTAG:
+			status = NVME_SC_REFTAG_CHECK;
+			break;
+		case IB_SIG_BAD_APPTAG:
+			status = NVME_SC_APPTAG_CHECK;
+			break;
+		}
+		pr_err("PI error found type %d expected 0x%x vs actual 0x%x\n",
+		       mr_status.sig_err.err_type,
+		       mr_status.sig_err.expected,
+		       mr_status.sig_err.actual);
+	}
+
+	return status;
+}
+
+static void nvmet_rdma_set_sig_domain(struct blk_integrity *bi,
+		struct nvme_command *cmd, struct ib_sig_domain *domain,
+		u16 control)
+{
+	domain->sig_type = IB_SIG_TYPE_T10_DIF;
+	domain->sig.dif.bg_type = IB_T10DIF_CRC;
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	domain->sig.dif.pi_interval = 1 << bi->interval_exp;
+#endif
+	domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
+	domain->sig.dif.apptag_check_mask = 0xffff;
+	domain->sig.dif.app_escape = true;
+	domain->sig.dif.ref_escape = true;
+	if (control & NVME_RW_PRINFO_PRCHK_REF)
+		domain->sig.dif.ref_remap = true;
+}
+
+static void nvmet_rdma_set_sig_attrs(struct nvmet_req *req,
+				     struct ib_sig_attrs *sig_attrs)
+{
+	struct nvme_command *cmd = req->cmd;
+	u16 control = le16_to_cpu(cmd->rw.control);
+	struct blk_integrity *bi;
+
+	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+		return;
+
+	bi = bdev_get_integrity(req->ns->bdev);
+
+	memset(sig_attrs, 0, sizeof(*sig_attrs));
+
+	if (control & NVME_RW_PRINFO_PRACT) {
+		/* for WRITE_INSERT/READ_STRIP no wire domain */
+		sig_attrs->wire.sig_type = IB_SIG_TYPE_NONE;
+		nvmet_rdma_set_sig_domain(bi, cmd, &sig_attrs->mem, control);
+		/* Clear the PRACT bit since HCA will generate/verify the PI */
+		control &= ~NVME_RW_PRINFO_PRACT;
+		cmd->rw.control = cpu_to_le16(control);
+		/* PI is added by the HW */
+		req->transfer_len += req->md_len;
+	} else {
+		/* for WRITE_PASS/READ_PASS both wire/memory domains exist */
+		nvmet_rdma_set_sig_domain(bi, cmd, &sig_attrs->wire, control);
+		nvmet_rdma_set_sig_domain(bi, cmd, &sig_attrs->mem, control);
+	}
+
+	if (control & NVME_RW_PRINFO_PRCHK_REF)
+		sig_attrs->check_mask |= IB_SIG_CHECK_REFTAG;
+	if (control & NVME_RW_PRINFO_PRCHK_GUARD)
+		sig_attrs->check_mask |= IB_SIG_CHECK_GUARD;
+	if (control & NVME_RW_PRINFO_PRCHK_APP)
+		sig_attrs->check_mask |= IB_SIG_CHECK_APPTAG;
+}
+
+static int nvmet_rdma_rw_ctx_init(struct nvmet_rdma_rsp *rsp, u64 addr, u32 key,
+				  struct ib_sig_attrs *sig_attrs)
+{
+	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+	struct nvmet_req *req = &rsp->req;
+	int ret;
+
+	if (req->md_len)
+		ret = rdma_rw_ctx_signature_init(&rsp->rw, cm_id->qp,
+			cm_id->port_num, req->sg, req->sg_cnt, req->md_sg,
+			req->md_sg_cnt, sig_attrs, addr, key,
+			nvmet_data_dir(req));
+	else
+		ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
+				       req->sg, req->sg_cnt, 0, addr, key,
+				       nvmet_data_dir(req));
+
+	return ret;
+}
+
+static void nvmet_rdma_rw_ctx_destroy(struct nvmet_rdma_rsp *rsp)
+{
+	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+	struct nvmet_req *req = &rsp->req;
+
+	if (req->md_len)
+		rdma_rw_ctx_destroy_signature(&rsp->rw, cm_id->qp,
+			cm_id->port_num, req->sg, req->sg_cnt, req->md_sg,
+			req->md_sg_cnt, nvmet_data_dir(req));
+	else
+		rdma_rw_ctx_destroy(&rsp->rw, cm_id->qp, cm_id->port_num,
+				    req->sg, req->sg_cnt, nvmet_data_dir(req));
+}
 
 static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
 {
@@ -539,11 +668,8 @@ static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
 
 	atomic_add(1 + rsp->n_rdma, &queue->sq_wr_avail);
 
-	if (rsp->n_rdma) {
-		rdma_rw_ctx_destroy(&rsp->rw, queue->qp,
-				queue->cm_id->port_num, rsp->req.sg,
-				rsp->req.sg_cnt, nvmet_data_dir(&rsp->req));
-	}
+	if (rsp->n_rdma)
+		nvmet_rdma_rw_ctx_destroy(rsp);
 
 	if (rsp->req.sg != rsp->cmd->inline_sg)
 		nvmet_req_free_sgl(&rsp->req);
@@ -598,11 +724,16 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
 		rsp->send_wr.opcode = IB_WR_SEND;
 	}
 
-	if (nvmet_rdma_need_data_out(rsp))
-		first_wr = rdma_rw_ctx_wrs(&rsp->rw, cm_id->qp,
-				cm_id->port_num, NULL, &rsp->send_wr);
-	else
+	if (nvmet_rdma_need_data_out(rsp)) {
+		if (rsp->req.md_len)
+			first_wr = rdma_rw_ctx_wrs(&rsp->rw, cm_id->qp,
+					cm_id->port_num, &rsp->write_cqe, NULL);
+		else
+			first_wr = rdma_rw_ctx_wrs(&rsp->rw, cm_id->qp,
+					cm_id->port_num, NULL, &rsp->send_wr);
+	} else {
 		first_wr = &rsp->send_wr;
+	}
 
 	nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
 
@@ -621,15 +752,14 @@ static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct nvmet_rdma_rsp *rsp =
 		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, read_cqe);
 	struct nvmet_rdma_queue *queue = cq->cq_context;
+	u16 status = 0;
 
 	WARN_ON(rsp->n_rdma <= 0);
 	atomic_add(rsp->n_rdma, &queue->sq_wr_avail);
-	rdma_rw_ctx_destroy(&rsp->rw, queue->qp,
-			queue->cm_id->port_num, rsp->req.sg,
-			rsp->req.sg_cnt, nvmet_data_dir(&rsp->req));
 	rsp->n_rdma = 0;
 
 	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		nvmet_rdma_rw_ctx_destroy(rsp);
 		nvmet_req_uninit(&rsp->req);
 		nvmet_rdma_release_rsp(rsp);
 		if (wc->status != IB_WC_WR_FLUSH_ERR) {
@@ -640,7 +770,58 @@ static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	rsp->req.execute(&rsp->req);
+	if (rsp->req.md_len)
+		status = nvmet_rdma_check_pi_status(rsp->rw.reg->mr);
+	nvmet_rdma_rw_ctx_destroy(rsp);
+
+	if (unlikely(status))
+		nvmet_req_complete(&rsp->req, status);
+	else
+		rsp->req.execute(&rsp->req);
+}
+
+static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct nvmet_rdma_rsp *rsp =
+		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, write_cqe);
+	struct nvmet_rdma_queue *queue = cq->cq_context;
+	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+	u16 status;
+
+	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+		return;
+
+	WARN_ON(rsp->n_rdma <= 0);
+	atomic_add(rsp->n_rdma, &queue->sq_wr_avail);
+	rsp->n_rdma = 0;
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		nvmet_rdma_rw_ctx_destroy(rsp);
+		nvmet_req_uninit(&rsp->req);
+		nvmet_rdma_release_rsp(rsp);
+		if (wc->status != IB_WC_WR_FLUSH_ERR) {
+			pr_info("RDMA WRITE for CQE 0x%p failed with status %s (%d).\n",
+				wc->wr_cqe, ib_wc_status_msg(wc->status),
+				wc->status);
+			nvmet_rdma_error_comp(queue);
+		}
+		return;
+	}
+
+	/*
+	 * Upon RDMA completion check the signature status
+	 * - if succeeded send good NVMe response
+	 * - if failed send bad NVMe response with appropriate error
+	 */
+	status = nvmet_rdma_check_pi_status(rsp->rw.reg->mr);
+	if (unlikely(status))
+		rsp->req.cqe->status = cpu_to_le16(status << 1);
+	nvmet_rdma_rw_ctx_destroy(rsp);
+
+	if (unlikely(ib_post_send(cm_id->qp, &rsp->send_wr, NULL))) {
+		pr_err("sending cmd response failed\n");
+		nvmet_rdma_release_rsp(rsp);
+	}
 }
 
 static void nvmet_rdma_use_inline_sg(struct nvmet_rdma_rsp *rsp, u32 len,
@@ -697,9 +878,9 @@ static u16 nvmet_rdma_map_sgl_inline(struct nvmet_rdma_rsp *rsp)
 static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 		struct nvme_keyed_sgl_desc *sgl, bool invalidate)
 {
-	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
 	u64 addr = le64_to_cpu(sgl->addr);
 	u32 key = get_unaligned_le32(sgl->key);
+	struct ib_sig_attrs sig_attrs;
 	int ret;
 
 	rsp->req.transfer_len = get_unaligned_le24(sgl->length);
@@ -708,13 +889,14 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 	if (!rsp->req.transfer_len)
 		return 0;
 
+	if (rsp->req.md_len)
+		nvmet_rdma_set_sig_attrs(&rsp->req, &sig_attrs);
+
 	ret = nvmet_req_alloc_sgl(&rsp->req);
 	if (unlikely(ret < 0))
 		goto error_out;
 
-	ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
-			rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
-			nvmet_data_dir(&rsp->req));
+	ret = nvmet_rdma_rw_ctx_init(rsp, addr, key, &sig_attrs);
 	if (unlikely(ret < 0))
 		goto error_out;
 	rsp->n_rdma += ret;
@@ -1108,6 +1290,9 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 		qp_attr.cap.max_recv_sge = 1 + ndev->inline_page_count;
 	}
 
+	if (queue->port->pi_enable && queue->host_qid)
+		qp_attr.create_flags |= IB_QP_CREATE_INTEGRITY_EN;
+
 	ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);
 	if (ret) {
 		pr_err("failed to create_qp ret= %d\n", ret);
@@ -1226,6 +1411,7 @@ static int nvmet_rdma_cm_reject(struct rdma_cm_id *cm_id,
 		struct rdma_cm_id *cm_id,
 		struct rdma_cm_event *event)
 {
+	struct nvmet_rdma_port *port = cm_id->context;
 	struct nvmet_rdma_queue *queue;
 	int ret;
 
@@ -1252,6 +1438,7 @@ static int nvmet_rdma_cm_reject(struct rdma_cm_id *cm_id,
 	INIT_WORK(&queue->release_work, nvmet_rdma_release_queue_work);
 	queue->dev = ndev;
 	queue->cm_id = cm_id;
+	queue->port = port->nport;
 
 	spin_lock_init(&queue->state_lock);
 	queue->state = NVMET_RDMA_Q_CONNECTING;
@@ -1369,7 +1556,6 @@ static int nvmet_rdma_cm_accept(struct rdma_cm_id *cm_id,
 static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 		struct rdma_cm_event *event)
 {
-	struct nvmet_rdma_port *port = cm_id->context;
 	struct nvmet_rdma_device *ndev;
 	struct nvmet_rdma_queue *queue;
 	int ret = -EINVAL;
@@ -1385,7 +1571,6 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 		ret = -ENOMEM;
 		goto put_device;
 	}
-	queue->port = port->nport;
 
 	if (queue->host_qid == 0) {
 		/* Let inflight controller teardown complete */
@@ -1657,6 +1842,14 @@ static int nvmet_rdma_enable_port(struct nvmet_rdma_port *port)
 		goto out_destroy_id;
 	}
 
+	if (port->nport->pi_enable &&
+	    !(cm_id->device->attrs.device_cap_flags &
+	      IB_DEVICE_INTEGRITY_HANDOVER)) {
+		pr_err("T10-PI is not supported for %pISpcs\n", addr);
+		ret = -EINVAL;
+		goto out_destroy_id;
+	}
+
 	port->cm_id = cm_id;
 	return 0;
 
@@ -1766,6 +1959,8 @@ static void nvmet_rdma_disc_port_addr(struct nvmet_req *req,
 
 static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
 {
+	if (ctrl->pi_support)
+		return NVMET_RDMA_MAX_METADATA_MDTS;
 	return NVMET_RDMA_MAX_MDTS;
 }
 
@@ -1774,6 +1969,7 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
 	.type			= NVMF_TRTYPE_RDMA,
 	.msdbd			= 1,
 	.has_keyed_sgls		= 1,
+	.metadata_support	= 1,
 	.add_port		= nvmet_rdma_add_port,
 	.remove_port		= nvmet_rdma_remove_port,
 	.queue_response		= nvmet_rdma_queue_response,
-- 
1.8.3.1



* Re: [PATCH 01/15] nvme: introduce namespace features flag
  2020-04-28 13:11 ` [PATCH 01/15] nvme: introduce namespace features flag Max Gurtovoy
@ 2020-05-01 13:20   ` Christoph Hellwig
  2020-05-01 14:24   ` Christoph Hellwig
  1 sibling, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 13:20 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

On Tue, Apr 28, 2020 at 04:11:21PM +0300, Max Gurtovoy wrote:
> Replace the specific ext boolean (that implies on extended LBA format)
> with a feature in the new namespace features flag. This is a preparation
> for adding more namespace features (such as metadata specific features).
> 
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> Reviewed-by: Israel Rukshin <israelr@mellanox.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH 02/15] nvme: introduce NVME_NS_METADATA_SUPPORTED flag
  2020-04-28 13:11 ` [PATCH 02/15] nvme: introduce NVME_NS_METADATA_SUPPORTED flag Max Gurtovoy
@ 2020-05-01 13:20   ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 13:20 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

On Tue, Apr 28, 2020 at 04:11:22PM +0300, Max Gurtovoy wrote:
> This is a preparation for adding support for metadata in fabric
> controllers. New flag will imply that NVMe namespace supports getting
> metadata that was originally generated by host's block layer.
> 
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> Reviewed-by: Israel Rukshin <israelr@mellanox.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH 03/15] nvme: make nvme_ns_has_pi accessible to transports
  2020-04-28 13:11 ` [PATCH 03/15] nvme: make nvme_ns_has_pi accessible to transports Max Gurtovoy
@ 2020-05-01 13:20   ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 13:20 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

On Tue, Apr 28, 2020 at 04:11:23PM +0300, Max Gurtovoy wrote:
> From: James Smart <jsmart2021@gmail.com>
> 
> Move the nvme_ns_has_pi() inline from core.c to the nvme.h header.
> This allows use by the transports.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> [maxg: added a comment for nvme_ns_has_pi()]
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> Reviewed-by: Israel Rukshin <israelr@mellanox.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata
  2020-04-28 13:11 ` [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata Max Gurtovoy
@ 2020-05-01 13:21   ` Christoph Hellwig
  2020-05-01 13:41   ` Christoph Hellwig
  1 sibling, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 13:21 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata
  2020-04-28 13:11 ` [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata Max Gurtovoy
  2020-05-01 13:21   ` Christoph Hellwig
@ 2020-05-01 13:41   ` Christoph Hellwig
  1 sibling, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 13:41 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

Actually - can we skip the new pi_capable field in the controller?
Just checking that max_integrity_segments is non-zero should have the
same effect.
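
E.g. (untested sketch; nvme_ctrl_pi_capable() is just a hypothetical name
for the open-coded check):

	static inline bool nvme_ctrl_pi_capable(struct nvme_ctrl *ctrl)
	{
		/* non-zero only when the transport can carry metadata */
		return ctrl->max_integrity_segments != 0;
	}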


* Re: [PATCH 01/15] nvme: introduce namespace features flag
  2020-04-28 13:11 ` [PATCH 01/15] nvme: introduce namespace features flag Max Gurtovoy
  2020-05-01 13:20   ` Christoph Hellwig
@ 2020-05-01 14:24   ` Christoph Hellwig
  2020-05-01 14:33     ` Max Gurtovoy
  1 sibling, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 14:24 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

This seems to break lightnvm:

drivers/nvme/host/lightnvm.c: In function ‘nvme_nvm_register’:
drivers/nvme/host/lightnvm.c:964:15: error: ‘struct nvme_ns’ has no member named ‘ext’
  964 |  geo->ext = ns->ext;
      |             


* Re: [PATCH 08/15] nvme-rdma: add metadata/T10-PI support
  2020-04-28 13:11 ` [PATCH 08/15] nvme-rdma: add metadata/T10-PI support Max Gurtovoy
@ 2020-05-01 14:26   ` Christoph Hellwig
  2020-05-01 15:00     ` Max Gurtovoy
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 14:26 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

On Tue, Apr 28, 2020 at 04:11:28PM +0300, Max Gurtovoy wrote:
> For capable HCAs (e.g. ConnectX-5/ConnectX-6) this will allow end-to-end
> protection information passthrough and validation for NVMe over RDMA
> transport. Metadata offload support was implemented over the new RDMA
> signature verbs API and it is enabled by per capable controllers.

What do you think about the following changes to keep the churn down
a bit, and to let the compiler optimize out all the integrity code if
not built into the kernel?  The two header changes will need to go
into their own patches of course.

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f13c48359b65c..dc695e9338cb0 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -31,12 +31,17 @@ extern unsigned int admin_timeout;
 
 #ifdef CONFIG_ARCH_NO_SG_CHAIN
 #define  NVME_INLINE_SG_CNT  0
-#define  NVME_INLINE_MD_SG_CNT  0
+#define  NVME_INLINE_METADATA_SG_CNT  0
 #else
 #define  NVME_INLINE_SG_CNT  2
-#define  NVME_INLINE_MD_SG_CNT  1
+#define  NVME_INLINE_METADATA_SG_CNT  1
 #endif
 
+#define NVME_RDMA_DATA_SGL_SIZE \
+	(sizeof(struct scatterlist) * NVME_INLINE_SG_CNT)
+#define NVME_RDMA_METADATA_SGL_SIZE \
+	(sizeof(struct scatterlist) * NVME_INLINE_METADATA_SG_CNT)
+
 extern struct workqueue_struct *nvme_wq;
 extern struct workqueue_struct *nvme_reset_wq;
 extern struct workqueue_struct *nvme_delete_wq;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index e94ef9593f91e..94c11ccdbba1b 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -67,9 +67,8 @@ struct nvme_rdma_request {
 	struct ib_cqe		reg_cqe;
 	struct nvme_rdma_queue  *queue;
 	struct nvme_rdma_sgl	data_sgl;
-	/* Metadata (T10-PI) support */
-	struct nvme_rdma_sgl	*md_sgl;
-	bool			use_md;
+	struct nvme_rdma_sgl	*metadata_sgl;
+	bool			use_sig_mr;
 };
 
 enum nvme_rdma_queue_flags {
@@ -285,18 +284,6 @@ static void nvme_rdma_exit_request(struct blk_mq_tag_set *set,
 	kfree(req->sqe.data);
 }
 
-static unsigned int nvme_rdma_cmd_size(bool has_md)
-{
-	/*
-	 * nvme rdma command consists of struct nvme_rdma_request, data SGL,
-	 * PI nvme_rdma_sgl struct and then PI SGL.
-	 */
-	return sizeof(struct nvme_rdma_request) +
-	       sizeof(struct scatterlist) * NVME_INLINE_SG_CNT +
-	       has_md * (sizeof(struct nvme_rdma_sgl) +
-			 sizeof(struct scatterlist) * NVME_INLINE_MD_SG_CNT);
-}
-
 static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
 		struct request *rq, unsigned int hctx_idx,
 		unsigned int numa_node)
@@ -311,10 +298,6 @@ static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
 	if (!req->sqe.data)
 		return -ENOMEM;
 
-	/* metadata nvme_rdma_sgl struct is located after command's data SGL */
-	if (queue->pi_support)
-		req->md_sgl = (void *)nvme_req(rq) + nvme_rdma_cmd_size(false);
-
 	req->queue = queue;
 
 	return 0;
@@ -770,7 +753,8 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
 		set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
 		set->reserved_tags = 2; /* connect + keep-alive */
 		set->numa_node = nctrl->numa_node;
-		set->cmd_size = nvme_rdma_cmd_size(false);
+		set->cmd_size = sizeof(struct nvme_rdma_request) +
+				NVME_RDMA_DATA_SGL_SIZE;
 		set->driver_data = ctrl;
 		set->nr_hw_queues = 1;
 		set->timeout = ADMIN_TIMEOUT;
@@ -783,7 +767,10 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
 		set->reserved_tags = 1; /* fabric connect */
 		set->numa_node = nctrl->numa_node;
 		set->flags = BLK_MQ_F_SHOULD_MERGE;
-		set->cmd_size = nvme_rdma_cmd_size(nctrl->pi_capable);
+		set->cmd_size = sizeof(struct nvme_rdma_request) +
+				NVME_RDMA_DATA_SGL_SIZE;
+		if (nctrl->pi_capable)
+			set->cmd_size += NVME_RDMA_METADATA_SGL_SIZE;
 		set->driver_data = ctrl;
 		set->nr_hw_queues = nctrl->queue_count - 1;
 		set->timeout = NVME_IO_TIMEOUT;
@@ -1199,15 +1186,9 @@ static int nvme_rdma_inv_rkey(struct nvme_rdma_queue *queue,
 	return ib_post_send(queue->qp, &wr, NULL);
 }
 
-static void nvme_rdma_mr_pool_put(struct ib_qp *qp,
-		struct nvme_rdma_request *req)
+static inline bool nvme_rdma_use_sig_mr(struct nvme_rdma_request *req)
 {
-	if (req->use_md)
-		ib_mr_pool_put(qp, &qp->sig_mrs, req->mr);
-	else
-		ib_mr_pool_put(qp, &qp->rdma_mrs, req->mr);
-
-	req->mr = NULL;
+	return IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) && req->use_sig_mr;
 }
 
 static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
@@ -1216,19 +1197,25 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
+	struct list_head *pool = &queue->qp->rdma_mrs;
 
 	if (!blk_rq_nr_phys_segments(rq))
 		return;
 
 	if (blk_integrity_rq(rq)) {
-		ib_dma_unmap_sg(ibdev, req->md_sgl->sg_table.sgl,
-				req->md_sgl->nents, rq_dma_dir(rq));
-		sg_free_table_chained(&req->md_sgl->sg_table,
-				      NVME_INLINE_MD_SG_CNT);
+		ib_dma_unmap_sg(ibdev, req->metadata_sgl->sg_table.sgl,
+				req->metadata_sgl->nents, rq_dma_dir(rq));
+		sg_free_table_chained(&req->metadata_sgl->sg_table,
+				      NVME_INLINE_METADATA_SG_CNT);
 	}
 
-	if (req->mr)
-		nvme_rdma_mr_pool_put(queue->qp, req);
+	if (nvme_rdma_use_sig_mr(req))
+		pool = &queue->qp->sig_mrs;
+
+	if (req->mr) {
+		ib_mr_pool_put(queue->qp, pool, req->mr);
+		req->mr = NULL;
+	}
 
 	ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
 			rq_dma_dir(rq));
@@ -1283,15 +1270,59 @@ static int nvme_rdma_map_sg_single(struct nvme_rdma_queue *queue,
 	return 0;
 }
 
+static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
+		struct nvme_rdma_request *req, struct nvme_command *c,
+		int count)
+{
+	struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
+	int nr;
+
+	req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
+	if (WARN_ON_ONCE(!req->mr))
+		return -EAGAIN;
+
+	/*
+	 * Align the MR to a 4K page size to match the ctrl page size and
+	 * the block virtual boundary.
+	 */
+	nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, 0, SZ_4K);
+	if (unlikely(nr < count)) {
+		ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
+		req->mr = NULL;
+		if (nr < 0)
+			return nr;
+		return -EINVAL;
+	}
+
+	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
+
+	req->reg_cqe.done = nvme_rdma_memreg_done;
+	memset(&req->reg_wr, 0, sizeof(req->reg_wr));
+	req->reg_wr.wr.opcode = IB_WR_REG_MR;
+	req->reg_wr.wr.wr_cqe = &req->reg_cqe;
+	req->reg_wr.wr.num_sge = 0;
+	req->reg_wr.mr = req->mr;
+	req->reg_wr.key = req->mr->rkey;
+	req->reg_wr.access = IB_ACCESS_LOCAL_WRITE |
+			     IB_ACCESS_REMOTE_READ |
+			     IB_ACCESS_REMOTE_WRITE;
+
+	sg->addr = cpu_to_le64(req->mr->iova);
+	put_unaligned_le24(req->mr->length, sg->length);
+	put_unaligned_le32(req->mr->rkey, sg->key);
+	sg->type = (NVME_KEY_SGL_FMT_DATA_DESC << 4) |
+			NVME_SGL_FMT_INVALIDATE;
+
+	return 0;
+}
+
 static void nvme_rdma_set_sig_domain(struct blk_integrity *bi,
 		struct nvme_command *cmd, struct ib_sig_domain *domain,
 		u16 control)
 {
 	domain->sig_type = IB_SIG_TYPE_T10_DIF;
 	domain->sig.dif.bg_type = IB_T10DIF_CRC;
-#ifdef CONFIG_BLK_DEV_INTEGRITY
 	domain->sig.dif.pi_interval = 1 << bi->interval_exp;
-#endif
 	domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
 	domain->sig.dif.apptag_check_mask = 0xffff;
 	domain->sig.dif.app_escape = true;
@@ -1335,26 +1366,34 @@ static void nvme_rdma_sig_done(struct ib_cq *cq, struct ib_wc *wc)
 		nvme_rdma_wr_error(cq, wc, "SIG");
 }
 
-static void nvme_rdma_set_pi_wr(struct nvme_rdma_request *req,
-				struct nvme_command *c)
+static int nvme_rdma_map_sg_pi(struct nvme_rdma_queue *queue,
+		struct nvme_rdma_request *req, struct nvme_command *c,
+		int count, int pi_count)
 {
+	struct nvme_rdma_sgl *sgl = &req->data_sgl;
 	struct ib_sig_attrs *sig_attrs = req->mr->sig_attrs;
 	struct ib_reg_wr *wr = &req->reg_wr;
 	struct request *rq = blk_mq_rq_from_pdu(req);
 	struct bio *bio = rq->bio;
 	struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
+	int nr;
 
-	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
-		return;
+	req->mr = ib_mr_pool_get(queue->qp, &queue->qp->sig_mrs);
+	if (WARN_ON_ONCE(!req->mr))
+		return -EAGAIN;
 
-	nvme_rdma_set_sig_attrs(blk_get_integrity(bio->bi_disk), c, sig_attrs);
+	nr = ib_map_mr_sg_pi(req->mr, sgl->sg_table.sgl, count, 0,
+			     req->metadata_sgl->sg_table.sgl, pi_count, 0,
+			     SZ_4K);
+	if (unlikely(nr))
+		goto mr_put;
 
+	nvme_rdma_set_sig_attrs(blk_get_integrity(bio->bi_disk), c, sig_attrs);
 	nvme_rdma_set_prot_checks(c, &sig_attrs->check_mask);
 
 	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
 
 	req->reg_cqe.done = nvme_rdma_sig_done;
-
 	memset(wr, 0, sizeof(*wr));
 	wr->wr.opcode = IB_WR_REG_MR_INTEGRITY;
 	wr->wr.wr_cqe = &req->reg_cqe;
@@ -1370,69 +1409,10 @@ static void nvme_rdma_set_pi_wr(struct nvme_rdma_request *req,
 	put_unaligned_le24(req->mr->length, sg->length);
 	put_unaligned_le32(req->mr->rkey, sg->key);
 	sg->type = NVME_KEY_SGL_FMT_DATA_DESC << 4;
-}
-
-static void nvme_rdma_set_reg_wr(struct nvme_rdma_request *req,
-				 struct nvme_command *c)
-{
-	struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
-
-	req->reg_cqe.done = nvme_rdma_memreg_done;
-
-	memset(&req->reg_wr, 0, sizeof(req->reg_wr));
-	req->reg_wr.wr.opcode = IB_WR_REG_MR;
-	req->reg_wr.wr.wr_cqe = &req->reg_cqe;
-	req->reg_wr.wr.num_sge = 0;
-	req->reg_wr.mr = req->mr;
-	req->reg_wr.key = req->mr->rkey;
-	req->reg_wr.access = IB_ACCESS_LOCAL_WRITE |
-			     IB_ACCESS_REMOTE_READ |
-			     IB_ACCESS_REMOTE_WRITE;
-
-	sg->addr = cpu_to_le64(req->mr->iova);
-	put_unaligned_le24(req->mr->length, sg->length);
-	put_unaligned_le32(req->mr->rkey, sg->key);
-	sg->type = (NVME_KEY_SGL_FMT_DATA_DESC << 4) |
-			NVME_SGL_FMT_INVALIDATE;
-}
-
-static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
-		struct nvme_rdma_request *req, struct nvme_command *c,
-		int count, int pi_count)
-{
-	struct nvme_rdma_sgl *sgl = &req->data_sgl;
-	int nr;
-
-	if (req->use_md) {
-		req->mr = ib_mr_pool_get(queue->qp, &queue->qp->sig_mrs);
-		if (WARN_ON_ONCE(!req->mr))
-			return -EAGAIN;
-
-		nr = ib_map_mr_sg_pi(req->mr, sgl->sg_table.sgl, count, 0,
-				     req->md_sgl->sg_table.sgl, pi_count, 0,
-				     SZ_4K);
-		if (unlikely(nr))
-			goto mr_put;
-
-		nvme_rdma_set_pi_wr(req, c);
-	} else {
-		req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
-		if (WARN_ON_ONCE(!req->mr))
-			return -EAGAIN;
-		/*
-		 * Align the MR to a 4K page size to match the ctrl page size
-		 * and the block virtual boundary.
-		 */
-		nr = ib_map_mr_sg(req->mr, sgl->sg_table.sgl, count, 0, SZ_4K);
-		if (unlikely(nr < count))
-			goto mr_put;
-
-		nvme_rdma_set_reg_wr(req, c);
-	}
-
 	return 0;
 mr_put:
-	nvme_rdma_mr_pool_put(queue->qp, req);
+	ib_mr_pool_put(queue->qp, &queue->qp->sig_mrs, req->mr);
+	req->mr = NULL;
 	if (nr < 0)
 		return nr;
 	return -EINVAL;
@@ -1473,28 +1453,35 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 	}
 
 	if (blk_integrity_rq(rq)) {
-		req->md_sgl->sg_table.sgl =
-			(struct scatterlist *)(req->md_sgl + 1);
-		ret = sg_alloc_table_chained(&req->md_sgl->sg_table,
+		req->metadata_sgl = (void *)nvme_req(rq) +
+			sizeof(struct nvme_rdma_request) +
+			NVME_RDMA_DATA_SGL_SIZE;
+		req->metadata_sgl->sg_table.sgl =
+			(struct scatterlist *)(req->metadata_sgl + 1);
+		ret = sg_alloc_table_chained(&req->metadata_sgl->sg_table,
 				blk_rq_count_integrity_sg(rq->q, rq->bio),
-				req->md_sgl->sg_table.sgl,
-				NVME_INLINE_MD_SG_CNT);
+				req->metadata_sgl->sg_table.sgl,
+				NVME_INLINE_METADATA_SG_CNT);
 		if (unlikely(ret)) {
 			ret = -ENOMEM;
 			goto out_unmap_sg;
 		}
 
-		req->md_sgl->nents = blk_rq_map_integrity_sg(rq->q, rq->bio,
-					req->md_sgl->sg_table.sgl);
-		pi_count = ib_dma_map_sg(ibdev, req->md_sgl->sg_table.sgl,
-					 req->md_sgl->nents, rq_dma_dir(rq));
+		req->metadata_sgl->nents = blk_rq_map_integrity_sg(rq->q,
+				rq->bio, req->metadata_sgl->sg_table.sgl);
+		pi_count = ib_dma_map_sg(ibdev, req->metadata_sgl->sg_table.sgl,
+				req->metadata_sgl->nents, rq_dma_dir(rq));
 		if (unlikely(pi_count <= 0)) {
 			ret = -EIO;
 			goto out_free_pi_table;
 		}
 	}
 
-	if (count <= dev->num_inline_segments && !req->use_md) {
+	if (nvme_rdma_use_sig_mr(req)) {
+		ret = nvme_rdma_map_sg_pi(queue, req, c, count, pi_count);
+		goto out;
+	}
+	if (count <= dev->num_inline_segments) {
 		if (rq_data_dir(rq) == WRITE && nvme_rdma_queue_idx(queue) &&
 		    queue->ctrl->use_inline_data &&
 		    blk_rq_payload_bytes(rq) <=
@@ -1509,7 +1496,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 		}
 	}
 
-	ret = nvme_rdma_map_sg_fr(queue, req, c, count, pi_count);
+	ret = nvme_rdma_map_sg_fr(queue, req, c, count);
 out:
 	if (unlikely(ret))
 		goto out_unmap_pi_sg;
@@ -1518,12 +1505,12 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 
 out_unmap_pi_sg:
 	if (blk_integrity_rq(rq))
-		ib_dma_unmap_sg(ibdev, req->md_sgl->sg_table.sgl,
-				req->md_sgl->nents, rq_dma_dir(rq));
+		ib_dma_unmap_sg(ibdev, req->metadata_sgl->sg_table.sgl,
+				req->metadata_sgl->nents, rq_dma_dir(rq));
 out_free_pi_table:
 	if (blk_integrity_rq(rq))
-		sg_free_table_chained(&req->md_sgl->sg_table,
-				      NVME_INLINE_MD_SG_CNT);
+		sg_free_table_chained(&req->metadata_sgl->sg_table,
+				      NVME_INLINE_METADATA_SG_CNT);
 out_unmap_sg:
 	ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
 			rq_dma_dir(rq));
@@ -1976,9 +1963,13 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	blk_mq_start_request(rq);
 
-	if (c->common.opcode == nvme_cmd_write ||
-	    c->common.opcode == nvme_cmd_read)
-		req->use_md = queue->pi_support && nvme_ns_has_pi(ns);
+	if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
+	    queue->pi_support && nvme_ns_has_pi(ns) &&
+	    (c->common.opcode == nvme_cmd_write ||
+	     c->common.opcode == nvme_cmd_read))
+		req->use_sig_mr = true;
+	else
+		req->use_sig_mr = false;
 
 	err = nvme_rdma_map_data(queue, rq, c);
 	if (unlikely(err < 0)) {
@@ -2060,7 +2051,7 @@ static void nvme_rdma_complete_rq(struct request *rq)
 	struct nvme_rdma_queue *queue = req->queue;
 	struct ib_device *ibdev = queue->device->dev;
 
-	if (req->use_md)
+	if (nvme_rdma_use_sig_mr(req))
 		nvme_rdma_check_pi_status(req);
 
 	nvme_rdma_unmap_data(queue, rq);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 058d895544c75..841858150ab7a 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -169,8 +169,6 @@ struct disk_part_tbl {
 struct disk_events;
 struct badblocks;
 
-#if defined(CONFIG_BLK_DEV_INTEGRITY)
-
 struct blk_integrity {
 	const struct blk_integrity_profile	*profile;
 	unsigned char				flags;
@@ -179,8 +177,6 @@ struct blk_integrity {
 	unsigned char				tag_size;
 };
 
-#endif	/* CONFIG_BLK_DEV_INTEGRITY */
-
 struct gendisk {
 	/* major, first_minor and minors are input parameters only,
 	 * don't use directly.  Use disk_devt() and disk_max_parts().


* Re: [PATCH 01/15] nvme: introduce namespace features flag
  2020-05-01 14:24   ` Christoph Hellwig
@ 2020-05-01 14:33     ` Max Gurtovoy
  0 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-05-01 14:33 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc


On 5/1/2020 5:24 PM, Christoph Hellwig wrote:
> This seems to break lightnvm:
>
> drivers/nvme/host/lightnvm.c: In function ‘nvme_nvm_register’:
> drivers/nvme/host/lightnvm.c:964:15: error: ‘struct nvme_ns’ has no member named ‘ext’
>    964 |  geo->ext = ns->ext;
>        |

I'll fix that, thanks.



* Re: [PATCH 08/15] nvme-rdma: add metadata/T10-PI support
  2020-05-01 14:26   ` Christoph Hellwig
@ 2020-05-01 15:00     ` Max Gurtovoy
  0 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-05-01 15:00 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc


On 5/1/2020 5:26 PM, Christoph Hellwig wrote:
> On Tue, Apr 28, 2020 at 04:11:28PM +0300, Max Gurtovoy wrote:
>> For capable HCAs (e.g. ConnectX-5/ConnectX-6) this will allow end-to-end
>> protection information passthrough and validation for NVMe over RDMA
>> transport. Metadata offload support was implemented over the new RDMA
>> signature verbs API and it is enabled by per capable controllers.
> What do you think about the following changes to keep the churn down
> a bit, and to let the compiler optimize out all the integrity code if
> not built into the kernel?  The two header changes will need to go
> into their own patches of course.

Looks good to me, but I need to review it more carefully and test it
(especially around the nvme_rdma_request memory layout).

I'll do it early next week.


> [full quoted diff snipped; identical to the diff in Christoph's message above]


* Re: [PATCH 09/15] nvmet: add metadata characteristics for a namespace
  2020-04-28 13:11 ` [PATCH 09/15] nvmet: add metadata characteristics for a namespace Max Gurtovoy
@ 2020-05-01 15:50   ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 15:50 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

On Tue, Apr 28, 2020 at 04:11:29PM +0300, Max Gurtovoy wrote:
> From: Israel Rukshin <israelr@mellanox.com>
> 
> Fill those namespace fields from the block device format for adding
> metadata (T10-PI) over fabric support with block devices.
> 
> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH 12/15] nvme: add Metadata Capabilities enumerations
  2020-04-28 13:11 ` [PATCH 12/15] nvme: add Metadata Capabilities enumerations Max Gurtovoy
@ 2020-05-01 15:53   ` Christoph Hellwig
  2020-05-03 12:43     ` Max Gurtovoy
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 15:53 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

On Tue, Apr 28, 2020 at 04:11:32PM +0300, Max Gurtovoy wrote:
> From: Israel Rukshin <israelr@mellanox.com>
> 
> The enumerations will be used to expose the namespace metadata format by
> the target.
> 
> Suggested-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

And while reviewing this I really wonder why NVMe has Bit 4
of FLBAS and the MC field, as I can't see what MC buys us over the
FLBAS bit.  But of course we'll need to set this on the target side,
so..


* Re: [PATCH 13/15] nvmet: add metadata/T10-PI support
  2020-04-28 13:11 ` [PATCH 13/15] nvmet: add metadata/T10-PI support Max Gurtovoy
@ 2020-05-01 15:58   ` Christoph Hellwig
  2020-05-01 16:19   ` Christoph Hellwig
  1 sibling, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 15:58 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

> +static ssize_t nvmet_subsys_attr_pi_enable_store(struct config_item *item,
> +						 const char *page, size_t count)
> +{
> +	struct nvmet_subsys *subsys = to_subsys(item);
> +	bool pi_enable;
> +
> +	if (strtobool(page, &pi_enable))
> +		return -EINVAL;
> +
> +	down_write(&nvmet_config_sem);
> +	subsys->pi_support = pi_enable;
> +	up_write(&nvmet_config_sem);

No need to take a lock for setting a single scalar value.
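E.g. the store could then collapse to something like this (untested sketch, keeping the
strtobool parsing from the hunk above):

static ssize_t nvmet_subsys_attr_pi_enable_store(struct config_item *item,
						 const char *page, size_t count)
{
	struct nvmet_subsys *subsys = to_subsys(item);
	bool pi_enable;

	if (strtobool(page, &pi_enable))
		return -EINVAL;

	/* plain store, no nvmet_config_sem needed for a single bool */
	subsys->pi_support = pi_enable;
	return count;
}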

> +	/*
> +	 * If the user requested PI support and the transport isn't pi capable,
> +	 * don't enable the port.
> +	 */
> +	if (port->pi_enable && !ops->metadata_support) {
> +		ret = -EINVAL;

In this case it might be worth printing a message to help with debugging.
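E.g. (rough sketch; the exact message text and the use of port->disc_addr.trtype here
are just a suggestion, the rest of the hunk stays as is):

	if (port->pi_enable && !ops->metadata_support) {
		/* make the configuration mismatch visible in dmesg */
		pr_err("T10-PI is not supported by transport type %d\n",
		       port->disc_addr.trtype);
		ret = -EINVAL;
	}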

> +	if (ctrl->port->pi_enable)
> +		ctrl->pi_support = ctrl->subsys->pi_support;

I find this a little weird to read due to the mix of styles.  Either:

	if (ctrl->port->pi_enable && ctrl->subsys->pi_support)
		ctrl->pi_support = true;

or

	ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;

seem ok to me, with a slight preference for the first one.

> +static inline u32 nvmet_rw_md_len(struct nvmet_req *req)

Please spell out metadata instead of md.

> +{
> +	return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
> +			req->ns->metadata_size;
> +}

Also this could probably use another IS_ENABLED to optimize out
the code if CONFIG_BLK_DEV_INTEGRITY is not enabled.


* Re: [PATCH 13/15] nvmet: add metadata/T10-PI support
  2020-04-28 13:11 ` [PATCH 13/15] nvmet: add metadata/T10-PI support Max Gurtovoy
  2020-05-01 15:58   ` Christoph Hellwig
@ 2020-05-01 16:19   ` Christoph Hellwig
  1 sibling, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 16:19 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

>  
> +static inline u32 nvmet_rw_md_len(struct nvmet_req *req)
> +{
> +	return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
> +			req->ns->metadata_size;
> +}

> +static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
> +{
> +	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
> +		return false;
> +	return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
> +}

Nit: Can you add nvmet_ns_has_pi not in the middle of the helpers that
parse the command, but in a different place?


* Re: [PATCH 14/15] nvmet: add metadata support for block devices
  2020-04-28 13:11 ` [PATCH 14/15] nvmet: add metadata support for block devices Max Gurtovoy
@ 2020-05-01 16:25   ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 16:25 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

Looks generally good.  _md_ should be renamed to _metadata_, and
I think we can do the SGL allocation a little cleaner.  Please take
a look at the attached diff:

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index c1456ce9a8427..485285c9c8a3c 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -884,11 +884,11 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 	req->sq = sq;
 	req->ops = ops;
 	req->sg = NULL;
-	req->md_sg = NULL;
+	req->metadata_sg = NULL;
 	req->sg_cnt = 0;
-	req->md_sg_cnt = 0;
+	req->metadata_sg_cnt = 0;
 	req->transfer_len = 0;
-	req->md_len = 0;
+	req->metadata_len = 0;
 	req->cqe->status = 0;
 	req->cqe->sq_head = 0;
 	req->ns = NULL;
@@ -973,103 +973,92 @@ bool nvmet_check_data_len_lte(struct nvmet_req *req, size_t data_len)
 	return true;
 }
 
-void nvmet_req_free_p2pmem_sgls(struct nvmet_req *req)
+static unsigned int nvmet_data_transfer_len(struct nvmet_req *req)
 {
-	pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
-	if (req->md_sg)
-		pci_p2pmem_free_sgl(req->p2p_dev, req->md_sg);
+	return req->transfer_len - req->metadata_len;
 }
 
-static int nvmet_req_alloc_p2pmem_sgls(struct nvmet_req *req, int data_len,
-		struct pci_dev *p2p_dev)
+static int nvmet_req_alloc_p2pmem_sgls(struct nvmet_req *req)
 {
-	req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt, data_len);
+	req->sg = pci_p2pmem_alloc_sgl(req->p2p_dev, &req->sg_cnt,
+			nvmet_data_transfer_len(req));
 	if (!req->sg)
 		goto out_err;
 
-	if (req->md_len) {
-		req->md_sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->md_sg_cnt,
-						  req->md_len);
-		if (!req->md_sg)
+	if (req->metadata_len) {
+		req->metadata_sg = pci_p2pmem_alloc_sgl(req->p2p_dev,
+				&req->metadata_sg_cnt, req->metadata_len);
+		if (!req->metadata_sg)
 			goto out_free_sg;
 	}
-	req->p2p_dev = p2p_dev;
 	return 0;
 
 out_free_sg:
-	pci_p2pmem_free_sgl(p2p_dev, req->sg);
+	pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
 out_err:
+	req->p2p_dev = NULL;
 	return -ENOMEM;
 }
 
-void nvmet_req_free_mem_sgls(struct nvmet_req *req)
+static bool nvmet_req_find_p2p_dev(struct nvmet_req *req)
 {
-	sgl_free(req->sg);
-	if (req->md_sg)
-		sgl_free(req->md_sg);
+	if (!IS_ENABLED(CONFIG_PCI_P2PDMA))
+		return false;
+
+	if (req->sq->ctrl && req->sq->qid && req->ns) {
+		req->p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map,
+						 req->ns->nsid);
+		if (req->p2p_dev)
+			return true;
+	}
+
+	req->p2p_dev = NULL;
+	return false;
 }
 
-static int nvmet_req_alloc_mem_sgls(struct nvmet_req *req, int data_len)
+int nvmet_req_alloc_sgls(struct nvmet_req *req)
 {
-	req->sg = sgl_alloc(data_len, GFP_KERNEL, &req->sg_cnt);
+	if (nvmet_req_find_p2p_dev(req) && !nvmet_req_alloc_p2pmem_sgls(req))
+		return 0;
+
+	req->sg = sgl_alloc(nvmet_data_transfer_len(req), GFP_KERNEL,
+			    &req->sg_cnt);
 	if (unlikely(!req->sg))
 		goto out;
 
-	if (req->md_len) {
-		req->md_sg = sgl_alloc(req->md_len, GFP_KERNEL,
-				       &req->md_sg_cnt);
-		if (unlikely(!req->md_sg))
+	if (req->metadata_len) {
+		req->metadata_sg = sgl_alloc(req->metadata_len, GFP_KERNEL,
+				       &req->metadata_sg_cnt);
+		if (unlikely(!req->metadata_sg))
 			goto out_free;
 	}
 
 	return 0;
-
 out_free:
 	sgl_free(req->sg);
 out:
 	return -ENOMEM;
 }
+EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgls);
 
-int nvmet_req_alloc_sgl(struct nvmet_req *req)
+void nvmet_req_free_sgls(struct nvmet_req *req)
 {
-	struct pci_dev *p2p_dev = NULL;
-	int data_len = req->transfer_len - req->md_len;
-
-	if (IS_ENABLED(CONFIG_PCI_P2PDMA)) {
-		if (req->sq->ctrl && req->ns)
-			p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map,
-						    req->ns->nsid);
-
-		req->p2p_dev = NULL;
-		if (req->sq->qid && p2p_dev) {
-			int ret = nvmet_req_alloc_p2pmem_sgls(req, data_len,
-							      p2p_dev);
-			if (!ret)
-				return 0;
-		}
+	if (req->p2p_dev) {
+		pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
+		if (req->metadata_sg)
+			pci_p2pmem_free_sgl(req->p2p_dev, req->metadata_sg);
+	} else {
+		sgl_free(req->sg);
+		if (req->metadata_sg)
+			sgl_free(req->metadata_sg);
 	}
 
-	/*
-	 * If no P2P memory was available/enabled we fallback to using regular
-	 * memory.
-	 */
-	return nvmet_req_alloc_mem_sgls(req, data_len);
-}
-EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl);
-
-void nvmet_req_free_sgl(struct nvmet_req *req)
-{
-	if (req->p2p_dev)
-		nvmet_req_free_p2pmem_sgls(req);
-	else
-		nvmet_req_free_mem_sgls(req);
-
 	req->sg = NULL;
-	req->md_sg = NULL;
+	req->metadata_sg = NULL;
 	req->sg_cnt = 0;
-	req->md_sg_cnt = 0;
+	req->metadata_sg_cnt = 0;
 }
-EXPORT_SYMBOL_GPL(nvmet_req_free_sgl);
+EXPORT_SYMBOL_GPL(nvmet_req_free_sgls);
 
 static inline bool nvmet_cc_en(u32 cc)
 {
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 4243156146b92..02ce6df8877b8 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -191,7 +191,7 @@ static int nvmet_bdev_alloc_bip(struct nvmet_req *req, struct bio *bio,
 	}
 
 	bip = bio_integrity_alloc(bio, GFP_NOIO,
-			min_t(unsigned int, req->md_sg_cnt, BIO_MAX_PAGES));
+		min_t(unsigned int, req->metadata_sg_cnt, BIO_MAX_PAGES));
 	if (IS_ERR(bip)) {
 		pr_err("Unable to allocate bio_integrity_payload\n");
 		return PTR_ERR(bip);
@@ -238,9 +238,10 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	sector_t sector;
 	int op, i, rc;
 	struct sg_mapping_iter prot_miter;
+	unsigned int iter_flags;
+	unsigned int total_len = nvmet_rw_data_len(req) + req->metadata_len;
 
-	if (!nvmet_check_transfer_len(req,
-				      nvmet_rw_data_len(req) + req->md_len))
+	if (!nvmet_check_transfer_len(req, total_len))
 		return;
 
 	if (!req->sg_cnt) {
@@ -252,8 +253,10 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 		op = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
 		if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
 			op |= REQ_FUA;
+		iter_flags = SG_MITER_TO_SG;
 	} else {
 		op = REQ_OP_READ;
+		iter_flags = SG_MITER_FROM_SG;
 	}
 
 	if (is_pci_p2pdma_page(sg_page(req->sg)))
@@ -275,17 +278,16 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	bio->bi_opf = op;
 
 	blk_start_plug(&plug);
-	if (req->md_len)
-		sg_miter_start(&prot_miter, req->md_sg, req->md_sg_cnt,
-			       op == REQ_OP_READ ? SG_MITER_FROM_SG :
-						   SG_MITER_TO_SG);
+	if (req->metadata_len)
+		sg_miter_start(&prot_miter, req->metadata_sg,
+				req->metadata_sg_cnt, iter_flags);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
 				!= sg->length) {
 			struct bio *prev = bio;
 
-			if (req->md_len) {
+			if (req->metadata_len) {
 				rc = nvmet_bdev_alloc_bip(req, bio,
 							  &prot_miter);
 				if (unlikely(rc)) {
@@ -307,7 +309,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 		sg_cnt--;
 	}
 
-	if (req->md_len) {
+	if (req->metadata_len) {
 		rc = nvmet_bdev_alloc_bip(req, bio, &prot_miter);
 		if (unlikely(rc)) {
 			bio_io_error(bio);
@@ -443,7 +445,7 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write:
 		req->execute = nvmet_bdev_execute_rw;
 		if (req->sq->ctrl->pi_support && nvmet_ns_has_pi(req->ns))
-			req->md_len = nvmet_rw_md_len(req);
+			req->metadata_len = nvmet_rw_metadata_len(req);
 		return 0;
 	case nvme_cmd_flush:
 		req->execute = nvmet_bdev_execute_flush;
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 706952032ef8f..237707e72120d 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -309,7 +309,7 @@ struct nvmet_req {
 	struct nvmet_cq		*cq;
 	struct nvmet_ns		*ns;
 	struct scatterlist	*sg;
-	struct scatterlist	*md_sg;
+	struct scatterlist	*metadata_sg;
 	struct bio_vec		inline_bvec[NVMET_MAX_INLINE_BIOVEC];
 	union {
 		struct {
@@ -323,10 +323,10 @@ struct nvmet_req {
 		} f;
 	};
 	int			sg_cnt;
-	int			md_sg_cnt;
+	int			metadata_sg_cnt;
 	/* data length as parsed from the SGL descriptor: */
 	size_t			transfer_len;
-	size_t			md_len;
+	size_t			metadata_len;
 
 	struct nvmet_port	*port;
 
@@ -397,8 +397,8 @@ void nvmet_req_uninit(struct nvmet_req *req);
 bool nvmet_check_transfer_len(struct nvmet_req *req, size_t len);
 bool nvmet_check_data_len_lte(struct nvmet_req *req, size_t data_len);
 void nvmet_req_complete(struct nvmet_req *req, u16 status);
-int nvmet_req_alloc_sgl(struct nvmet_req *req);
-void nvmet_req_free_sgl(struct nvmet_req *req);
+int nvmet_req_alloc_sgls(struct nvmet_req *req);
+void nvmet_req_free_sgls(struct nvmet_req *req);
 
 void nvmet_execute_keep_alive(struct nvmet_req *req);
 
@@ -517,7 +517,7 @@ static inline u32 nvmet_rw_data_len(struct nvmet_req *req)
 			req->ns->blksize_shift;
 }
 
-static inline u32 nvmet_rw_md_len(struct nvmet_req *req)
+static inline u32 nvmet_rw_metadata_len(struct nvmet_req *req)
 {
 	return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
 			req->ns->metadata_size;
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index f4a4721774a80..b62a56959b0ce 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -613,7 +613,7 @@ static void nvmet_rdma_set_sig_attrs(struct nvmet_req *req,
 		control &= ~NVME_RW_PRINFO_PRACT;
 		cmd->rw.control = cpu_to_le16(control);
 		/* PI is added by the HW */
-		req->transfer_len += req->md_len;
+		req->transfer_len += req->metadata_len;
 	} else {
 		/* for WRITE_PASS/READ_PASS both wire/memory domains exist */
 		nvmet_rdma_set_sig_domain(bi, cmd, &sig_attrs->wire, control);
@@ -635,10 +635,10 @@ static int nvmet_rdma_rw_ctx_init(struct nvmet_rdma_rsp *rsp, u64 addr, u32 key,
 	struct nvmet_req *req = &rsp->req;
 	int ret;
 
-	if (req->md_len)
+	if (req->metadata_len)
 		ret = rdma_rw_ctx_signature_init(&rsp->rw, cm_id->qp,
-			cm_id->port_num, req->sg, req->sg_cnt, req->md_sg,
-			req->md_sg_cnt, sig_attrs, addr, key,
+			cm_id->port_num, req->sg, req->sg_cnt, req->metadata_sg,
+			req->metadata_sg_cnt, sig_attrs, addr, key,
 			nvmet_data_dir(req));
 	else
 		ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
@@ -653,10 +653,10 @@ static void nvmet_rdma_rw_ctx_destroy(struct nvmet_rdma_rsp *rsp)
 	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
 	struct nvmet_req *req = &rsp->req;
 
-	if (req->md_len)
+	if (req->metadata_len)
 		rdma_rw_ctx_destroy_signature(&rsp->rw, cm_id->qp,
-			cm_id->port_num, req->sg, req->sg_cnt, req->md_sg,
-			req->md_sg_cnt, nvmet_data_dir(req));
+			cm_id->port_num, req->sg, req->sg_cnt, req->metadata_sg,
+			req->metadata_sg_cnt, nvmet_data_dir(req));
 	else
 		rdma_rw_ctx_destroy(&rsp->rw, cm_id->qp, cm_id->port_num,
 				    req->sg, req->sg_cnt, nvmet_data_dir(req));
@@ -672,7 +672,7 @@ static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
 		nvmet_rdma_rw_ctx_destroy(rsp);
 
 	if (rsp->req.sg != rsp->cmd->inline_sg)
-		nvmet_req_free_sgl(&rsp->req);
+		nvmet_req_free_sgls(&rsp->req);
 
 	if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list)))
 		nvmet_rdma_process_wr_wait_list(queue);
@@ -725,7 +725,7 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
 	}
 
 	if (nvmet_rdma_need_data_out(rsp)) {
-		if (rsp->req.md_len)
+		if (rsp->req.metadata_len)
 			first_wr = rdma_rw_ctx_wrs(&rsp->rw, cm_id->qp,
 					cm_id->port_num, &rsp->write_cqe, NULL);
 		else
@@ -770,7 +770,7 @@ static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	if (rsp->req.md_len)
+	if (rsp->req.metadata_len)
 		status = nvmet_rdma_check_pi_status(rsp->rw.reg->mr);
 	nvmet_rdma_rw_ctx_destroy(rsp);
 
@@ -889,10 +889,10 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 	if (!rsp->req.transfer_len)
 		return 0;
 
-	if (rsp->req.md_len)
+	if (rsp->req.metadata_len)
 		nvmet_rdma_set_sig_attrs(&rsp->req, &sig_attrs);
 
-	ret = nvmet_req_alloc_sgl(&rsp->req);
+	ret = nvmet_req_alloc_sgls(&rsp->req);
 	if (unlikely(ret < 0))
 		goto error_out;
 


* Re: [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support
  2020-04-28 13:11 ` [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support Max Gurtovoy
@ 2020-05-01 16:48   ` Christoph Hellwig
  2020-05-03 16:29     ` Max Gurtovoy
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-01 16:48 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc, hch

> +static u16 nvmet_rdma_check_pi_status(struct ib_mr *sig_mr)
> +{
> +	struct ib_mr_status mr_status;
> +	int ret;
> +	u16 status = 0;
> +
> +	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
> +		return 0;

I'd expect this in the callers..
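E.g. in the read-done path, something like this (untested sketch, keeping the md_len
name this patch uses):

	if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) && rsp->req.md_len)
		status = nvmet_rdma_check_pi_status(rsp->rw.reg->mr);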

> +static void nvmet_rdma_set_sig_domain(struct blk_integrity *bi,
> +		struct nvme_command *cmd, struct ib_sig_domain *domain,
> +		u16 control)
> +{
> +	domain->sig_type = IB_SIG_TYPE_T10_DIF;
> +	domain->sig.dif.bg_type = IB_T10DIF_CRC;
> +#ifdef CONFIG_BLK_DEV_INTEGRITY
> +	domain->sig.dif.pi_interval = 1 << bi->interval_exp;
> +#endif

With the genhd.h tweak I sent in response to the host code, we
shouldn't need this.

> +	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
> +		return;

Same comment as above.

> +	if (nvmet_rdma_need_data_out(rsp)) {
> +		if (rsp->req.md_len)

Can we have a helper for metadata len that also uses
IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) to let the compiler garbage collect
the whole code if it is compiled out?


* Re: [PATCH 12/15] nvme: add Metadata Capabilities enumerations
  2020-05-01 15:53   ` Christoph Hellwig
@ 2020-05-03 12:43     ` Max Gurtovoy
  0 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-05-03 12:43 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc


On 5/1/2020 6:53 PM, Christoph Hellwig wrote:
> On Tue, Apr 28, 2020 at 04:11:32PM +0300, Max Gurtovoy wrote:
>> From: Israel Rukshin <israelr@mellanox.com>
>>
>> The enumerations will be used to expose the namespace metadata format by
>> the target.
>>
>> Suggested-by: Christoph Hellwig <hch@lst.de>
>> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
>> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> Looks good,
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
>
> And while reviewing this I really wonder why NVMe has Bit 4
> of FLBAS and the MC field, as I can't see what MC buys us over the
> FLBAS bit.  But of course we'll need to set this on the target side,
> so..

Right, this Bit 4 seems redundant to me as well.

But we set both FLBAS and MC to be compatible with the spec and with
other SW implementations that might check both fields.
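For reference, on the target side the identify-namespace path ends up filling both,
roughly like this (sketch only; it assumes the MC enumeration names that patch 12/15
adds and the ns->pi_type/metadata_size fields from patch 09/15):

	if (req->sq->ctrl->pi_support && nvmet_ns_has_pi(req->ns)) {
		id->mc = NVME_MC_EXTENDED_LBA;		/* metadata at the end of the LBA */
		id->flbas |= NVME_NS_FLBAS_META_EXT;	/* FLBAS bit 4: extended LBA format */
		id->dps = req->ns->pi_type;
		id->lbaf[0].ms = cpu_to_le16(req->ns->metadata_size);
	}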



* Re: [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support
  2020-05-01 16:48   ` Christoph Hellwig
@ 2020-05-03 16:29     ` Max Gurtovoy
  2020-05-04 14:08       ` Christoph Hellwig
  0 siblings, 1 reply; 35+ messages in thread
From: Max Gurtovoy @ 2020-05-03 16:29 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc


On 5/1/2020 7:48 PM, Christoph Hellwig wrote:
>> +static u16 nvmet_rdma_check_pi_status(struct ib_mr *sig_mr)
>> +{
>> +	struct ib_mr_status mr_status;
>> +	int ret;
>> +	u16 status = 0;
>> +
>> +	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
>> +		return 0;
> I'd expect this in the callers..
>
>> +static void nvmet_rdma_set_sig_domain(struct blk_integrity *bi,
>> +		struct nvme_command *cmd, struct ib_sig_domain *domain,
>> +		u16 control)
>> +{
>> +	domain->sig_type = IB_SIG_TYPE_T10_DIF;
>> +	domain->sig.dif.bg_type = IB_T10DIF_CRC;
>> +#ifdef CONFIG_BLK_DEV_INTEGRITY
>> +	domain->sig.dif.pi_interval = 1 << bi->interval_exp;
>> +#endif
> With the genhd.h tweak I sent in response to the host code, we
> shouldn't need this.
>
>> +	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
>> +		return;
> Same comment as above.
>
>> +	if (nvmet_rdma_need_data_out(rsp)) {
>> +		if (rsp->req.md_len)
> Can we have a helper for metadata len that also uses
> IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) to let the compiler garbage collect
> the whole code if it is compiled out?

We have it in:

static inline u32 nvmet_rw_metadata_len(struct nvmet_req *req)
{
         if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
                 return 0;
         return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
                         req->ns->metadata_size;
}


and

req->metadata_len = nvmet_rw_metadata_len(req);

Is it good enough for the compiler?




* Re: [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support
  2020-05-03 16:29     ` Max Gurtovoy
@ 2020-05-04 14:08       ` Christoph Hellwig
  2020-05-04 14:19         ` Max Gurtovoy
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2020-05-04 14:08 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc,
	Christoph Hellwig

On Sun, May 03, 2020 at 07:29:12PM +0300, Max Gurtovoy wrote:
>> Can we have a helper for metadata len that also uses
>> IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) to let the compiler garbage collect
>> the whole code if it is compiled out?
>
> We have it in:
>
> static inline u32 nvmet_rw_metadata_len(struct nvmet_req *req)
> {
>         if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
>                 return 0;
>         return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
>                         req->ns->metadata_size;
> }
>
>
> and
>
> req->metadata_len = nvmet_rw_metadata_len(req);
>
> is it good enough for the compiler ?

Well, this is the calculated metadata len, not the field in the
request, and the compiler can't really know that the field is always 0
in that case.  At some point these optimizations might become a little
too ugly, though.
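If we did want to go that far, a trivial accessor over the field itself would be
enough for the compiler to see it, e.g.:

static inline u32 nvmet_req_metadata_len(struct nvmet_req *req)
{
	/* constant 0 when integrity support is compiled out */
	if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
		return 0;
	return req->metadata_len;
}

(nvmet_req_metadata_len is just an illustrative name here, not something in the
series.)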


* Re: [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support
  2020-05-04 14:08       ` Christoph Hellwig
@ 2020-05-04 14:19         ` Max Gurtovoy
  0 siblings, 0 replies; 35+ messages in thread
From: Max Gurtovoy @ 2020-05-04 14:19 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, jsmart2021, sagi, martin.petersen, shlomin, israelr,
	vladimirk, linux-nvme, idanb, oren, kbusch, nitzanc


On 5/4/2020 5:08 PM, Christoph Hellwig wrote:
> On Sun, May 03, 2020 at 07:29:12PM +0300, Max Gurtovoy wrote:
>>> Can we have a helper for metadata len that also uses
>>> IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) to let the compiler garbage collect
>>> the whole code if it is compiled out?
>> We have it in:
>>
>> static inline u32 nvmet_rw_metadata_len(struct nvmet_req *req)
>> {
>>          if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
>>                  return 0;
>>          return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
>>                          req->ns->metadata_size;
>> }
>>
>>
>> and
>>
>> req->metadata_len = nvmet_rw_metadata_len(req);
>>
>> is it good enough for the compiler ?
> Well, this is the calculated metadata len, not the field in the
> request, and the compiler can't really know it always is 0 in that
> case.  At some point these optimizations might become a little
> too ugly, though.

Yes, I tried to strike a balance between optimization and code
readability, and I'll send V7 today.




end of thread, other threads:[~2020-05-04 14:19 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-28 13:11 [PATCH 00/15 V6] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
2020-04-28 13:11 ` [PATCH 01/15] nvme: introduce namespace features flag Max Gurtovoy
2020-05-01 13:20   ` Christoph Hellwig
2020-05-01 14:24   ` Christoph Hellwig
2020-05-01 14:33     ` Max Gurtovoy
2020-04-28 13:11 ` [PATCH 02/15] nvme: introduce NVME_NS_METADATA_SUPPORTED flag Max Gurtovoy
2020-05-01 13:20   ` Christoph Hellwig
2020-04-28 13:11 ` [PATCH 03/15] nvme: make nvme_ns_has_pi accessible to transports Max Gurtovoy
2020-05-01 13:20   ` Christoph Hellwig
2020-04-28 13:11 ` [PATCH 04/15] nvme: enforce extended LBA format for fabrics metadata Max Gurtovoy
2020-05-01 13:21   ` Christoph Hellwig
2020-05-01 13:41   ` Christoph Hellwig
2020-04-28 13:11 ` [PATCH 05/15] nvme: introduce max_integrity_segments ctrl attribute Max Gurtovoy
2020-04-28 13:11 ` [PATCH 06/15] nvme: introduce NVME_INLINE_MD_SG_CNT Max Gurtovoy
2020-04-28 13:11 ` [PATCH 07/15] nvme-rdma: introduce nvme_rdma_sgl structure Max Gurtovoy
2020-04-28 13:11 ` [PATCH 08/15] nvme-rdma: add metadata/T10-PI support Max Gurtovoy
2020-05-01 14:26   ` Christoph Hellwig
2020-05-01 15:00     ` Max Gurtovoy
2020-04-28 13:11 ` [PATCH 09/15] nvmet: add metadata characteristics for a namespace Max Gurtovoy
2020-05-01 15:50   ` Christoph Hellwig
2020-04-28 13:11 ` [PATCH 10/15] nvmet: rename nvmet_rw_len to nvmet_rw_data_len Max Gurtovoy
2020-04-28 13:11 ` [PATCH 11/15] nvmet: rename nvmet_check_data_len to nvmet_check_transfer_len Max Gurtovoy
2020-04-28 13:11 ` [PATCH 12/15] nvme: add Metadata Capabilities enumerations Max Gurtovoy
2020-05-01 15:53   ` Christoph Hellwig
2020-05-03 12:43     ` Max Gurtovoy
2020-04-28 13:11 ` [PATCH 13/15] nvmet: add metadata/T10-PI support Max Gurtovoy
2020-05-01 15:58   ` Christoph Hellwig
2020-05-01 16:19   ` Christoph Hellwig
2020-04-28 13:11 ` [PATCH 14/15] nvmet: add metadata support for block devices Max Gurtovoy
2020-05-01 16:25   ` Christoph Hellwig
2020-04-28 13:11 ` [PATCH 15/15] nvmet-rdma: add metadata/T10-PI support Max Gurtovoy
2020-05-01 16:48   ` Christoph Hellwig
2020-05-03 16:29     ` Max Gurtovoy
2020-05-04 14:08       ` Christoph Hellwig
2020-05-04 14:19         ` Max Gurtovoy
