* [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support
@ 2019-12-02 14:47 Max Gurtovoy
2019-12-02 14:47 ` [PATCH] nvme-cli/fabrics: Add pi_enable param to connect cmd Max Gurtovoy
` (16 more replies)
0 siblings, 17 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:47 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
Hello Sagi, Christoph, Keith, Martin and Co,
This patchset adds metadata (T10-PI) support to the NVMeoF/RDMA host and
target sides, using the RDMA signature verbs API. The set starts with a few
preparation commits to the NVMe host core layer, continues with the
NVMeoF/RDMA host implementation, a BIP flag patch (block), and a few
preparation commits to the NVMe target core layer, and ends with the
NVMeoF/RDMA target implementation.
Configuration:
Host:
- nvme connect --pi_enable --transport=rdma --traddr=10.0.1.1 --nqn=test-nvme
Target:
- echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
- echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable
The code was tested using Mellanox's ConnectX-4/ConnectX-5 HCAs.
This series applies cleanly on top of the nvme_5.5 branch.
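For reference, the two target-side knobs above sit alongside the standard nvmet configfs objects; a fuller target bring-up might look like the sketch below. This is an illustration, not part of the patchset: the backing device, addresses, and the /config mount point are examples matching the cover letter's paths.

```shell
# Assumed: nvmet and nvmet-rdma modules loaded, configfs mounted at /config.
NAME=test-nvme
PORT_NUM=1
CFG=/config/nvmet

# Create the subsystem, enable PI on it, and back it with a namespace.
mkdir $CFG/subsystems/$NAME
echo 1 > $CFG/subsystems/$NAME/attr_allow_any_host
echo 1 > $CFG/subsystems/$NAME/attr_pi_enable
mkdir $CFG/subsystems/$NAME/namespaces/1
echo /dev/nvme0n1 > $CFG/subsystems/$NAME/namespaces/1/device_path
echo 1 > $CFG/subsystems/$NAME/namespaces/1/enable

# Create an RDMA port with PI enabled and link the subsystem to it.
mkdir $CFG/ports/$PORT_NUM
echo rdma     > $CFG/ports/$PORT_NUM/addr_trtype
echo ipv4     > $CFG/ports/$PORT_NUM/addr_adrfam
echo 10.0.1.1 > $CFG/ports/$PORT_NUM/addr_traddr
echo 4420     > $CFG/ports/$PORT_NUM/addr_trsvcid
echo 1        > $CFG/ports/$PORT_NUM/param_pi_enable
ln -s $CFG/subsystems/$NAME $CFG/ports/$PORT_NUM/subsystems/$NAME
```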
Changes from v1:
- Added Reviewed-by signatures
- Added a namespace features flag (Patch 01/16)
- Removed the nvme_ns_has_pi function (Patch 01/16)
- Added a has_pi field to struct nvme_request (Patch 01/16)
- Changed the subject of patch 02/16
- Fixed the comment on PCI metadata (Patch 03/16)
- Rebased over the "nvme: Avoid preallocating big SGL for data" patchset
- Introduced the NVME_INLINE_PROT_SG_CNT flag (Patch 05/16)
- Introduced the nvme_rdma_sgl structure (Patch 06/16)
- Removed the first_sgl pointer from struct nvme_rdma_request (Patch 06/16)
- Split the nvme-rdma patches (Patches 06/16, 07/16)
- Renamed is_protected to use_pi (Patch 07/16)
- Refactored the nvme_rdma_get_max_fr_pages function (Patch 07/16)
- Added ifdef CONFIG_BLK_DEV_INTEGRITY (Patches 07/16, 10/16, 14/16,
  15/16, 16/16)
- Used the existing BIP_MAPPED_INTEGRITY flag (Patches 08/16, 15/16)
- Added port configfs pi_enable (Patch 14/16)
Israel Rukshin (13):
nvme: Introduce namespace features flag
nvme-fabrics: Allow user enabling metadata/T10-PI support
nvme: Introduce NVME_INLINE_PROT_SG_CNT
nvme-rdma: Introduce nvme_rdma_sgl structure
block: Add a BIP_MAPPED_INTEGRITY check to complete_fn
nvmet: Prepare metadata request
nvmet: Add metadata characteristics for a namespace
nvmet: Rename nvmet_rw_len to nvmet_rw_data_len
nvmet: Rename nvmet_check_data_len to nvmet_check_transfer_len
nvme: Add Metadata Capabilities enumerations
nvmet: Add metadata/T10-PI support
nvmet: Add metadata support for block devices
nvmet-rdma: Add metadata/T10-PI support
Max Gurtovoy (3):
nvme: Enforce extended LBA format for fabrics metadata
nvme: Introduce max_integrity_segments ctrl attribute
nvme-rdma: Add metadata/T10-PI support
block/t10-pi.c | 4 +
drivers/nvme/host/core.c | 75 +++++---
drivers/nvme/host/fabrics.c | 11 ++
drivers/nvme/host/fabrics.h | 3 +
drivers/nvme/host/nvme.h | 9 +-
drivers/nvme/host/pci.c | 7 +
drivers/nvme/host/rdma.c | 368 +++++++++++++++++++++++++++++++++-----
drivers/nvme/target/admin-cmd.c | 37 ++--
drivers/nvme/target/configfs.c | 61 +++++++
drivers/nvme/target/core.c | 54 ++++--
drivers/nvme/target/discovery.c | 8 +-
drivers/nvme/target/fabrics-cmd.c | 19 +-
drivers/nvme/target/io-cmd-bdev.c | 115 +++++++++++-
drivers/nvme/target/io-cmd-file.c | 8 +-
drivers/nvme/target/nvmet.h | 40 ++++-
drivers/nvme/target/rdma.c | 235 ++++++++++++++++++++++--
include/linux/nvme.h | 2 +
17 files changed, 929 insertions(+), 127 deletions(-)
--
2.16.3
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
* [PATCH] nvme-cli/fabrics: Add pi_enable param to connect cmd
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
@ 2019-12-02 14:47 ` Max Gurtovoy
2019-12-02 14:47 ` [PATCH 01/16] nvme: Introduce namespace features flag Max Gurtovoy
` (15 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:47 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
Add a 'pi_enable' option to the 'connect' command so users can enable
metadata (T10-PI) support.
Usage examples:
nvme connect --pi_enable --transport=rdma --traddr=10.0.1.1 --nqn=test-nvme
nvme connect -p -t rdma -a 10.0.1.1 -n test_nvme
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
fabrics.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/fabrics.c b/fabrics.c
index 8982ae4..b22ea14 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -72,6 +72,7 @@ static struct config {
int disable_sqflow;
int hdr_digest;
int data_digest;
+ int pi_enable;
bool persistent;
bool quiet;
} cfg = { NULL };
@@ -756,7 +757,9 @@ static int build_options(char *argstr, int max_len, bool discover)
add_bool_argument(&argstr, &max_len, "disable_sqflow",
cfg.disable_sqflow) ||
add_bool_argument(&argstr, &max_len, "hdr_digest", cfg.hdr_digest) ||
- add_bool_argument(&argstr, &max_len, "data_digest", cfg.data_digest))
+ add_bool_argument(&argstr, &max_len, "data_digest",
+ cfg.data_digest) ||
+ add_bool_argument(&argstr, &max_len, "pi_enable", cfg.pi_enable))
return -EINVAL;
return 0;
@@ -885,6 +888,13 @@ retry:
p += len;
}
+ if (cfg.pi_enable) {
+ len = sprintf(p, ",pi_enable");
+ if (len < 0)
+ return -EINVAL;
+ p += len;
+ }
+
switch (e->trtype) {
case NVMF_TRTYPE_RDMA:
case NVMF_TRTYPE_TCP:
@@ -1171,6 +1181,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
OPT_INT("tos", 'T', &cfg.tos, "type of service"),
OPT_FLAG("hdr_digest", 'g', &cfg.hdr_digest, "enable transport protocol header digest (TCP transport)"),
OPT_FLAG("data_digest", 'G', &cfg.data_digest, "enable transport protocol data digest (TCP transport)"),
+ OPT_FLAG("pi_enable", 'p', &cfg.pi_enable, "enable metadata (T10-PI) support (default false)"),
OPT_INT("nr-io-queues", 'i', &cfg.nr_io_queues, "number of io queues to use (default is core count)"),
OPT_INT("nr-write-queues", 'W', &cfg.nr_write_queues, "number of write queues to use (default 0)"),
OPT_INT("nr-poll-queues", 'P', &cfg.nr_poll_queues, "number of poll queues to use (default 0)"),
@@ -1231,6 +1242,7 @@ int connect(const char *desc, int argc, char **argv)
OPT_FLAG("disable-sqflow", 'd', &cfg.disable_sqflow, "disable controller sq flow control (default false)"),
OPT_FLAG("hdr-digest", 'g', &cfg.hdr_digest, "enable transport protocol header digest (TCP transport)"),
OPT_FLAG("data-digest", 'G', &cfg.data_digest, "enable transport protocol data digest (TCP transport)"),
+ OPT_FLAG("pi_enable", 'p', &cfg.pi_enable, "enable metadata (T10-PI) support (default false)"),
OPT_END()
};
--
1.8.3.1
* [PATCH 01/16] nvme: Introduce namespace features flag
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
2019-12-02 14:47 ` [PATCH] nvme-cli/fabrics: Add pi_enable param to connect cmd Max Gurtovoy
@ 2019-12-02 14:47 ` Max Gurtovoy
2019-12-04 8:41 ` Christoph Hellwig
2019-12-02 14:47 ` [PATCH 02/16] nvme: Enforce extended LBA format for fabrics metadata Max Gurtovoy
` (14 subsequent siblings)
16 siblings, 1 reply; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:47 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
This patch is a preparation for adding PI support for fabrics drivers
that will use an extended LBA format for metadata.
Suggested-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
---
drivers/nvme/host/core.c | 40 +++++++++++++++++++++++++++-------------
drivers/nvme/host/nvme.h | 6 +++++-
2 files changed, 32 insertions(+), 14 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e6ee34376c5e..2e027627ab58 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -209,11 +209,6 @@ static int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl)
return ret;
}
-static inline bool nvme_ns_has_pi(struct nvme_ns *ns)
-{
- return ns->pi_type && ns->ms == sizeof(struct t10_pi_tuple);
-}
-
static blk_status_t nvme_error_status(u16 status)
{
switch (status & 0x7ff) {
@@ -473,6 +468,7 @@ static inline void nvme_clear_nvme_request(struct request *req)
if (!(req->rq_flags & RQF_DONTPREP)) {
nvme_req(req)->retries = 0;
nvme_req(req)->flags = 0;
+ nvme_req(req)->has_pi = false;
req->rq_flags |= RQF_DONTPREP;
}
}
@@ -706,6 +702,7 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
nvme_assign_write_stream(ctrl, req, &control, &dsmgmt);
if (ns->ms) {
+ nvme_req(req)->has_pi = ns->features & NVME_NS_DIF_SUPPORTED;
/*
* If formated with metadata, the block layer always provides a
* metadata buffer if CONFIG_BLK_DEV_INTEGRITY is enabled. Else
@@ -713,7 +710,7 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
* namespace capacity to zero to prevent any I/O.
*/
if (!blk_integrity_rq(req)) {
- if (WARN_ON_ONCE(!nvme_ns_has_pi(ns)))
+ if (WARN_ON_ONCE(!nvme_req(req)->has_pi))
return BLK_STS_NOTSUPP;
control |= NVME_RW_PRINFO_PRACT;
}
@@ -1271,7 +1268,7 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
meta_len = (io.nblocks + 1) * ns->ms;
metadata = (void __user *)(uintptr_t)io.metadata;
- if (ns->ext) {
+ if (ns->features & NVME_NS_EXT_LBAS) {
length += meta_len;
meta_len = 0;
} else if (meta_len) {
@@ -1799,10 +1796,10 @@ static void nvme_update_disk_info(struct gendisk *disk,
blk_queue_io_min(disk->queue, phys_bs);
blk_queue_io_opt(disk->queue, io_opt);
- if (ns->ms && !ns->ext &&
- (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
+ if (ns->features & NVME_NS_DIX_SUPPORTED)
nvme_init_integrity(disk, ns->ms, ns->pi_type);
- if ((ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk)) ||
+ if ((ns->ms && !(ns->features & NVME_NS_DIF_SUPPORTED) &&
+ !blk_get_integrity(disk)) ||
ns->lba_shift > PAGE_SHIFT)
capacity = 0;
@@ -1832,12 +1829,29 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
ns->lba_shift = 9;
ns->noiob = le16_to_cpu(id->noiob);
ns->ms = le16_to_cpu(id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ms);
- ns->ext = ns->ms && (id->flbas & NVME_NS_FLBAS_META_EXT);
+ ns->features = 0;
/* the PI implementation requires metadata equal t10 pi tuple size */
- if (ns->ms == sizeof(struct t10_pi_tuple))
+ if (ns->ms == sizeof(struct t10_pi_tuple)) {
ns->pi_type = id->dps & NVME_NS_DPS_PI_MASK;
- else
+ if (ns->pi_type)
+ ns->features |= NVME_NS_DIF_SUPPORTED;
+ } else {
ns->pi_type = 0;
+ }
+
+ if (ns->ms) {
+ if (id->flbas & NVME_NS_FLBAS_META_EXT)
+ ns->features |= NVME_NS_EXT_LBAS;
+
+ /*
+ * For PCI, Extended logical block will be generated by the
+ * controller.
+ */
+ if (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED) {
+ if (!(ns->features & NVME_NS_EXT_LBAS))
+ ns->features |= NVME_NS_DIX_SUPPORTED;
+ }
+ }
if (ns->noiob)
nvme_set_chunk_size(ns);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 1024fec7914c..72473902f364 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -139,6 +139,7 @@ struct nvme_request {
u8 flags;
u16 status;
struct nvme_ctrl *ctrl;
+ bool has_pi;
};
/*
@@ -381,8 +382,11 @@ struct nvme_ns {
u16 ms;
u16 sgs;
u32 sws;
- bool ext;
u8 pi_type;
+ unsigned long features;
+#define NVME_NS_EXT_LBAS (1 << 0)
+#define NVME_NS_DIX_SUPPORTED (1 << 1)
+#define NVME_NS_DIF_SUPPORTED (1 << 2)
unsigned long flags;
#define NVME_NS_REMOVING 0
#define NVME_NS_DEAD 1
--
2.16.3
* [PATCH 02/16] nvme: Enforce extended LBA format for fabrics metadata
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
2019-12-02 14:47 ` [PATCH] nvme-cli/fabrics: Add pi_enable param to connect cmd Max Gurtovoy
2019-12-02 14:47 ` [PATCH 01/16] nvme: Introduce namespace features flag Max Gurtovoy
@ 2019-12-02 14:47 ` Max Gurtovoy
2019-12-02 14:47 ` [PATCH 03/16] nvme: Introduce max_integrity_segments ctrl attribute Max Gurtovoy
` (13 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:47 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
An extended LBA is a larger LBA created when the metadata associated with
the LBA is transferred contiguously with the LBA data (i.e. interleaved).
The metadata may either be transferred as part of the LBA (creating an
extended LBA) or as a separate contiguous buffer of data. According to the
NVMeoF spec, a fabrics controller supports only the extended LBA format,
so fail revalidation in case of a spec violation. Also initialize the
integrity profile of the block device for fabrics controllers.
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
---
drivers/nvme/host/core.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2e027627ab58..2ebaa9edc08b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1816,7 +1816,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
blk_mq_unfreeze_queue(disk->queue);
}
-static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
+static int __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
{
struct nvme_ns *ns = disk->private_data;
@@ -1843,12 +1843,22 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
if (id->flbas & NVME_NS_FLBAS_META_EXT)
ns->features |= NVME_NS_EXT_LBAS;
+ /*
+ * For Fabrics, only metadata as part of extended data LBA is
+ * supported. Fail in case of a spec violation.
+ */
+ if (ns->ctrl->ops->flags & NVME_F_FABRICS) {
+ if (WARN_ON_ONCE(!(ns->features & NVME_NS_EXT_LBAS)))
+ return -EINVAL;
+ }
+
/*
* For PCI, Extended logical block will be generated by the
* controller.
*/
if (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED) {
- if (!(ns->features & NVME_NS_EXT_LBAS))
+ if (ns->ctrl->ops->flags & NVME_F_FABRICS ||
+ !(ns->features & NVME_NS_EXT_LBAS))
ns->features |= NVME_NS_DIX_SUPPORTED;
}
}
@@ -1863,6 +1873,7 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
revalidate_disk(ns->head->disk);
}
#endif
+ return 0;
}
static int nvme_revalidate_disk(struct gendisk *disk)
@@ -1887,7 +1898,10 @@ static int nvme_revalidate_disk(struct gendisk *disk)
goto free_id;
}
- __nvme_revalidate_disk(disk, id);
+ ret = __nvme_revalidate_disk(disk, id);
+ if (ret)
+ goto free_id;
+
ret = nvme_report_ns_ids(ctrl, ns->head->ns_id, id, &ids);
if (ret)
goto free_id;
@@ -3563,7 +3577,8 @@ static int nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
memcpy(disk->disk_name, disk_name, DISK_NAME_LEN);
ns->disk = disk;
- __nvme_revalidate_disk(disk, id);
+ if (__nvme_revalidate_disk(disk, id))
+ goto out_free_disk;
if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) {
ret = nvme_nvm_register(ns, disk_name, node);
@@ -3588,6 +3603,8 @@ static int nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
return 0;
out_put_disk:
put_disk(ns->disk);
+ out_free_disk:
+ del_gendisk(ns->disk);
out_unlink_ns:
mutex_lock(&ctrl->subsys->lock);
list_del_rcu(&ns->siblings);
--
2.16.3
* [PATCH 03/16] nvme: Introduce max_integrity_segments ctrl attribute
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (2 preceding siblings ...)
2019-12-02 14:47 ` [PATCH 02/16] nvme: Enforce extended LBA format for fabrics metadata Max Gurtovoy
@ 2019-12-02 14:47 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 04/16] nvme-fabrics: Allow user enabling metadata/T10-PI support Max Gurtovoy
` (12 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:47 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
This patch doesn't change any logic, and is needed as a preparation
for adding PI support for fabrics drivers that will use an extended
LBA format for metadata.
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/host/core.c | 11 +++++++----
drivers/nvme/host/nvme.h | 1 +
drivers/nvme/host/pci.c | 7 +++++++
3 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2ebaa9edc08b..3c1516fc5288 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1619,7 +1619,8 @@ static int nvme_getgeo(struct block_device *bdev, struct hd_geometry *geo)
}
#ifdef CONFIG_BLK_DEV_INTEGRITY
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type)
+static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
+ u32 max_integrity_segments)
{
struct blk_integrity integrity;
@@ -1642,10 +1643,11 @@ static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type)
}
integrity.tuple_size = ms;
blk_integrity_register(disk, &integrity);
- blk_queue_max_integrity_segments(disk->queue, 1);
+ blk_queue_max_integrity_segments(disk->queue, max_integrity_segments);
}
#else
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type)
+static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
+ u32 max_integrity_segments)
{
}
#endif /* CONFIG_BLK_DEV_INTEGRITY */
@@ -1797,7 +1799,8 @@ static void nvme_update_disk_info(struct gendisk *disk,
blk_queue_io_opt(disk->queue, io_opt);
if (ns->features & NVME_NS_DIX_SUPPORTED)
- nvme_init_integrity(disk, ns->ms, ns->pi_type);
+ nvme_init_integrity(disk, ns->ms, ns->pi_type,
+ ns->ctrl->max_integrity_segments);
if ((ns->ms && !(ns->features & NVME_NS_DIF_SUPPORTED) &&
!blk_get_integrity(disk)) ||
ns->lba_shift > PAGE_SHIFT)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 72473902f364..0f8aacff3542 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -229,6 +229,7 @@ struct nvme_ctrl {
u32 page_size;
u32 max_hw_sectors;
u32 max_segments;
+ u32 max_integrity_segments;
u16 crdt[3];
u16 oncs;
u16 oacs;
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 9d307593b94f..af6af197285f 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2588,6 +2588,13 @@ static void nvme_reset_work(struct work_struct *work)
goto out;
}
+ /*
+ * NVMe PCI driver doesn't support Extended LBA format and supports
+ * only a single integrity segment for a separate contiguous buffer
+ * of metadata.
+ */
+ dev->ctrl.max_integrity_segments = 1;
+
result = nvme_init_identify(&dev->ctrl);
if (result)
goto out;
--
2.16.3
* [PATCH 04/16] nvme-fabrics: Allow user enabling metadata/T10-PI support
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (3 preceding siblings ...)
2019-12-02 14:47 ` [PATCH 03/16] nvme: Introduce max_integrity_segments ctrl attribute Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 05/16] nvme: Introduce NVME_INLINE_PROT_SG_CNT Max Gurtovoy
` (11 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
Preparation for adding metadata (T10-PI) support over fabrics. This will
allow end-to-end protection information passthrough and validation for
NVMe over Fabrics.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/host/core.c | 3 ++-
drivers/nvme/host/fabrics.c | 11 +++++++++++
drivers/nvme/host/fabrics.h | 3 +++
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 3c1516fc5288..1ce15bddf390 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1860,7 +1860,8 @@ static int __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
* controller.
*/
if (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED) {
- if (ns->ctrl->ops->flags & NVME_F_FABRICS ||
+ if ((ns->ctrl->ops->flags & NVME_F_FABRICS &&
+ ns->ctrl->opts->pi_enable) ||
!(ns->features & NVME_NS_EXT_LBAS))
ns->features |= NVME_NS_DIX_SUPPORTED;
}
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 74b8818ac9a1..c09230e18980 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -612,6 +612,7 @@ static const match_table_t opt_tokens = {
{ NVMF_OPT_NR_WRITE_QUEUES, "nr_write_queues=%d" },
{ NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" },
{ NVMF_OPT_TOS, "tos=%d" },
+ { NVMF_OPT_PI_ENABLE, "pi_enable" },
{ NVMF_OPT_ERR, NULL }
};
@@ -634,6 +635,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
opts->hdr_digest = false;
opts->data_digest = false;
opts->tos = -1; /* < 0 == use transport default */
+ opts->pi_enable = false;
options = o = kstrdup(buf, GFP_KERNEL);
if (!options)
@@ -867,6 +869,15 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
}
opts->tos = token;
break;
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ case NVMF_OPT_PI_ENABLE:
+ if (opts->discovery_nqn) {
+ pr_debug("Ignoring pi_enable value for discovery controller\n");
+ break;
+ }
+ opts->pi_enable = true;
+ break;
+#endif
default:
pr_warn("unknown parameter or missing value '%s' in ctrl creation request\n",
p);
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index a0ec40ab62ee..773f74873a9e 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -56,6 +56,7 @@ enum {
NVMF_OPT_NR_WRITE_QUEUES = 1 << 17,
NVMF_OPT_NR_POLL_QUEUES = 1 << 18,
NVMF_OPT_TOS = 1 << 19,
+ NVMF_OPT_PI_ENABLE = 1 << 20,
};
/**
@@ -89,6 +90,7 @@ enum {
* @nr_write_queues: number of queues for write I/O
* @nr_poll_queues: number of queues for polling I/O
* @tos: type of service
+ * @pi_enable: Enable metadata (T10-PI) support
*/
struct nvmf_ctrl_options {
unsigned mask;
@@ -111,6 +113,7 @@ struct nvmf_ctrl_options {
unsigned int nr_write_queues;
unsigned int nr_poll_queues;
int tos;
+ bool pi_enable;
};
/*
--
2.16.3
* [PATCH 05/16] nvme: Introduce NVME_INLINE_PROT_SG_CNT
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (4 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 04/16] nvme-fabrics: Allow user enabling metadata/T10-PI support Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 06/16] nvme-rdma: Introduce nvme_rdma_sgl structure Max Gurtovoy
` (10 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
The SGL for PI metadata is usually small, so a single inline SG entry
should cover most cases. The macro will be used to preallocate one SGL
entry for protection information. Preallocation of small inline SGLs
depends on the SG_CHAIN capability, so if the architecture doesn't support
SG_CHAIN, fall back to runtime allocation of the SGL. This patch is a
preparation for adding metadata (T10-PI) support over fabrics.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/host/nvme.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 0f8aacff3542..5c94fcfdd605 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -30,8 +30,10 @@ extern unsigned int admin_timeout;
#ifdef CONFIG_ARCH_NO_SG_CHAIN
#define NVME_INLINE_SG_CNT 0
+#define NVME_INLINE_PROT_SG_CNT 0
#else
#define NVME_INLINE_SG_CNT 2
+#define NVME_INLINE_PROT_SG_CNT 1
#endif
extern struct workqueue_struct *nvme_wq;
--
2.16.3
* [PATCH 06/16] nvme-rdma: Introduce nvme_rdma_sgl structure
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (5 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 05/16] nvme: Introduce NVME_INLINE_PROT_SG_CNT Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support Max Gurtovoy
` (9 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
Remove the first_sgl pointer from struct nvme_rdma_request and use pointer
arithmetic instead. The inline scatterlist, if it exists, is located right
after the nvme_rdma_request. This patch is needed as a preparation for
adding PI support.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/host/rdma.c | 41 ++++++++++++++++++++++++-----------------
1 file changed, 24 insertions(+), 17 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 2a47c6c5007e..eec5a796a8f5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -48,6 +48,11 @@ struct nvme_rdma_qe {
u64 dma;
};
+struct nvme_rdma_sgl {
+ int nents;
+ struct sg_table sg_table;
+};
+
struct nvme_rdma_queue;
struct nvme_rdma_request {
struct nvme_request req;
@@ -58,12 +63,10 @@ struct nvme_rdma_request {
refcount_t ref;
struct ib_sge sge[1 + NVME_RDMA_MAX_INLINE_SEGMENTS];
u32 num_sge;
- int nents;
struct ib_reg_wr reg_wr;
struct ib_cqe reg_cqe;
struct nvme_rdma_queue *queue;
- struct sg_table sg_table;
- struct scatterlist first_sgl[];
+ struct nvme_rdma_sgl data_sgl;
};
enum nvme_rdma_queue_flags {
@@ -1159,8 +1162,9 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
req->mr = NULL;
}
- ib_dma_unmap_sg(ibdev, req->sg_table.sgl, req->nents, rq_dma_dir(rq));
- sg_free_table_chained(&req->sg_table, NVME_INLINE_SG_CNT);
+ ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
+ rq_dma_dir(rq));
+ sg_free_table_chained(&req->data_sgl.sg_table, NVME_INLINE_SG_CNT);
}
static int nvme_rdma_set_sg_null(struct nvme_command *c)
@@ -1179,7 +1183,7 @@ static int nvme_rdma_map_sg_inline(struct nvme_rdma_queue *queue,
int count)
{
struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
- struct scatterlist *sgl = req->sg_table.sgl;
+ struct scatterlist *sgl = req->data_sgl.sg_table.sgl;
struct ib_sge *sge = &req->sge[1];
u32 len = 0;
int i;
@@ -1204,8 +1208,8 @@ static int nvme_rdma_map_sg_single(struct nvme_rdma_queue *queue,
{
struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
- sg->addr = cpu_to_le64(sg_dma_address(req->sg_table.sgl));
- put_unaligned_le24(sg_dma_len(req->sg_table.sgl), sg->length);
+ sg->addr = cpu_to_le64(sg_dma_address(req->data_sgl.sg_table.sgl));
+ put_unaligned_le24(sg_dma_len(req->data_sgl.sg_table.sgl), sg->length);
put_unaligned_le32(queue->device->pd->unsafe_global_rkey, sg->key);
sg->type = NVME_KEY_SGL_FMT_DATA_DESC << 4;
return 0;
@@ -1226,7 +1230,8 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
* Align the MR to a 4K page size to match the ctrl page size and
* the block virtual boundary.
*/
- nr = ib_map_mr_sg(req->mr, req->sg_table.sgl, count, NULL, SZ_4K);
+ nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL,
+ SZ_4K);
if (unlikely(nr < count)) {
ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
req->mr = NULL;
@@ -1273,17 +1278,18 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
if (!blk_rq_nr_phys_segments(rq))
return nvme_rdma_set_sg_null(c);
- req->sg_table.sgl = req->first_sgl;
- ret = sg_alloc_table_chained(&req->sg_table,
- blk_rq_nr_phys_segments(rq), req->sg_table.sgl,
+ req->data_sgl.sg_table.sgl = (struct scatterlist *)(req + 1);
+ ret = sg_alloc_table_chained(&req->data_sgl.sg_table,
+ blk_rq_nr_phys_segments(rq), req->data_sgl.sg_table.sgl,
NVME_INLINE_SG_CNT);
if (ret)
return -ENOMEM;
- req->nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);
+ req->data_sgl.nents = blk_rq_map_sg(rq->q, rq,
+ req->data_sgl.sg_table.sgl);
- count = ib_dma_map_sg(ibdev, req->sg_table.sgl, req->nents,
- rq_dma_dir(rq));
+ count = ib_dma_map_sg(ibdev, req->data_sgl.sg_table.sgl,
+ req->data_sgl.nents, rq_dma_dir(rq));
if (unlikely(count <= 0)) {
ret = -EIO;
goto out_free_table;
@@ -1312,9 +1318,10 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
return 0;
out_unmap_sg:
- ib_dma_unmap_sg(ibdev, req->sg_table.sgl, req->nents, rq_dma_dir(rq));
+ ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
+ rq_dma_dir(rq));
out_free_table:
- sg_free_table_chained(&req->sg_table, NVME_INLINE_SG_CNT);
+ sg_free_table_chained(&req->data_sgl.sg_table, NVME_INLINE_SG_CNT);
return ret;
}
--
2.16.3
* [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (6 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 06/16] nvme-rdma: Introduce nvme_rdma_sgl structure Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2020-01-22 21:57 ` James Smart
2019-12-02 14:48 ` [PATCH 08/16] block: Add a BIP_MAPPED_INTEGRITY check to complete_fn Max Gurtovoy
` (8 subsequent siblings)
16 siblings, 1 reply; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
For capable HCAs (e.g. ConnectX-4/ConnectX-5) this will allow end-to-end
protection information passthrough and validation for the NVMe over RDMA
transport. Metadata offload support is implemented over the new RDMA
signature verbs API and is enabled per controller via nvme-cli.
usage example:
nvme connect --pi_enable --transport=rdma --traddr=10.0.1.1 --nqn=test-nvme
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
---
drivers/nvme/host/rdma.c | 331 ++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 298 insertions(+), 33 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index eec5a796a8f5..6ff790aff54c 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -67,6 +67,9 @@ struct nvme_rdma_request {
struct ib_cqe reg_cqe;
struct nvme_rdma_queue *queue;
struct nvme_rdma_sgl data_sgl;
+ /* Metadata (T10-PI) support */
+ struct nvme_rdma_sgl *pi_sgl;
+ bool use_pi;
};
enum nvme_rdma_queue_flags {
@@ -88,6 +91,7 @@ struct nvme_rdma_queue {
struct rdma_cm_id *cm_id;
int cm_error;
struct completion cm_done;
+ bool pi_support;
};
struct nvme_rdma_ctrl {
@@ -115,6 +119,7 @@ struct nvme_rdma_ctrl {
struct nvme_ctrl ctrl;
bool use_inline_data;
u32 io_queues[HCTX_MAX_TYPES];
+ bool pi_support;
};
static inline struct nvme_rdma_ctrl *to_rdma_ctrl(struct nvme_ctrl *ctrl)
@@ -272,6 +277,8 @@ static int nvme_rdma_create_qp(struct nvme_rdma_queue *queue, const int factor)
init_attr.qp_type = IB_QPT_RC;
init_attr.send_cq = queue->ib_cq;
init_attr.recv_cq = queue->ib_cq;
+ if (queue->pi_support)
+ init_attr.create_flags |= IB_QP_CREATE_INTEGRITY_EN;
ret = rdma_create_qp(queue->cm_id, dev->pd, &init_attr);
@@ -287,6 +294,18 @@ static void nvme_rdma_exit_request(struct blk_mq_tag_set *set,
kfree(req->sqe.data);
}
+static unsigned int nvme_rdma_cmd_size(bool has_pi)
+{
+ /*
+ * nvme rdma command consists of struct nvme_rdma_request, data SGL,
+ * PI nvme_rdma_sgl struct and then PI SGL.
+ */
+ return sizeof(struct nvme_rdma_request) +
+ sizeof(struct scatterlist) * NVME_INLINE_SG_CNT +
+ has_pi * (sizeof(struct nvme_rdma_sgl) +
+ sizeof(struct scatterlist) * NVME_INLINE_PROT_SG_CNT);
+}
+
static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
struct request *rq, unsigned int hctx_idx,
unsigned int numa_node)
@@ -301,6 +320,10 @@ static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
if (!req->sqe.data)
return -ENOMEM;
+ /* PI nvme_rdma_sgl struct is located after command's data SGL */
+ if (queue->pi_support)
+ req->pi_sgl = (void *)nvme_req(rq) + nvme_rdma_cmd_size(false);
+
req->queue = queue;
return 0;
@@ -411,6 +434,8 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
dev = queue->device;
ibdev = dev->dev;
+ if (queue->pi_support)
+ ib_mr_pool_destroy(queue->qp, &queue->qp->sig_mrs);
ib_mr_pool_destroy(queue->qp, &queue->qp->rdma_mrs);
/*
@@ -427,10 +452,13 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
nvme_rdma_dev_put(dev);
}
-static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev)
+static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev, bool pi_support)
{
- return min_t(u32, NVME_RDMA_MAX_SEGMENTS,
- ibdev->attrs.max_fast_reg_page_list_len - 1);
+ u32 max_page_list_len =
+ pi_support ? ibdev->attrs.max_pi_fast_reg_page_list_len :
+ ibdev->attrs.max_fast_reg_page_list_len;
+
+ return min_t(u32, NVME_RDMA_MAX_SEGMENTS, max_page_list_len - 1);
}
static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
@@ -487,7 +515,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
* misaligned we'll end up using two entries for a single data page,
* so one additional entry is required.
*/
- pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev) + 1;
+ pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev, queue->pi_support) + 1;
ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs,
queue->queue_size,
IB_MR_TYPE_MEM_REG,
@@ -499,10 +527,24 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
goto out_destroy_ring;
}
+ if (queue->pi_support) {
+ ret = ib_mr_pool_init(queue->qp, &queue->qp->sig_mrs,
+ queue->queue_size, IB_MR_TYPE_INTEGRITY,
+ pages_per_mr, pages_per_mr);
+ if (ret) {
+ dev_err(queue->ctrl->ctrl.device,
+ "failed to initialize PI MR pool sized %d for QID %d\n",
+ queue->queue_size, idx);
+ goto out_destroy_mr_pool;
+ }
+ }
+
set_bit(NVME_RDMA_Q_TR_READY, &queue->flags);
return 0;
+out_destroy_mr_pool:
+ ib_mr_pool_destroy(queue->qp, &queue->qp->rdma_mrs);
out_destroy_ring:
nvme_rdma_free_ring(ibdev, queue->rsp_ring, queue->queue_size,
sizeof(struct nvme_completion), DMA_FROM_DEVICE);
@@ -524,6 +566,7 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
queue = &ctrl->queues[idx];
queue->ctrl = ctrl;
+ queue->pi_support = idx && ctrl->pi_support;
init_completion(&queue->cm_done);
if (idx > 0)
@@ -733,8 +776,7 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
set->reserved_tags = 2; /* connect + keep-alive */
set->numa_node = nctrl->numa_node;
- set->cmd_size = sizeof(struct nvme_rdma_request) +
- NVME_INLINE_SG_CNT * sizeof(struct scatterlist);
+ set->cmd_size = nvme_rdma_cmd_size(false);
set->driver_data = ctrl;
set->nr_hw_queues = 1;
set->timeout = ADMIN_TIMEOUT;
@@ -747,8 +789,7 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
set->reserved_tags = 1; /* fabric connect */
set->numa_node = nctrl->numa_node;
set->flags = BLK_MQ_F_SHOULD_MERGE;
- set->cmd_size = sizeof(struct nvme_rdma_request) +
- NVME_INLINE_SG_CNT * sizeof(struct scatterlist);
+ set->cmd_size = nvme_rdma_cmd_size(ctrl->pi_support);
set->driver_data = ctrl;
set->nr_hw_queues = nctrl->queue_count - 1;
set->timeout = NVME_IO_TIMEOUT;
@@ -790,7 +831,21 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
ctrl->device = ctrl->queues[0].device;
ctrl->ctrl.numa_node = dev_to_node(ctrl->device->dev->dma_device);
- ctrl->max_fr_pages = nvme_rdma_get_max_fr_pages(ctrl->device->dev);
+ /* T10-PI support */
+ if (ctrl->ctrl.opts->pi_enable) {
+ if (!(ctrl->device->dev->attrs.device_cap_flags &
+ IB_DEVICE_INTEGRITY_HANDOVER)) {
+ dev_warn(ctrl->ctrl.device,
+ "T10-PI requested but not supported on %s, continue without T10-PI\n",
+ ctrl->device->dev->name);
+ ctrl->ctrl.opts->pi_enable = false;
+ } else {
+ ctrl->pi_support = true;
+ }
+ }
+
+ ctrl->max_fr_pages = nvme_rdma_get_max_fr_pages(ctrl->device->dev,
+ ctrl->pi_support);
/*
* Bind the async event SQE DMA mapping to the admin queue lifetime.
@@ -832,6 +887,8 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
ctrl->ctrl.max_segments = ctrl->max_fr_pages;
ctrl->ctrl.max_hw_sectors = ctrl->max_fr_pages << (ilog2(SZ_4K) - 9);
+ if (ctrl->pi_support)
+ ctrl->ctrl.max_integrity_segments = ctrl->max_fr_pages;
blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
@@ -1157,8 +1214,19 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
if (!blk_rq_nr_phys_segments(rq))
return;
+ if (blk_integrity_rq(rq)) {
+ ib_dma_unmap_sg(ibdev, req->pi_sgl->sg_table.sgl,
+ req->pi_sgl->nents, rq_dma_dir(rq));
+ sg_free_table_chained(&req->pi_sgl->sg_table,
+ NVME_INLINE_PROT_SG_CNT);
+ }
+
if (req->mr) {
- ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
+ if (req->use_pi)
+ ib_mr_pool_put(queue->qp, &queue->qp->sig_mrs, req->mr);
+ else
+ ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs,
+ req->mr);
req->mr = NULL;
}
@@ -1215,34 +1283,115 @@ static int nvme_rdma_map_sg_single(struct nvme_rdma_queue *queue,
return 0;
}
-static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
- struct nvme_rdma_request *req, struct nvme_command *c,
- int count)
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static void nvme_rdma_set_diff_domain(struct nvme_command *cmd, struct bio *bio,
+ struct ib_sig_domain *domain, struct request *rq)
{
- struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
- int nr;
+ struct blk_integrity *bi = blk_get_integrity(bio->bi_disk);
+ struct nvme_ns *ns = rq->rq_disk->private_data;
+
+ WARN_ON(bi == NULL);
- req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
- if (WARN_ON_ONCE(!req->mr))
- return -EAGAIN;
+ domain->sig_type = IB_SIG_TYPE_T10_DIF;
+ domain->sig.dif.bg_type = IB_T10DIF_CRC;
+ domain->sig.dif.pi_interval = 1 << bi->interval_exp;
+ domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
/*
- * Align the MR to a 4K page size to match the ctrl page size and
- * the block virtual boundary.
+ * At the moment we hard code those, but in the future
+ * we will take them from cmd.
*/
- nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL,
- SZ_4K);
- if (unlikely(nr < count)) {
- ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
- req->mr = NULL;
- if (nr < 0)
- return nr;
- return -EINVAL;
+ domain->sig.dif.apptag_check_mask = 0xffff;
+ domain->sig.dif.app_escape = true;
+ domain->sig.dif.ref_escape = true;
+ if (ns->pi_type != NVME_NS_DPS_PI_TYPE3)
+ domain->sig.dif.ref_remap = true;
+}
+
+static void nvme_rdma_set_sig_attrs(struct bio *bio,
+ struct ib_sig_attrs *sig_attrs, struct nvme_command *c,
+ struct request *rq)
+{
+ u16 control = le16_to_cpu(c->rw.control);
+
+ memset(sig_attrs, 0, sizeof(*sig_attrs));
+
+ if (control & NVME_RW_PRINFO_PRACT) {
+ /* for WRITE_INSERT/READ_STRIP no memory domain */
+ sig_attrs->mem.sig_type = IB_SIG_TYPE_NONE;
+ nvme_rdma_set_diff_domain(c, bio, &sig_attrs->wire, rq);
+ /* Clear the PRACT bit since HCA will generate/verify the PI */
+ control &= ~NVME_RW_PRINFO_PRACT;
+ c->rw.control = cpu_to_le16(control);
+ } else {
+ /* for WRITE_PASS/READ_PASS both wire/memory domains exist */
+ nvme_rdma_set_diff_domain(c, bio, &sig_attrs->wire, rq);
+ nvme_rdma_set_diff_domain(c, bio, &sig_attrs->mem, rq);
}
+}
+
+static void nvme_rdma_set_prot_checks(struct nvme_command *cmd, u8 *mask)
+{
+ *mask = 0;
+ if (le16_to_cpu(cmd->rw.control) & NVME_RW_PRINFO_PRCHK_REF)
+ *mask |= IB_SIG_CHECK_REFTAG;
+ if (le16_to_cpu(cmd->rw.control) & NVME_RW_PRINFO_PRCHK_GUARD)
+ *mask |= IB_SIG_CHECK_GUARD;
+}
+
+static void nvme_rdma_sig_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+ if (unlikely(wc->status != IB_WC_SUCCESS))
+ nvme_rdma_wr_error(cq, wc, "SIG");
+}
+
+static void nvme_rdma_set_pi_wr(struct nvme_rdma_request *req,
+ struct nvme_command *c)
+{
+ struct ib_sig_attrs *sig_attrs = req->mr->sig_attrs;
+ struct ib_reg_wr *wr = &req->reg_wr;
+ struct request *rq = blk_mq_rq_from_pdu(req);
+ struct bio *bio = rq->bio;
+ struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
+
+ nvme_rdma_set_sig_attrs(bio, sig_attrs, c, rq);
+
+ nvme_rdma_set_prot_checks(c, &sig_attrs->check_mask);
ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
+ req->reg_cqe.done = nvme_rdma_sig_done;
+
+ memset(wr, 0, sizeof(*wr));
+ wr->wr.opcode = IB_WR_REG_MR_INTEGRITY;
+ wr->wr.wr_cqe = &req->reg_cqe;
+ wr->wr.num_sge = 0;
+ wr->wr.send_flags = 0;
+ wr->mr = req->mr;
+ wr->key = req->mr->rkey;
+ wr->access = IB_ACCESS_LOCAL_WRITE |
+ IB_ACCESS_REMOTE_READ |
+ IB_ACCESS_REMOTE_WRITE;
+
+ sg->addr = cpu_to_le64(req->mr->iova);
+ put_unaligned_le24(req->mr->length, sg->length);
+ put_unaligned_le32(req->mr->rkey, sg->key);
+ sg->type = (NVME_KEY_SGL_FMT_DATA_DESC << 4) | NVME_SGL_FMT_INVALIDATE;
+}
+#else
+static void nvme_rdma_set_pi_wr(struct nvme_rdma_request *req,
+ struct nvme_command *c)
+{
+}
+#endif /* CONFIG_BLK_DEV_INTEGRITY */
+
+static void nvme_rdma_set_reg_wr(struct nvme_rdma_request *req,
+ struct nvme_command *c)
+{
+ struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
+
req->reg_cqe.done = nvme_rdma_memreg_done;
+
memset(&req->reg_wr, 0, sizeof(req->reg_wr));
req->reg_wr.wr.opcode = IB_WR_REG_MR;
req->reg_wr.wr.wr_cqe = &req->reg_cqe;
@@ -1258,8 +1407,52 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
put_unaligned_le32(req->mr->rkey, sg->key);
sg->type = (NVME_KEY_SGL_FMT_DATA_DESC << 4) |
NVME_SGL_FMT_INVALIDATE;
+}
+
+static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
+ struct nvme_rdma_request *req, struct nvme_command *c,
+ int count, int pi_count)
+{
+ struct nvme_rdma_sgl *sgl = &req->data_sgl;
+ int nr;
+
+ if (req->use_pi) {
+ req->mr = ib_mr_pool_get(queue->qp, &queue->qp->sig_mrs);
+ if (WARN_ON_ONCE(!req->mr))
+ return -EAGAIN;
+
+ nr = ib_map_mr_sg_pi(req->mr, sgl->sg_table.sgl, count, 0,
+ req->pi_sgl->sg_table.sgl, pi_count, 0,
+ SZ_4K);
+ if (unlikely(nr))
+ goto mr_put;
+
+ nvme_rdma_set_pi_wr(req, c);
+ } else {
+ req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
+ if (WARN_ON_ONCE(!req->mr))
+ return -EAGAIN;
+ /*
+ * Align the MR to a 4K page size to match the ctrl page size
+ * and the block virtual boundary.
+ */
+ nr = ib_map_mr_sg(req->mr, sgl->sg_table.sgl, count, 0, SZ_4K);
+ if (unlikely(nr < count))
+ goto mr_put;
+
+ nvme_rdma_set_reg_wr(req, c);
+ }
return 0;
+mr_put:
+ if (req->use_pi)
+ ib_mr_pool_put(queue->qp, &queue->qp->sig_mrs, req->mr);
+ else
+ ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
+ req->mr = NULL;
+ if (nr < 0)
+ return nr;
+ return -EINVAL;
}
static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
@@ -1268,6 +1461,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
struct nvme_rdma_device *dev = queue->device;
struct ib_device *ibdev = dev->dev;
+ int pi_count = 0;
int count, ret;
req->num_sge = 1;
@@ -1295,7 +1489,30 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
goto out_free_table;
}
- if (count <= dev->num_inline_segments) {
+ if (blk_integrity_rq(rq)) {
+ memset(req->pi_sgl, 0, sizeof(struct nvme_rdma_sgl));
+ req->pi_sgl->sg_table.sgl =
+ (struct scatterlist *)(req->pi_sgl + 1);
+ ret = sg_alloc_table_chained(&req->pi_sgl->sg_table,
+ blk_rq_count_integrity_sg(rq->q, rq->bio),
+ req->pi_sgl->sg_table.sgl,
+ NVME_INLINE_PROT_SG_CNT);
+ if (unlikely(ret)) {
+ ret = -ENOMEM;
+ goto out_unmap_sg;
+ }
+
+ req->pi_sgl->nents = blk_rq_map_integrity_sg(rq->q, rq->bio,
+ req->pi_sgl->sg_table.sgl);
+ pi_count = ib_dma_map_sg(ibdev, req->pi_sgl->sg_table.sgl,
+ req->pi_sgl->nents, rq_dma_dir(rq));
+ if (unlikely(pi_count <= 0)) {
+ ret = -EIO;
+ goto out_free_pi_table;
+ }
+ }
+
+ if (count <= dev->num_inline_segments && !req->use_pi) {
if (rq_data_dir(rq) == WRITE && nvme_rdma_queue_idx(queue) &&
queue->ctrl->use_inline_data &&
blk_rq_payload_bytes(rq) <=
@@ -1310,13 +1527,21 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
}
}
- ret = nvme_rdma_map_sg_fr(queue, req, c, count);
+ ret = nvme_rdma_map_sg_fr(queue, req, c, count, pi_count);
out:
if (unlikely(ret))
- goto out_unmap_sg;
+ goto out_unmap_pi_sg;
return 0;
+out_unmap_pi_sg:
+ if (blk_integrity_rq(rq))
+ ib_dma_unmap_sg(ibdev, req->pi_sgl->sg_table.sgl,
+ req->pi_sgl->nents, rq_dma_dir(rq));
+out_free_pi_table:
+ if (blk_integrity_rq(rq))
+ sg_free_table_chained(&req->pi_sgl->sg_table,
+ NVME_INLINE_PROT_SG_CNT);
out_unmap_sg:
ib_dma_unmap_sg(ibdev, req->data_sgl.sg_table.sgl, req->data_sgl.nents,
rq_dma_dir(rq));
@@ -1769,6 +1994,8 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
blk_mq_start_request(rq);
+ req->use_pi = queue->pi_support && nvme_req(rq)->has_pi;
+
err = nvme_rdma_map_data(queue, rq, c);
if (unlikely(err < 0)) {
dev_err(queue->ctrl->ctrl.device,
@@ -1809,12 +2036,50 @@ static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx)
return ib_process_cq_direct(queue->ib_cq, -1);
}
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static void nvme_rdma_check_pi_status(struct nvme_rdma_request *req)
+{
+ struct request *rq = blk_mq_rq_from_pdu(req);
+ struct ib_mr_status mr_status;
+ int ret;
+
+ ret = ib_check_mr_status(req->mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
+ if (ret) {
+ pr_err("ib_check_mr_status failed, ret %d\n", ret);
+ nvme_req(rq)->status = NVME_SC_INVALID_PI;
+ return;
+ }
+
+ if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
+ switch (mr_status.sig_err.err_type) {
+ case IB_SIG_BAD_GUARD:
+ nvme_req(rq)->status = NVME_SC_GUARD_CHECK;
+ break;
+ case IB_SIG_BAD_REFTAG:
+ nvme_req(rq)->status = NVME_SC_REFTAG_CHECK;
+ break;
+ case IB_SIG_BAD_APPTAG:
+ nvme_req(rq)->status = NVME_SC_APPTAG_CHECK;
+ break;
+ }
+ pr_err("PI error found type %d expected 0x%x vs actual 0x%x\n",
+ mr_status.sig_err.err_type, mr_status.sig_err.expected,
+ mr_status.sig_err.actual);
+ }
+}
+#endif
+
static void nvme_rdma_complete_rq(struct request *rq)
{
struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
struct nvme_rdma_queue *queue = req->queue;
struct ib_device *ibdev = queue->device->dev;
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ if (req->use_pi)
+ nvme_rdma_check_pi_status(req);
+#endif
+
nvme_rdma_unmap_data(queue, rq);
ib_dma_unmap_single(ibdev, req->sqe.dma, sizeof(struct nvme_command),
DMA_TO_DEVICE);
@@ -1934,7 +2199,7 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
.name = "rdma",
.module = THIS_MODULE,
- .flags = NVME_F_FABRICS,
+ .flags = NVME_F_FABRICS | NVME_F_METADATA_SUPPORTED,
.reg_read32 = nvmf_reg_read32,
.reg_read64 = nvmf_reg_read64,
.reg_write32 = nvmf_reg_write32,
@@ -2078,7 +2343,7 @@ static struct nvmf_transport_ops nvme_rdma_transport = {
.allowed_opts = NVMF_OPT_TRSVCID | NVMF_OPT_RECONNECT_DELAY |
NVMF_OPT_HOST_TRADDR | NVMF_OPT_CTRL_LOSS_TMO |
NVMF_OPT_NR_WRITE_QUEUES | NVMF_OPT_NR_POLL_QUEUES |
- NVMF_OPT_TOS,
+ NVMF_OPT_TOS | NVMF_OPT_PI_ENABLE,
.create_ctrl = nvme_rdma_create_ctrl,
};
--
2.16.3
* [PATCH 08/16] block: Add a BIP_MAPPED_INTEGRITY check to complete_fn
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (7 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 09/16] nvmet: Prepare metadata request Max Gurtovoy
` (7 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
This check is needed in case some other layer has already done the reftag
remapping (e.g. in NVMe/RDMA the initiator controller performs the
remapping, so the target side shouldn't remap it again).
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
block/t10-pi.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/block/t10-pi.c b/block/t10-pi.c
index f4907d941f03..92f86f00bcad 100644
--- a/block/t10-pi.c
+++ b/block/t10-pi.c
@@ -193,6 +193,10 @@ static void t10_pi_type1_complete(struct request *rq, unsigned int nr_bytes)
struct bio_vec iv;
struct bvec_iter iter;
+ /* Already remapped? */
+ if (bip->bip_flags & BIP_MAPPED_INTEGRITY)
+ break;
+
bip_for_each_vec(iv, bip, iter) {
void *p, *pmap;
unsigned int j;
--
2.16.3
* [PATCH 09/16] nvmet: Prepare metadata request
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (8 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 08/16] block: Add a BIP_MAPPED_INTEGRITY check to complete_fn Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 10/16] nvmet: Add metadata characteristics for a namespace Max Gurtovoy
` (6 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
Allocate the metadata SGL buffers and add metadata fields to the
request. This is preparation for adding metadata support over the
fabrics.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/target/core.c | 48 ++++++++++++++++++++++++++++++++++++++-------
drivers/nvme/target/nvmet.h | 5 +++++
2 files changed, 46 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 28438b833c1b..ab3ffc6f5a4a 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -863,13 +863,17 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
req->sq = sq;
req->ops = ops;
req->sg = NULL;
+ req->prot_sg = NULL;
req->sg_cnt = 0;
+ req->prot_sg_cnt = 0;
req->transfer_len = 0;
+ req->prot_len = 0;
req->cqe->status = 0;
req->cqe->sq_head = 0;
req->ns = NULL;
req->error_loc = NVMET_NO_ERROR_LOC;
req->error_slba = 0;
+ req->use_pi = false;
trace_nvmet_req_init(req, req->cmd);
@@ -941,6 +945,7 @@ EXPORT_SYMBOL_GPL(nvmet_check_data_len);
int nvmet_req_alloc_sgl(struct nvmet_req *req)
{
struct pci_dev *p2p_dev = NULL;
+ int data_len = req->transfer_len - req->prot_len;
if (IS_ENABLED(CONFIG_PCI_P2PDMA)) {
if (req->sq->ctrl && req->ns)
@@ -950,11 +955,23 @@ int nvmet_req_alloc_sgl(struct nvmet_req *req)
req->p2p_dev = NULL;
if (req->sq->qid && p2p_dev) {
req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt,
- req->transfer_len);
- if (req->sg) {
- req->p2p_dev = p2p_dev;
- return 0;
+ data_len);
+ if (!req->sg)
+ goto fallback;
+
+ if (req->prot_len) {
+ req->prot_sg =
+ pci_p2pmem_alloc_sgl(p2p_dev,
+ &req->prot_sg_cnt,
+ req->prot_len);
+ if (!req->prot_sg) {
+ pci_p2pmem_free_sgl(req->p2p_dev,
+ req->sg);
+ goto fallback;
+ }
}
+ req->p2p_dev = p2p_dev;
+ return 0;
}
/*
@@ -963,23 +980,40 @@ int nvmet_req_alloc_sgl(struct nvmet_req *req)
*/
}
- req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
+fallback:
+ req->sg = sgl_alloc(data_len, GFP_KERNEL, &req->sg_cnt);
if (unlikely(!req->sg))
return -ENOMEM;
+ if (req->prot_len) {
+ req->prot_sg = sgl_alloc(req->prot_len, GFP_KERNEL,
+ &req->prot_sg_cnt);
+ if (unlikely(!req->prot_sg)) {
+ sgl_free(req->sg);
+ return -ENOMEM;
+ }
+ }
+
return 0;
}
EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl);
void nvmet_req_free_sgl(struct nvmet_req *req)
{
- if (req->p2p_dev)
+ if (req->p2p_dev) {
pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
- else
+ if (req->prot_sg)
+ pci_p2pmem_free_sgl(req->p2p_dev, req->prot_sg);
+ } else {
sgl_free(req->sg);
+ if (req->prot_sg)
+ sgl_free(req->prot_sg);
+ }
req->sg = NULL;
+ req->prot_sg = NULL;
req->sg_cnt = 0;
+ req->prot_sg_cnt = 0;
}
EXPORT_SYMBOL_GPL(nvmet_req_free_sgl);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 46df45e837c9..60011f3a50a7 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -291,6 +291,7 @@ struct nvmet_req {
struct nvmet_cq *cq;
struct nvmet_ns *ns;
struct scatterlist *sg;
+ struct scatterlist *prot_sg;
struct bio_vec inline_bvec[NVMET_MAX_INLINE_BIOVEC];
union {
struct {
@@ -304,8 +305,10 @@ struct nvmet_req {
} f;
};
int sg_cnt;
+ int prot_sg_cnt;
/* data length as parsed from the SGL descriptor: */
size_t transfer_len;
+ size_t prot_len;
struct nvmet_port *port;
@@ -316,6 +319,8 @@ struct nvmet_req {
struct device *p2p_client;
u16 error_loc;
u64 error_slba;
+ /* Metadata (T10-PI) support */
+ bool use_pi;
};
extern struct workqueue_struct *buffered_io_wq;
--
2.16.3
* [PATCH 10/16] nvmet: Add metadata characteristics for a namespace
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (9 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 09/16] nvmet: Prepare metadata request Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 11/16] nvmet: Rename nvmet_rw_len to nvmet_rw_data_len Max Gurtovoy
` (5 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
This is a preparation for adding metadata (T10-PI) over fabric support.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/target/io-cmd-bdev.c | 22 ++++++++++++++++++++++
drivers/nvme/target/nvmet.h | 3 +++
2 files changed, 25 insertions(+)
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index b6fca0e421ef..fb400221146e 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -50,6 +50,9 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id)
int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
{
int ret;
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ struct blk_integrity *bi;
+#endif
ns->bdev = blkdev_get_by_path(ns->device_path,
FMODE_READ | FMODE_WRITE, NULL);
@@ -64,6 +67,25 @@ int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
}
ns->size = i_size_read(ns->bdev->bd_inode);
ns->blksize_shift = blksize_bits(bdev_logical_block_size(ns->bdev));
+
+ ns->prot_type = 0;
+ ns->ms = 0;
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ bi = bdev_get_integrity(ns->bdev);
+ if (bi) {
+ ns->ms = bi->tuple_size;
+ if (bi->profile == &t10_pi_type1_crc)
+ ns->prot_type = NVME_NS_DPS_PI_TYPE1;
+ else if (bi->profile == &t10_pi_type3_crc)
+ ns->prot_type = NVME_NS_DPS_PI_TYPE3;
+ else
+ /* Unsupported metadata type */
+ ns->ms = 0;
+ }
+
+ pr_debug("ms %d pi_type %d\n", ns->ms, ns->prot_type);
+#endif
+
return 0;
}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 60011f3a50a7..89e0174ec98f 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -19,6 +19,7 @@
#include <linux/rcupdate.h>
#include <linux/blkdev.h>
#include <linux/radix-tree.h>
+#include <linux/t10-pi.h>
#define NVMET_ASYNC_EVENTS 4
#define NVMET_ERROR_LOG_SLOTS 128
@@ -76,6 +77,8 @@ struct nvmet_ns {
int use_p2pmem;
struct pci_dev *p2p_dev;
+ int prot_type;
+ int ms;
};
static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
--
2.16.3
* [PATCH 11/16] nvmet: Rename nvmet_rw_len to nvmet_rw_data_len
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (10 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 10/16] nvmet: Add metadata characteristics for a namespace Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 12/16] nvmet: Rename nvmet_check_data_len to nvmet_check_transfer_len Max Gurtovoy
` (4 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
The function doesn't add the metadata length (only the data length is
calculated). This is preparation for adding metadata (T10-PI) support.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/target/io-cmd-bdev.c | 2 +-
drivers/nvme/target/io-cmd-file.c | 2 +-
drivers/nvme/target/nvmet.h | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index fb400221146e..190cdc89b560 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -173,7 +173,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
sector_t sector;
int op, i;
- if (!nvmet_check_data_len(req, nvmet_rw_len(req)))
+ if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
return;
if (!req->sg_cnt) {
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index caebfce06605..e0a4859e9735 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -232,7 +232,7 @@ static void nvmet_file_execute_rw(struct nvmet_req *req)
{
ssize_t nr_bvec = req->sg_cnt;
- if (!nvmet_check_data_len(req, nvmet_rw_len(req)))
+ if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
return;
if (!req->sg_cnt || !nr_bvec) {
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 89e0174ec98f..8e6f11f54e76 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -495,7 +495,7 @@ u16 nvmet_bdev_flush(struct nvmet_req *req);
u16 nvmet_file_flush(struct nvmet_req *req);
void nvmet_ns_changed(struct nvmet_subsys *subsys, u32 nsid);
-static inline u32 nvmet_rw_len(struct nvmet_req *req)
+static inline u32 nvmet_rw_data_len(struct nvmet_req *req)
{
return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) <<
req->ns->blksize_shift;
--
2.16.3
* [PATCH 12/16] nvmet: Rename nvmet_check_data_len to nvmet_check_transfer_len
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (11 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 11/16] nvmet: Rename nvmet_rw_len to nvmet_rw_data_len Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 13/16] nvme: Add Metadata Capabilities enumerations Max Gurtovoy
` (3 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
The function checks the transfer length rather than only the data length,
since the transfer length also includes the metadata length in some
cases. This is preparation for adding metadata (T10-PI) support.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/target/admin-cmd.c | 14 +++++++-------
drivers/nvme/target/core.c | 6 +++---
drivers/nvme/target/discovery.c | 8 ++++----
drivers/nvme/target/fabrics-cmd.c | 8 ++++----
drivers/nvme/target/io-cmd-bdev.c | 8 ++++----
drivers/nvme/target/io-cmd-file.c | 8 ++++----
drivers/nvme/target/nvmet.h | 2 +-
7 files changed, 27 insertions(+), 27 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 56c21b501185..19ae1552e54a 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -284,7 +284,7 @@ static void nvmet_execute_get_log_page_ana(struct nvmet_req *req)
static void nvmet_execute_get_log_page(struct nvmet_req *req)
{
- if (!nvmet_check_data_len(req, nvmet_get_log_page_len(req->cmd)))
+ if (!nvmet_check_transfer_len(req, nvmet_get_log_page_len(req->cmd)))
return;
switch (req->cmd->get_log_page.lid) {
@@ -597,7 +597,7 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req)
static void nvmet_execute_identify(struct nvmet_req *req)
{
- if (!nvmet_check_data_len(req, NVME_IDENTIFY_DATA_SIZE))
+ if (!nvmet_check_transfer_len(req, NVME_IDENTIFY_DATA_SIZE))
return;
switch (req->cmd->identify.cns) {
@@ -626,7 +626,7 @@ static void nvmet_execute_identify(struct nvmet_req *req)
*/
static void nvmet_execute_abort(struct nvmet_req *req)
{
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
nvmet_set_result(req, 1);
nvmet_req_complete(req, 0);
@@ -712,7 +712,7 @@ static void nvmet_execute_set_features(struct nvmet_req *req)
u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
u16 status = 0;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
switch (cdw10 & 0xff) {
@@ -778,7 +778,7 @@ static void nvmet_execute_get_features(struct nvmet_req *req)
u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
u16 status = 0;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
switch (cdw10 & 0xff) {
@@ -845,7 +845,7 @@ void nvmet_execute_async_event(struct nvmet_req *req)
{
struct nvmet_ctrl *ctrl = req->sq->ctrl;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
mutex_lock(&ctrl->lock);
@@ -864,7 +864,7 @@ void nvmet_execute_keep_alive(struct nvmet_req *req)
{
struct nvmet_ctrl *ctrl = req->sq->ctrl;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
pr_debug("ctrl %d update keep-alive timer for %d secs\n",
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index ab3ffc6f5a4a..092c06fdc524 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -930,9 +930,9 @@ void nvmet_req_uninit(struct nvmet_req *req)
}
EXPORT_SYMBOL_GPL(nvmet_req_uninit);
-bool nvmet_check_data_len(struct nvmet_req *req, size_t data_len)
+bool nvmet_check_transfer_len(struct nvmet_req *req, size_t len)
{
- if (unlikely(data_len != req->transfer_len)) {
+ if (unlikely(len != req->transfer_len)) {
req->error_loc = offsetof(struct nvme_common_command, dptr);
nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR);
return false;
@@ -940,7 +940,7 @@ bool nvmet_check_data_len(struct nvmet_req *req, size_t data_len)
return true;
}
-EXPORT_SYMBOL_GPL(nvmet_check_data_len);
+EXPORT_SYMBOL_GPL(nvmet_check_transfer_len);
int nvmet_req_alloc_sgl(struct nvmet_req *req)
{
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index 0c2274b21e15..40cf0b6e6c9d 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -171,7 +171,7 @@ static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
u16 status = 0;
void *buffer;
- if (!nvmet_check_data_len(req, data_len))
+ if (!nvmet_check_transfer_len(req, data_len))
return;
if (req->cmd->get_log_page.lid != NVME_LOG_DISC) {
@@ -244,7 +244,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
const char model[] = "Linux";
u16 status = 0;
- if (!nvmet_check_data_len(req, NVME_IDENTIFY_DATA_SIZE))
+ if (!nvmet_check_transfer_len(req, NVME_IDENTIFY_DATA_SIZE))
return;
if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
@@ -298,7 +298,7 @@ static void nvmet_execute_disc_set_features(struct nvmet_req *req)
u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
u16 stat;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
switch (cdw10 & 0xff) {
@@ -324,7 +324,7 @@ static void nvmet_execute_disc_get_features(struct nvmet_req *req)
u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
u16 stat = 0;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
switch (cdw10 & 0xff) {
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index f7297473d9eb..ee41b2bb359e 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -12,7 +12,7 @@ static void nvmet_execute_prop_set(struct nvmet_req *req)
u64 val = le64_to_cpu(req->cmd->prop_set.value);
u16 status = 0;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
if (req->cmd->prop_set.attrib & 1) {
@@ -41,7 +41,7 @@ static void nvmet_execute_prop_get(struct nvmet_req *req)
u16 status = 0;
u64 val = 0;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
if (req->cmd->prop_get.attrib & 1) {
@@ -151,7 +151,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
struct nvmet_ctrl *ctrl = NULL;
u16 status = 0;
- if (!nvmet_check_data_len(req, sizeof(struct nvmf_connect_data)))
+ if (!nvmet_check_transfer_len(req, sizeof(struct nvmf_connect_data)))
return;
d = kmalloc(sizeof(*d), GFP_KERNEL);
@@ -218,7 +218,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
u16 qid = le16_to_cpu(c->qid);
u16 status = 0;
- if (!nvmet_check_data_len(req, sizeof(struct nvmf_connect_data)))
+ if (!nvmet_check_transfer_len(req, sizeof(struct nvmf_connect_data)))
return;
d = kmalloc(sizeof(*d), GFP_KERNEL);
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 190cdc89b560..9f99b9d47d1e 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -173,7 +173,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
sector_t sector;
int op, i;
- if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
+ if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
return;
if (!req->sg_cnt) {
@@ -234,7 +234,7 @@ static void nvmet_bdev_execute_flush(struct nvmet_req *req)
{
struct bio *bio = &req->b.inline_bio;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
@@ -302,7 +302,7 @@ static void nvmet_bdev_execute_discard(struct nvmet_req *req)
static void nvmet_bdev_execute_dsm(struct nvmet_req *req)
{
- if (!nvmet_check_data_len(req, nvmet_dsm_len(req)))
+ if (!nvmet_check_transfer_len(req, nvmet_dsm_len(req)))
return;
switch (le32_to_cpu(req->cmd->dsm.attributes)) {
@@ -326,7 +326,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
sector_t nr_sector;
int ret;
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
sector = le64_to_cpu(write_zeroes->slba) <<
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index e0a4859e9735..1492f551c5d7 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -232,7 +232,7 @@ static void nvmet_file_execute_rw(struct nvmet_req *req)
{
ssize_t nr_bvec = req->sg_cnt;
- if (!nvmet_check_data_len(req, nvmet_rw_data_len(req)))
+ if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
return;
if (!req->sg_cnt || !nr_bvec) {
@@ -276,7 +276,7 @@ static void nvmet_file_flush_work(struct work_struct *w)
static void nvmet_file_execute_flush(struct nvmet_req *req)
{
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
INIT_WORK(&req->f.work, nvmet_file_flush_work);
schedule_work(&req->f.work);
@@ -336,7 +336,7 @@ static void nvmet_file_dsm_work(struct work_struct *w)
static void nvmet_file_execute_dsm(struct nvmet_req *req)
{
- if (!nvmet_check_data_len(req, nvmet_dsm_len(req)))
+ if (!nvmet_check_transfer_len(req, nvmet_dsm_len(req)))
return;
INIT_WORK(&req->f.work, nvmet_file_dsm_work);
schedule_work(&req->f.work);
@@ -366,7 +366,7 @@ static void nvmet_file_write_zeroes_work(struct work_struct *w)
static void nvmet_file_execute_write_zeroes(struct nvmet_req *req)
{
- if (!nvmet_check_data_len(req, 0))
+ if (!nvmet_check_transfer_len(req, 0))
return;
INIT_WORK(&req->f.work, nvmet_file_write_zeroes_work);
schedule_work(&req->f.work);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 8e6f11f54e76..c8b7095a0ded 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -381,7 +381,7 @@ u16 nvmet_parse_fabrics_cmd(struct nvmet_req *req);
bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
struct nvmet_sq *sq, const struct nvmet_fabrics_ops *ops);
void nvmet_req_uninit(struct nvmet_req *req);
-bool nvmet_check_data_len(struct nvmet_req *req, size_t data_len);
+bool nvmet_check_transfer_len(struct nvmet_req *req, size_t len);
void nvmet_req_complete(struct nvmet_req *req, u16 status);
int nvmet_req_alloc_sgl(struct nvmet_req *req);
void nvmet_req_free_sgl(struct nvmet_req *req);
--
2.16.3
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 13/16] nvme: Add Metadata Capabilities enumerations
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (12 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 12/16] nvmet: Rename nvmet_check_data_len to nvmet_check_transfer_len Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 14/16] nvmet: Add metadata/T10-PI support Max Gurtovoy
` (2 subsequent siblings)
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
This is preparation for adding metadata (T10-PI) support over fabrics.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
include/linux/nvme.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 3d5189f46cb1..4e968d338d87 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -396,6 +396,8 @@ enum {
NVME_NS_FEAT_THIN = 1 << 0,
NVME_NS_FLBAS_LBA_MASK = 0xf,
NVME_NS_FLBAS_META_EXT = 0x10,
+ NVME_NS_MC_META_EXT = 1 << 0,
+ NVME_NS_MC_META_BUF = 1 << 1,
NVME_LBAF_RP_BEST = 0,
NVME_LBAF_RP_BETTER = 1,
NVME_LBAF_RP_GOOD = 2,
--
2.16.3
* [PATCH 14/16] nvmet: Add metadata/T10-PI support
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (13 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 13/16] nvme: Add Metadata Capabilities enumerations Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 15/16] nvmet: Add metadata support for block devices Max Gurtovoy
2019-12-02 14:48 ` [PATCH 16/16] nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
Expose the namespace metadata format when PI is enabled. The user needs
to enable the capability per subsystem and per port. The other metadata
properties are taken from the namespace/bdev.
Usage example:
echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/target/admin-cmd.c | 23 ++++++++++++---
drivers/nvme/target/configfs.c | 61 +++++++++++++++++++++++++++++++++++++++
drivers/nvme/target/fabrics-cmd.c | 11 +++++++
drivers/nvme/target/nvmet.h | 28 ++++++++++++++++++
4 files changed, 119 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 19ae1552e54a..20e7f08c8b44 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -346,8 +346,8 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
/* we support multiple ports, multiples hosts and ANA: */
id->cmic = (1 << 0) | (1 << 1) | (1 << 3);
- /* no limit on data transfer sizes for now */
- id->mdts = 0;
+ /* Limit MDTS for metadata capable controllers (assume mpsmin is 4k) */
+ id->mdts = ctrl->pi_support ? ilog2(NVMET_T10_PI_MAX_KB_SZ >> 2) : 0;
id->cntlid = cpu_to_le16(ctrl->cntlid);
id->ver = cpu_to_le32(ctrl->subsys->ver);
@@ -405,9 +405,13 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
strlcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
- /* Max command capsule size is sqe + single page of in-capsule data */
+ /*
+ * Max command capsule size is sqe + single page of in-capsule data.
+ * Disable inline data for Metadata capable controllers.
+ */
id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
- req->port->inline_data_size) / 16);
+ req->port->inline_data_size *
+ !ctrl->pi_support) / 16);
/* Max response capsule size is cqe */
id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
@@ -437,6 +441,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
static void nvmet_execute_identify_ns(struct nvmet_req *req)
{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
struct nvmet_ns *ns;
struct nvme_id_ns *id;
u16 status = 0;
@@ -493,6 +498,16 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
id->lbaf[0].ds = ns->blksize_shift;
+ if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
+ id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST |
+ NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 |
+ NVME_NS_DPC_PI_TYPE3;
+ id->mc = NVME_NS_MC_META_EXT;
+ id->dps = ns->prot_type;
+ id->flbas = NVME_NS_FLBAS_META_EXT;
+ id->lbaf[0].ms = ns->ms;
+ }
+
if (ns->readonly)
id->nsattr |= (1 << 0);
nvmet_put_namespace(ns);
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 98613a45bd3b..754302532567 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -248,6 +248,36 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
CONFIGFS_ATTR(nvmet_, param_inline_data_size);
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return snprintf(page, PAGE_SIZE, "%d\n", port->pi_enable);
+}
+
+static ssize_t nvmet_param_pi_enable_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+ bool val;
+
+ if (strtobool(page, &val))
+ return -EINVAL;
+
+ if (port->enabled) {
+ pr_err("Disable port before setting pi_enable value.\n");
+ return -EACCES;
+ }
+
+ port->pi_enable = val;
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_pi_enable);
+#endif
+
static ssize_t nvmet_addr_trtype_show(struct config_item *item,
char *page)
{
@@ -862,10 +892,38 @@ static ssize_t nvmet_subsys_attr_serial_store(struct config_item *item,
}
CONFIGFS_ATTR(nvmet_subsys_, attr_serial);
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_subsys_attr_pi_enable_show(struct config_item *item,
+ char *page)
+{
+ return snprintf(page, PAGE_SIZE, "%d\n", to_subsys(item)->pi_support);
+}
+
+static ssize_t nvmet_subsys_attr_pi_enable_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_subsys *subsys = to_subsys(item);
+ bool pi_enable;
+
+ if (strtobool(page, &pi_enable))
+ return -EINVAL;
+
+ down_write(&nvmet_config_sem);
+ subsys->pi_support = pi_enable;
+ up_write(&nvmet_config_sem);
+
+ return count;
+}
+CONFIGFS_ATTR(nvmet_subsys_, attr_pi_enable);
+#endif
+
static struct configfs_attribute *nvmet_subsys_attrs[] = {
&nvmet_subsys_attr_attr_allow_any_host,
&nvmet_subsys_attr_attr_version,
&nvmet_subsys_attr_attr_serial,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ &nvmet_subsys_attr_attr_pi_enable,
+#endif
NULL,
};
@@ -1161,6 +1219,9 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
&nvmet_attr_addr_trsvcid,
&nvmet_attr_addr_trtype,
&nvmet_attr_param_inline_data_size,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ &nvmet_attr_param_pi_enable,
+#endif
NULL,
};
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index ee41b2bb359e..72119960489d 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -192,6 +192,17 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
goto out;
}
+ if (ctrl->subsys->pi_support && ctrl->port->pi_enable) {
+ if (ctrl->port->pi_capable) {
+ ctrl->pi_support = true;
+ pr_info("controller %d T10-PI enabled\n", ctrl->cntlid);
+ } else {
+ ctrl->pi_support = false;
+ pr_warn("T10-PI is not supported on controller %d\n",
+ ctrl->cntlid);
+ }
+ }
+
uuid_copy(&ctrl->hostid, &d->hostid);
status = nvmet_install_queue(ctrl, req);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index c8b7095a0ded..5ee01e0887e7 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -144,6 +144,8 @@ struct nvmet_port {
bool enabled;
int inline_data_size;
const struct nvmet_fabrics_ops *tr_ops;
+ bool pi_capable;
+ bool pi_enable;
};
static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
@@ -203,6 +205,7 @@ struct nvmet_ctrl {
spinlock_t error_lock;
u64 err_counter;
struct nvme_error_slot slots[NVMET_ERROR_LOG_SLOTS];
+ bool pi_support;
};
struct nvmet_subsys {
@@ -225,6 +228,7 @@ struct nvmet_subsys {
u64 ver;
u64 serial;
char *subsysnqn;
+ bool pi_support;
struct config_group group;
@@ -501,6 +505,30 @@ static inline u32 nvmet_rw_data_len(struct nvmet_req *req)
req->ns->blksize_shift;
}
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static inline u32 nvmet_rw_prot_len(struct nvmet_req *req)
+{
+ return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) * req->ns->ms;
+}
+
+static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
+{
+ return ns->prot_type && ns->ms == sizeof(struct t10_pi_tuple);
+}
+#else
+static inline u32 nvmet_rw_prot_len(struct nvmet_req *req)
+{
+ return 0;
+}
+
+static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
+{
+ return false;
+}
+#endif /* CONFIG_BLK_DEV_INTEGRITY */
+
+#define NVMET_T10_PI_MAX_KB_SZ 128
+
static inline u32 nvmet_dsm_len(struct nvmet_req *req)
{
return (le32_to_cpu(req->cmd->dsm.nr) + 1) *
--
2.16.3
* [PATCH 15/16] nvmet: Add metadata support for block devices
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (14 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 14/16] nvmet: Add metadata/T10-PI support Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
2019-12-02 14:48 ` [PATCH 16/16] nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
Create a block IO request for the metadata from the protection SG list.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/target/io-cmd-bdev.c | 87 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 85 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 9f99b9d47d1e..034e0a64e6b0 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -164,6 +164,61 @@ static void nvmet_bio_done(struct bio *bio)
bio_put(bio);
}
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static int nvmet_bdev_alloc_bip(struct nvmet_req *req, struct bio *bio,
+ struct sg_mapping_iter *miter)
+{
+ struct blk_integrity *bi;
+ struct bio_integrity_payload *bip;
+ struct block_device *bdev = req->ns->bdev;
+ int rc;
+ size_t resid, len;
+
+ bi = bdev_get_integrity(bdev);
+ if (unlikely(!bi)) {
+ pr_err("Unable to locate bio_integrity\n");
+ return -ENODEV;
+ }
+
+ bip = bio_integrity_alloc(bio, GFP_NOIO,
+ min_t(unsigned int, req->prot_sg_cnt, BIO_MAX_PAGES));
+ if (IS_ERR(bip)) {
+ pr_err("Unable to allocate bio_integrity_payload\n");
+ return PTR_ERR(bip);
+ }
+
+ /* Payload is mapped by the fabrics host side */
+ bip->bip_flags |= BIP_MAPPED_INTEGRITY;
+ bip->bip_iter.bi_size = bio_integrity_bytes(bi, bio_sectors(bio));
+ bip_set_seed(bip, bio->bi_iter.bi_sector);
+
+ resid = bip->bip_iter.bi_size;
+ while (resid > 0 && sg_miter_next(miter)) {
+ len = min_t(size_t, miter->length, resid);
+ rc = bio_integrity_add_page(bio, miter->page, len,
+ offset_in_page(miter->addr));
+ if (unlikely(rc != len)) {
+ pr_err("bio_integrity_add_page() failed; %d\n", rc);
+ sg_miter_stop(miter);
+ return -ENOMEM;
+ }
+
+ resid -= len;
+ if (len < miter->length)
+ miter->consumed -= miter->length - len;
+ }
+ sg_miter_stop(miter);
+
+ return 0;
+}
+#else
+static int nvmet_bdev_alloc_bip(struct nvmet_req *req, struct bio *bio,
+ struct sg_mapping_iter *miter)
+{
+ return 0;
+}
+#endif /* CONFIG_BLK_DEV_INTEGRITY */
+
static void nvmet_bdev_execute_rw(struct nvmet_req *req)
{
int sg_cnt = req->sg_cnt;
@@ -171,9 +226,11 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
struct scatterlist *sg;
struct blk_plug plug;
sector_t sector;
- int op, i;
+ int op, i, rc;
+ struct sg_mapping_iter prot_miter;
- if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
+ if (!nvmet_check_transfer_len(req,
+ nvmet_rw_data_len(req) + req->prot_len))
return;
if (!req->sg_cnt) {
@@ -208,11 +265,25 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
bio->bi_opf = op;
blk_start_plug(&plug);
+ if (req->use_pi)
+ sg_miter_start(&prot_miter, req->prot_sg, req->prot_sg_cnt,
+ op == REQ_OP_READ ? SG_MITER_FROM_SG :
+ SG_MITER_TO_SG);
+
for_each_sg(req->sg, sg, req->sg_cnt, i) {
while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
!= sg->length) {
struct bio *prev = bio;
+ if (req->use_pi) {
+ rc = nvmet_bdev_alloc_bip(req, bio,
+ &prot_miter);
+ if (unlikely(rc)) {
+ bio_io_error(bio);
+ return;
+ }
+ }
+
bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
bio_set_dev(bio, req->ns->bdev);
bio->bi_iter.bi_sector = sector;
@@ -226,6 +297,14 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
sg_cnt--;
}
+ if (req->use_pi) {
+ rc = nvmet_bdev_alloc_bip(req, bio, &prot_miter);
+ if (unlikely(rc)) {
+ bio_io_error(bio);
+ return;
+ }
+ }
+
submit_bio(bio);
blk_finish_plug(&plug);
}
@@ -353,6 +432,10 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
case nvme_cmd_read:
case nvme_cmd_write:
req->execute = nvmet_bdev_execute_rw;
+ if (req->sq->ctrl->pi_support && nvmet_ns_has_pi(req->ns)) {
+ req->use_pi = true;
+ req->prot_len = nvmet_rw_prot_len(req);
+ }
return 0;
case nvme_cmd_flush:
req->execute = nvmet_bdev_execute_flush;
--
2.16.3
* [PATCH 16/16] nvmet-rdma: Add metadata/T10-PI support
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
` (15 preceding siblings ...)
2019-12-02 14:48 ` [PATCH 15/16] nvmet: Add metadata support for block devices Max Gurtovoy
@ 2019-12-02 14:48 ` Max Gurtovoy
16 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2019-12-02 14:48 UTC (permalink / raw)
To: linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, shlomin, israelr, idanb, oren, maxg
From: Israel Rukshin <israelr@mellanox.com>
For capable HCAs (e.g. ConnectX-4/ConnectX-5) this will allow end-to-end
protection information passthrough and validation for the NVMe over RDMA
transport. Metadata support is implemented using the new RDMA signature
verbs API.
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/target/rdma.c | 235 +++++++++++++++++++++++++++++++++++++++++----
1 file changed, 217 insertions(+), 18 deletions(-)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 37d262a65877..bdd3299d5739 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -54,6 +54,7 @@ struct nvmet_rdma_rsp {
struct nvmet_rdma_queue *queue;
struct ib_cqe read_cqe;
+ struct ib_cqe write_cqe;
struct rdma_rw_ctx rw;
struct nvmet_req req;
@@ -129,6 +130,9 @@ static bool nvmet_rdma_execute_command(struct nvmet_rdma_rsp *rsp);
static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc);
static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc);
static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc);
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc);
+#endif
static void nvmet_rdma_qp_event(struct ib_event *event, void *priv);
static void nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue);
static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
@@ -386,6 +390,10 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
/* Data In / RDMA READ */
r->read_cqe.done = nvmet_rdma_read_data_done;
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ /* Data Out / RDMA WRITE */
+ r->write_cqe.done = nvmet_rdma_write_data_done;
+#endif
return 0;
out_free_rsp:
@@ -495,6 +503,138 @@ static void nvmet_rdma_process_wr_wait_list(struct nvmet_rdma_queue *queue)
spin_unlock(&queue->rsp_wr_wait_lock);
}
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static u16 nvmet_rdma_check_pi_status(struct ib_mr *sig_mr)
+{
+ struct ib_mr_status mr_status;
+ int ret;
+ u16 status = 0;
+
+ ret = ib_check_mr_status(sig_mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
+ if (ret) {
+ pr_err("ib_check_mr_status failed, ret %d\n", ret);
+ return NVME_SC_INVALID_PI;
+ }
+
+ if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
+ switch (mr_status.sig_err.err_type) {
+ case IB_SIG_BAD_GUARD:
+ status = NVME_SC_GUARD_CHECK;
+ break;
+ case IB_SIG_BAD_REFTAG:
+ status = NVME_SC_REFTAG_CHECK;
+ break;
+ case IB_SIG_BAD_APPTAG:
+ status = NVME_SC_APPTAG_CHECK;
+ break;
+ }
+ pr_err("PI error found type %d expected 0x%x vs actual 0x%x\n",
+ mr_status.sig_err.err_type,
+ mr_status.sig_err.expected,
+ mr_status.sig_err.actual);
+ }
+
+ return status;
+}
+
+static void nvmet_rdma_set_diff_domain(struct nvme_command *cmd,
+ struct ib_sig_domain *domain, struct nvmet_ns *ns)
+{
+ struct blk_integrity *bi = bdev_get_integrity(ns->bdev);
+
+ WARN_ON(bi == NULL);
+
+ domain->sig_type = IB_SIG_TYPE_T10_DIF;
+ domain->sig.dif.bg_type = IB_T10DIF_CRC;
+ domain->sig.dif.pi_interval = 1 << bi->interval_exp;
+ domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
+
+ /*
+ * At the moment we hard code those, but in the future
+ * we will take them from cmd.
+ */
+ domain->sig.dif.apptag_check_mask = 0xffff;
+ domain->sig.dif.app_escape = true;
+ domain->sig.dif.ref_escape = true;
+ if (ns->prot_type != NVME_NS_DPS_PI_TYPE3)
+ domain->sig.dif.ref_remap = true;
+}
+
+static void nvmet_rdma_set_sig_attrs(struct nvmet_req *req,
+ struct ib_sig_attrs *sig_attrs)
+{
+ struct nvme_command *cmd = req->cmd;
+ u16 control = le16_to_cpu(cmd->rw.control);
+
+ memset(sig_attrs, 0, sizeof(*sig_attrs));
+
+ if (control & NVME_RW_PRINFO_PRACT) {
+ /* for WRITE_INSERT/READ_STRIP no wire domain */
+ sig_attrs->wire.sig_type = IB_SIG_TYPE_NONE;
+ nvmet_rdma_set_diff_domain(cmd, &sig_attrs->mem, req->ns);
+ /* Clear the PRACT bit since HCA will generate/verify the PI */
+ control &= ~NVME_RW_PRINFO_PRACT;
+ cmd->rw.control = cpu_to_le16(control);
+ /* PI is added by the HW */
+ req->transfer_len += req->prot_len;
+ } else {
+ /* for WRITE_PASS/READ_PASS both wire/memory domains exist */
+ nvmet_rdma_set_diff_domain(cmd, &sig_attrs->wire, req->ns);
+ nvmet_rdma_set_diff_domain(cmd, &sig_attrs->mem, req->ns);
+ }
+
+ if (le16_to_cpu(cmd->rw.control) & NVME_RW_PRINFO_PRCHK_REF)
+ sig_attrs->check_mask |= IB_SIG_CHECK_REFTAG;
+ if (le16_to_cpu(cmd->rw.control) & NVME_RW_PRINFO_PRCHK_GUARD)
+ sig_attrs->check_mask |= IB_SIG_CHECK_GUARD;
+ if (le16_to_cpu(cmd->rw.control) & NVME_RW_PRINFO_PRCHK_APP)
+ sig_attrs->check_mask |= IB_SIG_CHECK_APPTAG;
+}
+#else
+static u16 nvmet_rdma_check_pi_status(struct ib_mr *sig_mr)
+{
+ return 0;
+}
+
+static void nvmet_rdma_set_sig_attrs(struct nvmet_req *req,
+ struct ib_sig_attrs *sig_attrs)
+{
+}
+#endif /* CONFIG_BLK_DEV_INTEGRITY */
+
+static int nvmet_rdma_rw_ctx_init(struct nvmet_rdma_rsp *rsp, u64 addr, u32 key,
+ struct ib_sig_attrs *sig_attrs)
+{
+ struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+ struct nvmet_req *req = &rsp->req;
+ int ret;
+
+ if (req->use_pi)
+ ret = rdma_rw_ctx_signature_init(&rsp->rw, cm_id->qp,
+ cm_id->port_num, req->sg, req->sg_cnt, req->prot_sg,
+ req->prot_sg_cnt, sig_attrs, addr, key,
+ nvmet_data_dir(req));
+ else
+ ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
+ req->sg, req->sg_cnt, 0, addr, key,
+ nvmet_data_dir(req));
+
+ return ret;
+}
+
+static void nvmet_rdma_rw_ctx_destroy(struct nvmet_rdma_rsp *rsp)
+{
+ struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+ struct nvmet_req *req = &rsp->req;
+
+ if (req->use_pi)
+ rdma_rw_ctx_destroy_signature(&rsp->rw, cm_id->qp,
+ cm_id->port_num, req->sg, req->sg_cnt, req->prot_sg,
+ req->prot_sg_cnt, nvmet_data_dir(req));
+ else
+ rdma_rw_ctx_destroy(&rsp->rw, cm_id->qp, cm_id->port_num,
+ req->sg, req->sg_cnt, nvmet_data_dir(req));
+}
static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
{
@@ -502,11 +642,8 @@ static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
atomic_add(1 + rsp->n_rdma, &queue->sq_wr_avail);
- if (rsp->n_rdma) {
- rdma_rw_ctx_destroy(&rsp->rw, queue->cm_id->qp,
- queue->cm_id->port_num, rsp->req.sg,
- rsp->req.sg_cnt, nvmet_data_dir(&rsp->req));
- }
+ if (rsp->n_rdma)
+ nvmet_rdma_rw_ctx_destroy(rsp);
if (rsp->req.sg != rsp->cmd->inline_sg)
nvmet_req_free_sgl(&rsp->req);
@@ -561,11 +698,16 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
rsp->send_wr.opcode = IB_WR_SEND;
}
- if (nvmet_rdma_need_data_out(rsp))
- first_wr = rdma_rw_ctx_wrs(&rsp->rw, cm_id->qp,
- cm_id->port_num, NULL, &rsp->send_wr);
- else
+ if (nvmet_rdma_need_data_out(rsp)) {
+ if (rsp->req.use_pi)
+ first_wr = rdma_rw_ctx_wrs(&rsp->rw, cm_id->qp,
+ cm_id->port_num, &rsp->write_cqe, NULL);
+ else
+ first_wr = rdma_rw_ctx_wrs(&rsp->rw, cm_id->qp,
+ cm_id->port_num, NULL, &rsp->send_wr);
+ } else {
first_wr = &rsp->send_wr;
+ }
nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
@@ -584,15 +726,14 @@ static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc)
struct nvmet_rdma_rsp *rsp =
container_of(wc->wr_cqe, struct nvmet_rdma_rsp, read_cqe);
struct nvmet_rdma_queue *queue = cq->cq_context;
+ u16 status = 0;
WARN_ON(rsp->n_rdma <= 0);
atomic_add(rsp->n_rdma, &queue->sq_wr_avail);
- rdma_rw_ctx_destroy(&rsp->rw, queue->cm_id->qp,
- queue->cm_id->port_num, rsp->req.sg,
- rsp->req.sg_cnt, nvmet_data_dir(&rsp->req));
rsp->n_rdma = 0;
if (unlikely(wc->status != IB_WC_SUCCESS)) {
+ nvmet_rdma_rw_ctx_destroy(rsp);
nvmet_req_uninit(&rsp->req);
nvmet_rdma_release_rsp(rsp);
if (wc->status != IB_WC_WR_FLUSH_ERR) {
@@ -603,9 +744,59 @@ static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc)
return;
}
- rsp->req.execute(&rsp->req);
+ if (rsp->req.use_pi)
+ status = nvmet_rdma_check_pi_status(rsp->rw.reg->mr);
+ nvmet_rdma_rw_ctx_destroy(rsp);
+
+ if (unlikely(status))
+ nvmet_req_complete(&rsp->req, status);
+ else
+ rsp->req.execute(&rsp->req);
}
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+ struct nvmet_rdma_rsp *rsp =
+ container_of(wc->wr_cqe, struct nvmet_rdma_rsp, write_cqe);
+ struct nvmet_rdma_queue *queue = cq->cq_context;
+ struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+ u16 status;
+
+ WARN_ON(rsp->n_rdma <= 0);
+ atomic_add(rsp->n_rdma, &queue->sq_wr_avail);
+ rsp->n_rdma = 0;
+
+ if (unlikely(wc->status != IB_WC_SUCCESS)) {
+ nvmet_rdma_rw_ctx_destroy(rsp);
+ nvmet_req_uninit(&rsp->req);
+ nvmet_rdma_release_rsp(rsp);
+ if (wc->status != IB_WC_WR_FLUSH_ERR) {
+ pr_info("RDMA WRITE for CQE 0x%p failed with status %s (%d).\n",
+ wc->wr_cqe, ib_wc_status_msg(wc->status),
+ wc->status);
+ nvmet_rdma_error_comp(queue);
+ }
+ return;
+ }
+
+ /*
+ * Upon RDMA completion check the signature status
+ * - if succeeded send good NVMe response
+ * - if failed send bad NVMe response with appropriate error
+ */
+ status = nvmet_rdma_check_pi_status(rsp->rw.reg->mr);
+ if (unlikely(status))
+ rsp->req.cqe->status = cpu_to_le16(status << 1);
+ nvmet_rdma_rw_ctx_destroy(rsp);
+
+ if (unlikely(ib_post_send(cm_id->qp, &rsp->send_wr, NULL))) {
+ pr_err("sending cmd response failed\n");
+ nvmet_rdma_release_rsp(rsp);
+ }
+}
+#endif /* CONFIG_BLK_DEV_INTEGRITY */
+
static void nvmet_rdma_use_inline_sg(struct nvmet_rdma_rsp *rsp, u32 len,
u64 off)
{
@@ -660,9 +851,9 @@ static u16 nvmet_rdma_map_sgl_inline(struct nvmet_rdma_rsp *rsp)
static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
struct nvme_keyed_sgl_desc *sgl, bool invalidate)
{
- struct rdma_cm_id *cm_id = rsp->queue->cm_id;
u64 addr = le64_to_cpu(sgl->addr);
u32 key = get_unaligned_le32(sgl->key);
+ struct ib_sig_attrs sig_attrs;
int ret;
rsp->req.transfer_len = get_unaligned_le24(sgl->length);
@@ -671,13 +862,14 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
if (!rsp->req.transfer_len)
return 0;
+ if (rsp->req.use_pi)
+ nvmet_rdma_set_sig_attrs(&rsp->req, &sig_attrs);
+
ret = nvmet_req_alloc_sgl(&rsp->req);
if (unlikely(ret < 0))
goto error_out;
- ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
- rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
- nvmet_data_dir(&rsp->req));
+ ret = nvmet_rdma_rw_ctx_init(rsp, addr, key, &sig_attrs);
if (unlikely(ret < 0))
goto error_out;
rsp->n_rdma += ret;
@@ -956,6 +1148,9 @@ nvmet_rdma_find_get_device(struct rdma_cm_id *cm_id)
goto out_free_pd;
}
+ port->pi_capable = ndev->device->attrs.device_cap_flags &
+ IB_DEVICE_INTEGRITY_HANDOVER ? true : false;
+
list_add(&ndev->entry, &device_list);
out_unlock:
mutex_unlock(&device_list_mutex);
@@ -1020,6 +1215,10 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
qp_attr.cap.max_recv_sge = 1 + ndev->inline_page_count;
}
+ if (queue->port->pi_capable && queue->port->pi_enable &&
+ queue->host_qid)
+ qp_attr.create_flags |= IB_QP_CREATE_INTEGRITY_EN;
+
ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);
if (ret) {
pr_err("failed to create_qp ret= %d\n", ret);
@@ -1164,6 +1363,7 @@ nvmet_rdma_alloc_queue(struct nvmet_rdma_device *ndev,
INIT_WORK(&queue->release_work, nvmet_rdma_release_queue_work);
queue->dev = ndev;
queue->cm_id = cm_id;
+ queue->port = cm_id->context;
spin_lock_init(&queue->state_lock);
queue->state = NVMET_RDMA_Q_CONNECTING;
@@ -1282,7 +1482,6 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
ret = -ENOMEM;
goto put_device;
}
- queue->port = cm_id->context;
if (queue->host_qid == 0) {
/* Let inflight controller teardown complete */
--
2.16.3
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH 01/16] nvme: Introduce namespace features flag
2019-12-02 14:47 ` [PATCH 01/16] nvme: Introduce namespace features flag Max Gurtovoy
@ 2019-12-04 8:41 ` Christoph Hellwig
0 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2019-12-04 8:41 UTC (permalink / raw)
To: Max Gurtovoy
Cc: axboe, sagi, martin.petersen, idanb, israelr, vladimirk,
linux-nvme, shlomin, oren, kbusch, hch
On Mon, Dec 02, 2019 at 04:47:57PM +0200, Max Gurtovoy wrote:
> From: Israel Rukshin <israelr@mellanox.com>
>
> This patch is a preparation for adding PI support for fabrics drivers
No need to ever say "This patch ...". Better to explain why it is added.
Also, the patch adds a field to the nvme_request structure.
* Re: [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support
2019-12-02 14:48 ` [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support Max Gurtovoy
@ 2020-01-22 21:57 ` James Smart
2020-01-23 9:59 ` Max Gurtovoy
0 siblings, 1 reply; 28+ messages in thread
From: James Smart @ 2020-01-22 21:57 UTC (permalink / raw)
To: Max Gurtovoy, linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, idanb, israelr, shlomin, oren
On 12/2/2019 6:48 AM, Max Gurtovoy wrote:
> @@ -1215,34 +1283,115 @@ static int nvme_rdma_map_sg_single(struct nvme_rdma_queue *queue,
> return 0;
> }
>
> -static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
> - struct nvme_rdma_request *req, struct nvme_command *c,
> - int count)
> +#ifdef CONFIG_BLK_DEV_INTEGRITY
> +static void nvme_rdma_set_diff_domain(struct nvme_command *cmd, struct bio *bio,
> + struct ib_sig_domain *domain, struct request *rq)
> {
> - struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
> - int nr;
> + struct blk_integrity *bi = blk_get_integrity(bio->bi_disk);
> + struct nvme_ns *ns = rq->rq_disk->private_data;
> +
> + WARN_ON(bi == NULL);
>
> - req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
> - if (WARN_ON_ONCE(!req->mr))
> - return -EAGAIN;
> + domain->sig_type = IB_SIG_TYPE_T10_DIF;
> + domain->sig.dif.bg_type = IB_T10DIF_CRC;
> + domain->sig.dif.pi_interval = 1 << bi->interval_exp;
> + domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
>
> /*
> - * Align the MR to a 4K page size to match the ctrl page size and
> - * the block virtual boundary.
> + * At the moment we hard code those, but in the future
> + * we will take them from cmd.
> */
> - nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL,
> - SZ_4K);
> - if (unlikely(nr < count)) {
> - ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
> - req->mr = NULL;
> - if (nr < 0)
> - return nr;
> - return -EINVAL;
> + domain->sig.dif.apptag_check_mask = 0xffff;
> + domain->sig.dif.app_escape = true;
> + domain->sig.dif.ref_escape = true;
> + if (ns->pi_type != NVME_NS_DPS_PI_TYPE3)
> + domain->sig.dif.ref_remap = true;
> +}
> +
>
On a per-I/O basis, there needs to be a specific description of the DIF
information used to program the port hardware: things such as block size,
type, and so on. I see this routine using a mix of the bio that is
associated with the original request as well as the namespace pointer to
get this info. To me, reaching into the bio, as well as locating the ns
structures, digs into the other layers too much.
Wouldn't we be better off with the core layer doing all the reaching and
setting up a pi structure in the nvme_request with this information?
Replace has_pi with this pi struct, so that
"nvme_req(rq)->pi.pi_type == 0" becomes the equivalent of !has_pi? If we
didn't want to replicate the PI info, then nvme_request could simply add
a pointer to the ns, and the ns could be looked at explicitly to gather
the attributes.
Thoughts?
-- james
* Re: [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support
2020-01-22 21:57 ` James Smart
@ 2020-01-23 9:59 ` Max Gurtovoy
2020-01-23 15:34 ` James Smart
0 siblings, 1 reply; 28+ messages in thread
From: Max Gurtovoy @ 2020-01-23 9:59 UTC (permalink / raw)
To: James Smart, linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, idanb, israelr, shlomin, oren
On 1/22/2020 11:57 PM, James Smart wrote:
>
>
> On 12/2/2019 6:48 AM, Max Gurtovoy wrote:
>> @@ -1215,34 +1283,115 @@ static int nvme_rdma_map_sg_single(struct
>> nvme_rdma_queue *queue,
>> return 0;
>> }
>> -static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
>> - struct nvme_rdma_request *req, struct nvme_command *c,
>> - int count)
>> +#ifdef CONFIG_BLK_DEV_INTEGRITY
>> +static void nvme_rdma_set_diff_domain(struct nvme_command *cmd,
>> struct bio *bio,
>> + struct ib_sig_domain *domain, struct request *rq)
>> {
>> - struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
>> - int nr;
>> + struct blk_integrity *bi = blk_get_integrity(bio->bi_disk);
>> + struct nvme_ns *ns = rq->rq_disk->private_data;
>> +
>> + WARN_ON(bi == NULL);
>> - req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
>> - if (WARN_ON_ONCE(!req->mr))
>> - return -EAGAIN;
>> + domain->sig_type = IB_SIG_TYPE_T10_DIF;
>> + domain->sig.dif.bg_type = IB_T10DIF_CRC;
>> + domain->sig.dif.pi_interval = 1 << bi->interval_exp;
>> + domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
>> /*
>> - * Align the MR to a 4K page size to match the ctrl page size and
>> - * the block virtual boundary.
>> + * At the moment we hard code those, but in the future
>> + * we will take them from cmd.
>> */
>> - nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL,
>> - SZ_4K);
>> - if (unlikely(nr < count)) {
>> - ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
>> - req->mr = NULL;
>> - if (nr < 0)
>> - return nr;
>> - return -EINVAL;
>> + domain->sig.dif.apptag_check_mask = 0xffff;
>> + domain->sig.dif.app_escape = true;
>> + domain->sig.dif.ref_escape = true;
>> + if (ns->pi_type != NVME_NS_DPS_PI_TYPE3)
>> + domain->sig.dif.ref_remap = true;
>> +}
>> +
>>
>
> On a per-io basis, there needs to be specific descriptions of the DIF
> information to program the port hardware. Things such as block size,
> type, and so. I see this routine using a mix of the bio that is
> associated with the original request as well as the namespace pointer
> to get this info. To me the reaching into the bio, as well as the
> locating of the ns structures are reaching into the other layers too
> much.
>
> Wouldn't we be better off with with the core layer doing all the
> reaching and setting up a pi structure in the nvme_request with this
> information ? replace has_pi with this pi struct and
> "nvme_req(rq)->pi.pi_type == 0" is equivalent to has_pi ? If we
> didn't want to replicate the PI info, then nvme_request can simply add
> a pointer to the ns, and the ns can be looked at explicitly to gather
> the attributes.
>
> Thoughts ?
>
The NVMe namespace is already used by the RDMA/TCP/FC transport drivers
in each queue_rq implementation. We can pass it in instead of retrieving
it from the rq, if that looks better.
The NVMe PI attributes (e.g. reftag/check_flags/action_flags) are all
set in the core layer.
PI attributes for configuring the HW of each transport should be set in
the transport driver.
> -- james
>
* Re: [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support
2020-01-23 9:59 ` Max Gurtovoy
@ 2020-01-23 15:34 ` James Smart
2020-01-23 18:52 ` James Smart
0 siblings, 1 reply; 28+ messages in thread
From: James Smart @ 2020-01-23 15:34 UTC (permalink / raw)
To: Max Gurtovoy, linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, idanb, israelr, shlomin, oren
On 1/23/2020 1:59 AM, Max Gurtovoy wrote:
>
> On 1/22/2020 11:57 PM, James Smart wrote:
>> On a per-io basis, there needs to be specific descriptions of the DIF
>> information to program the port hardware. Things such as block size,
>> type, and so. I see this routine using a mix of the bio that is
>> associated with the original request as well as the namespace pointer
>> to get this info. To me the reaching into the bio, as well as the
>> locating of the ns structures are reaching into the other layers too
>> much.
>>
>> Wouldn't we be better off with with the core layer doing all the
>> reaching and setting up a pi structure in the nvme_request with this
>> information ? replace has_pi with this pi struct and
>> "nvme_req(rq)->pi.pi_type == 0" is equivalent to has_pi ? If we
>> didn't want to replicate the PI info, then nvme_request can simply
>> add a pointer to the ns, and the ns can be looked at explicitly to
>> gather the attributes.
>>
>> Thoughts ?
>>
> NVMe namespace is used by RDMA/TCP/FC transport drivers in each
> queue_rq implementation. We can pass it instead of reaching it from
> the rq, if this looks better.
?? There hasn't been any reason for the transports to look at the
namespace to date.
But yes, I much prefer passing the ns and using it for the DIF info.
This removes what I didn't like: encoding in each transport the
knowledge of how to get from the rq to the ns.
>
> NVMe PI attributes (e.g reftag/check_flags/action_flags) are all set
> in the core layer.
>
> PI attributes for configuring the HW of each transport should be done
> in the transport driver.
>
agreed.
-- james
* Re: [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support
2020-01-23 15:34 ` James Smart
@ 2020-01-23 18:52 ` James Smart
0 siblings, 0 replies; 28+ messages in thread
From: James Smart @ 2020-01-23 18:52 UTC (permalink / raw)
To: Max Gurtovoy, linux-nvme, kbusch, hch, sagi, martin.petersen
Cc: axboe, vladimirk, idanb, israelr, shlomin, oren
On 1/23/2020 7:34 AM, James Smart wrote:
>
>
> On 1/23/2020 1:59 AM, Max Gurtovoy wrote:
>>
>> On 1/22/2020 11:57 PM, James Smart wrote:
>>> On a per-io basis, there needs to be specific descriptions of the
>>> DIF information to program the port hardware. Things such as block
>>> size, type, and so. I see this routine using a mix of the bio that
>>> is associated with the original request as well as the namespace
>>> pointer to get this info. To me the reaching into the bio, as
>>> well as the locating of the ns structures are reaching into the
>>> other layers too much.
>>>
>>> Wouldn't we be better off with with the core layer doing all the
>>> reaching and setting up a pi structure in the nvme_request with this
>>> information ? replace has_pi with this pi struct and
>>> "nvme_req(rq)->pi.pi_type == 0" is equivalent to has_pi ? If we
>>> didn't want to replicate the PI info, then nvme_request can simply
>>> add a pointer to the ns, and the ns can be looked at explicitly to
>>> gather the attributes.
>>>
>>> Thoughts ?
>>>
>> NVMe namespace is used by RDMA/TCP/FC transport drivers in each
>> queue_rq implementation. We can pass it instead of reaching it from
>> the rq, if this looks better.
>
> ?? there hasn't been any reason to look at the namespace by the
> transports to date.
>
> But yes, I much prefer passing the ns and using it for the dif info.
> This removes what I didn't like - encoding the relationship of how to
> get from the rq to ns in each transport.
>
That also means the bio isn't necessary in most of the other routines
until you call blk_rq_count_integrity_sg(); blksz can come from
(1 << ns->lba_shift).
-- james
* [PATCH 14/16] nvmet: add metadata/T10-PI support
2020-05-19 14:05 [PATCH 00/16 v8] nvme-rdma/nvmet-rdma: Add " Max Gurtovoy
@ 2020-05-19 14:06 ` Max Gurtovoy
0 siblings, 0 replies; 28+ messages in thread
From: Max Gurtovoy @ 2020-05-19 14:06 UTC (permalink / raw)
To: sagi, linux-nvme, kbusch, hch, martin.petersen, jsmart2021, axboe
Cc: vladimirk, shlomin, israelr, James Smart, idanb, oren,
Max Gurtovoy, nitzanc
From: Israel Rukshin <israelr@mellanox.com>
Expose the namespace metadata format when PI is enabled. The user needs
to enable the capability per subsystem and per port. The other metadata
properties are taken from the namespace/bdev.
Usage example:
echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: James Smart <james.smart@broadcom.com>
---
drivers/nvme/target/admin-cmd.c | 26 +++++++++++++++---
drivers/nvme/target/configfs.c | 58 +++++++++++++++++++++++++++++++++++++++
drivers/nvme/target/core.c | 21 +++++++++++---
drivers/nvme/target/fabrics-cmd.c | 7 +++--
drivers/nvme/target/nvmet.h | 19 +++++++++++++
5 files changed, 121 insertions(+), 10 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 905aba8..9c8a00d 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -341,6 +341,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
{
struct nvmet_ctrl *ctrl = req->sq->ctrl;
struct nvme_id_ctrl *id;
+ u32 cmd_capsule_size;
u16 status = 0;
id = kzalloc(sizeof(*id), GFP_KERNEL);
@@ -433,9 +434,15 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
strlcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
- /* Max command capsule size is sqe + single page of in-capsule data */
- id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
- req->port->inline_data_size) / 16);
+ /*
+ * Max command capsule size is sqe + in-capsule data size.
+ * Disable in-capsule data for Metadata capable controllers.
+ */
+ cmd_capsule_size = sizeof(struct nvme_command);
+ if (!ctrl->pi_support)
+ cmd_capsule_size += req->port->inline_data_size;
+ id->ioccsz = cpu_to_le32(cmd_capsule_size / 16);
+
/* Max response capsule size is cqe */
id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
@@ -465,6 +472,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
static void nvmet_execute_identify_ns(struct nvmet_req *req)
{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
struct nvmet_ns *ns;
struct nvme_id_ns *id;
u16 status = 0;
@@ -482,7 +490,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
}
/* return an all zeroed buffer if we can't find an active namespace */
- ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
+ ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
if (!ns)
goto done;
@@ -526,6 +534,16 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
id->lbaf[0].ds = ns->blksize_shift;
+ if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
+ id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST |
+ NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 |
+ NVME_NS_DPC_PI_TYPE3;
+ id->mc = NVME_MC_EXTENDED_LBA;
+ id->dps = ns->pi_type;
+ id->flbas = NVME_NS_FLBAS_META_EXT;
+ id->lbaf[0].ms = ns->metadata_size;
+ }
+
if (ns->readonly)
id->nsattr |= (1 << 0);
nvmet_put_namespace(ns);
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 24eb4cf..d71d216 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -248,6 +248,36 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
CONFIGFS_ATTR(nvmet_, param_inline_data_size);
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return snprintf(page, PAGE_SIZE, "%d\n", port->pi_enable);
+}
+
+static ssize_t nvmet_param_pi_enable_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+ bool val;
+
+ if (strtobool(page, &val))
+ return -EINVAL;
+
+ if (port->enabled) {
+ pr_err("Disable port before setting pi_enable value.\n");
+ return -EACCES;
+ }
+
+ port->pi_enable = val;
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_pi_enable);
+#endif
+
static ssize_t nvmet_addr_trtype_show(struct config_item *item,
char *page)
{
@@ -984,6 +1014,28 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
}
CONFIGFS_ATTR(nvmet_subsys_, attr_model);
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_subsys_attr_pi_enable_show(struct config_item *item,
+ char *page)
+{
+ return snprintf(page, PAGE_SIZE, "%d\n", to_subsys(item)->pi_support);
+}
+
+static ssize_t nvmet_subsys_attr_pi_enable_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_subsys *subsys = to_subsys(item);
+ bool pi_enable;
+
+ if (strtobool(page, &pi_enable))
+ return -EINVAL;
+
+ subsys->pi_support = pi_enable;
+ return count;
+}
+CONFIGFS_ATTR(nvmet_subsys_, attr_pi_enable);
+#endif
+
static struct configfs_attribute *nvmet_subsys_attrs[] = {
&nvmet_subsys_attr_attr_allow_any_host,
&nvmet_subsys_attr_attr_version,
@@ -991,6 +1043,9 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
&nvmet_subsys_attr_attr_cntlid_min,
&nvmet_subsys_attr_attr_cntlid_max,
&nvmet_subsys_attr_attr_model,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ &nvmet_subsys_attr_attr_pi_enable,
+#endif
NULL,
};
@@ -1290,6 +1345,9 @@ static void nvmet_port_release(struct config_item *item)
&nvmet_attr_addr_trsvcid,
&nvmet_attr_addr_trtype,
&nvmet_attr_param_inline_data_size,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ &nvmet_attr_param_pi_enable,
+#endif
NULL,
};
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 50dfc60..9f6b0f9 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -322,12 +322,21 @@ int nvmet_enable_port(struct nvmet_port *port)
if (!try_module_get(ops->owner))
return -EINVAL;
- ret = ops->add_port(port);
- if (ret) {
- module_put(ops->owner);
- return ret;
+ /*
+ * If the user requested PI support and the transport isn't pi capable,
+ * don't enable the port.
+ */
+ if (port->pi_enable && !ops->metadata_support) {
+ pr_err("T10-PI is not supported by transport type %d\n",
+ port->disc_addr.trtype);
+ ret = -EINVAL;
+ goto out_put;
}
+ ret = ops->add_port(port);
+ if (ret)
+ goto out_put;
+
/* If the transport didn't set inline_data_size, then disable it. */
if (port->inline_data_size < 0)
port->inline_data_size = 0;
@@ -335,6 +344,10 @@ int nvmet_enable_port(struct nvmet_port *port)
port->enabled = true;
port->tr_ops = ops;
return 0;
+
+out_put:
+ module_put(ops->owner);
+ return ret;
}
void nvmet_disable_port(struct nvmet_port *port)
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index 52a6f70..42bd12b 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -197,6 +197,8 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
goto out;
}
+ ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
+
uuid_copy(&ctrl->hostid, &d->hostid);
status = nvmet_install_queue(ctrl, req);
@@ -205,8 +207,9 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
goto out;
}
- pr_info("creating controller %d for subsystem %s for NQN %s.\n",
- ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn);
+ pr_info("creating controller %d for subsystem %s for NQN %s%s.\n",
+ ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn,
+ ctrl->pi_support ? " T10-PI is enabled" : "");
req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
out:
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index ceedaaf..2986e2c 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -145,6 +145,7 @@ struct nvmet_port {
bool enabled;
int inline_data_size;
const struct nvmet_fabrics_ops *tr_ops;
+ bool pi_enable;
};
static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
@@ -204,6 +205,7 @@ struct nvmet_ctrl {
spinlock_t error_lock;
u64 err_counter;
struct nvme_error_slot slots[NVMET_ERROR_LOG_SLOTS];
+ bool pi_support;
};
struct nvmet_subsys_model {
@@ -233,6 +235,7 @@ struct nvmet_subsys {
u64 ver;
u64 serial;
char *subsysnqn;
+ bool pi_support;
struct config_group group;
@@ -284,6 +287,7 @@ struct nvmet_fabrics_ops {
unsigned int type;
unsigned int msdbd;
bool has_keyed_sgls : 1;
+ bool metadata_support : 1;
void (*queue_response)(struct nvmet_req *req);
int (*add_port)(struct nvmet_port *port);
void (*remove_port)(struct nvmet_port *port);
@@ -510,6 +514,14 @@ static inline u32 nvmet_rw_data_len(struct nvmet_req *req)
req->ns->blksize_shift;
}
+static inline u32 nvmet_rw_metadata_len(struct nvmet_req *req)
+{
+ if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+ return 0;
+ return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
+ req->ns->metadata_size;
+}
+
static inline u32 nvmet_dsm_len(struct nvmet_req *req)
{
return (le32_to_cpu(req->cmd->dsm.nr) + 1) *
@@ -524,4 +536,11 @@ static inline __le16 to0based(u32 a)
return cpu_to_le16(max(1U, min(1U << 16, a)) - 1);
}
+static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
+{
+ if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+ return false;
+ return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
+}
+
#endif /* _NVMET_H */
--
1.8.3.1
* Re: [PATCH 14/16] nvmet: add metadata/T10-PI support
2020-05-14 15:09 ` Max Gurtovoy
@ 2020-05-14 15:37 ` James Smart
0 siblings, 0 replies; 28+ messages in thread
From: James Smart @ 2020-05-14 15:37 UTC (permalink / raw)
To: Max Gurtovoy, James Smart, linux-nvme, kbusch, hch, sagi,
martin.petersen, axboe
Cc: vladimirk, idanb, israelr, shlomin, oren, nitzanc
ok - I'm good on this
-- james
* Re: [PATCH 14/16] nvmet: add metadata/T10-PI support
2020-05-13 19:51 ` James Smart
@ 2020-05-14 15:09 ` Max Gurtovoy
2020-05-14 15:37 ` James Smart
0 siblings, 1 reply; 28+ messages in thread
From: Max Gurtovoy @ 2020-05-14 15:09 UTC (permalink / raw)
To: James Smart, linux-nvme, kbusch, hch, sagi, martin.petersen,
jsmart2021, axboe
Cc: vladimirk, idanb, israelr, shlomin, oren, nitzanc
On 5/13/2020 10:51 PM, James Smart wrote:
>
>
> On 5/4/2020 8:57 AM, Max Gurtovoy wrote:
>> From: Israel Rukshin <israelr@mellanox.com>
>>
>> Expose the namespace metadata format when PI is enabled. The user needs
>> to enable the capability per subsystem and per port. The other metadata
>> properties are taken from the namespace/bdev.
>>
>> Usage example:
>> echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
>> echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable
>>
>> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
>> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
>> ---
>> drivers/nvme/target/admin-cmd.c | 26 +++++++++++++++---
>> drivers/nvme/target/configfs.c | 58
>> +++++++++++++++++++++++++++++++++++++++
>> drivers/nvme/target/core.c | 21 +++++++++++---
>> drivers/nvme/target/fabrics-cmd.c | 7 +++--
>> drivers/nvme/target/nvmet.h | 19 +++++++++++++
>> 5 files changed, 121 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/nvme/target/admin-cmd.c
>> b/drivers/nvme/target/admin-cmd.c
>> index 905aba8..9c8a00d 100644
>> --- a/drivers/nvme/target/admin-cmd.c
>> +++ b/drivers/nvme/target/admin-cmd.c
>> @@ -341,6 +341,7 @@ static void nvmet_execute_identify_ctrl(struct
>> nvmet_req *req)
>> {
>> struct nvmet_ctrl *ctrl = req->sq->ctrl;
>> struct nvme_id_ctrl *id;
>> + u32 cmd_capsule_size;
>> u16 status = 0;
>> id = kzalloc(sizeof(*id), GFP_KERNEL);
>> @@ -433,9 +434,15 @@ static void nvmet_execute_identify_ctrl(struct
>> nvmet_req *req)
>> strlcpy(id->subnqn, ctrl->subsys->subsysnqn,
>> sizeof(id->subnqn));
>> - /* Max command capsule size is sqe + single page of in-capsule
>> data */
>> - id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
>> - req->port->inline_data_size) / 16);
>> + /*
>> + * Max command capsule size is sqe + in-capsule data size.
>> + * Disable in-capsule data for Metadata capable controllers.
>> + */
>> + cmd_capsule_size = sizeof(struct nvme_command);
>> + if (!ctrl->pi_support)
>> + cmd_capsule_size += req->port->inline_data_size;
>> + id->ioccsz = cpu_to_le32(cmd_capsule_size / 16);
>> +
>
> This seems odd to me; it should probably reference attributes of the
> transport, set based on which transport port the command was received
> on.
We set this in admin_connect:
ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
>
>
>> /* Max response capsule size is cqe */
>> id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
>> @@ -465,6 +472,7 @@ static void nvmet_execute_identify_ctrl(struct
>> nvmet_req *req)
>> static void nvmet_execute_identify_ns(struct nvmet_req *req)
>> {
>> + struct nvmet_ctrl *ctrl = req->sq->ctrl;
>> struct nvmet_ns *ns;
>> struct nvme_id_ns *id;
>> u16 status = 0;
>> @@ -482,7 +490,7 @@ static void nvmet_execute_identify_ns(struct
>> nvmet_req *req)
>> }
>> /* return an all zeroed buffer if we can't find an active
>> namespace */
>> - ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
>> + ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
>> if (!ns)
>> goto done;
>> @@ -526,6 +534,16 @@ static void nvmet_execute_identify_ns(struct
>> nvmet_req *req)
>> id->lbaf[0].ds = ns->blksize_shift;
>> + if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
>> + id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST |
>> + NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 |
>> + NVME_NS_DPC_PI_TYPE3;
>
> Why is PI_TYPE2 getting set if nvmet_bdev_ns_enable_integrity() won't
> set a metadata size?
It won't:
static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
{
if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
return false;
return ns->pi_type && ns->metadata_size == sizeof(struct
t10_pi_tuple);
}
>
>> + id->mc = NVME_MC_EXTENDED_LBA;
>> + id->dps = ns->pi_type;
>> + id->flbas = NVME_NS_FLBAS_META_EXT;
>
> I guess this is ok - always requiring extended lbas. Will this ever
> change over time?
This is per the NVMe over Fabrics specification.
>
>> + id->lbaf[0].ms = ns->metadata_size;
>> + }
>> +
>> if (ns->readonly)
>> id->nsattr |= (1 << 0);
>> nvmet_put_namespace(ns);
> ...
>> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
>> index 50dfc60..9f6b0f9 100644
>> --- a/drivers/nvme/target/core.c
>> +++ b/drivers/nvme/target/core.c
>> @@ -322,12 +322,21 @@ int nvmet_enable_port(struct nvmet_port *port)
>> if (!try_module_get(ops->owner))
>> return -EINVAL;
>> - ret = ops->add_port(port);
>> - if (ret) {
>> - module_put(ops->owner);
>> - return ret;
>> + /*
>> + * If the user requested PI support and the transport isn't pi
>> capable,
>> + * don't enable the port.
>> + */
>> + if (port->pi_enable && !ops->metadata_support) {
>> + pr_err("T10-PI is not supported by transport type %d\n",
>> + port->disc_addr.trtype);
>> + ret = -EINVAL;
>> + goto out_put;
>> }
>> + ret = ops->add_port(port);
>> + if (ret)
>> + goto out_put;
>> +
>
> ok for now - but it's going to have to be changed later to check the
> port attributes, not the transport ops. So it'll go back to add_port,
> then validation with delete_port if it fails.
>
>> /* If the transport didn't set inline_data_size, then disable
>> it. */
>> if (port->inline_data_size < 0)
>> port->inline_data_size = 0;
>> @@ -335,6 +344,10 @@ int nvmet_enable_port(struct nvmet_port *port)
>> port->enabled = true;
>> port->tr_ops = ops;
>> return 0;
>> +
>> +out_put:
>> + module_put(ops->owner);
>> + return ret;
>> }
>> void nvmet_disable_port(struct nvmet_port *port)
>> diff --git a/drivers/nvme/target/fabrics-cmd.c
>> b/drivers/nvme/target/fabrics-cmd.c
>> index 52a6f70..42bd12b 100644
>> --- a/drivers/nvme/target/fabrics-cmd.c
>> +++ b/drivers/nvme/target/fabrics-cmd.c
>> @@ -197,6 +197,8 @@ static void nvmet_execute_admin_connect(struct
>> nvmet_req *req)
>> goto out;
>> }
>> + ctrl->pi_support = ctrl->port->pi_enable &&
>> ctrl->subsys->pi_support;
>> +
>> uuid_copy(&ctrl->hostid, &d->hostid);
>> status = nvmet_install_queue(ctrl, req);
>> @@ -205,8 +207,9 @@ static void nvmet_execute_admin_connect(struct
>> nvmet_req *req)
>> goto out;
>> }
>> - pr_info("creating controller %d for subsystem %s for NQN %s.\n",
>> - ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn);
>> + pr_info("creating controller %d for subsystem %s for NQN %s%s.\n",
>> + ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn,
>> + ctrl->pi_support ? " T10-PI is enabled" : "");
>> req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
>> out:
>> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
>> index ceedaaf..2986e2c 100644
>> --- a/drivers/nvme/target/nvmet.h
>> +++ b/drivers/nvme/target/nvmet.h
>> @@ -145,6 +145,7 @@ struct nvmet_port {
>> bool enabled;
>> int inline_data_size;
>> const struct nvmet_fabrics_ops *tr_ops;
>> + bool pi_enable;
>
> ok - but would have liked to have seen this generalized for metadata
> support, then PI in the metadata.
>
>
> -- james
>
>
>
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
* Re: [PATCH 14/16] nvmet: add metadata/T10-PI support
2020-05-04 15:57 ` [PATCH 14/16] nvmet: add " Max Gurtovoy
@ 2020-05-13 19:51 ` James Smart
2020-05-14 15:09 ` Max Gurtovoy
0 siblings, 1 reply; 28+ messages in thread
From: James Smart @ 2020-05-13 19:51 UTC (permalink / raw)
To: Max Gurtovoy, linux-nvme, kbusch, hch, sagi, martin.petersen,
jsmart2021, axboe
Cc: vladimirk, idanb, israelr, shlomin, oren, nitzanc
On 5/4/2020 8:57 AM, Max Gurtovoy wrote:
> From: Israel Rukshin <israelr@mellanox.com>
>
> Expose the namespace metadata format when PI is enabled. The user needs
> to enable the capability per subsystem and per port. The other metadata
> properties are taken from the namespace/bdev.
>
> Usage example:
> echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
> echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable
>
> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> ---
> drivers/nvme/target/admin-cmd.c | 26 +++++++++++++++---
> drivers/nvme/target/configfs.c | 58 +++++++++++++++++++++++++++++++++++++++
> drivers/nvme/target/core.c | 21 +++++++++++---
> drivers/nvme/target/fabrics-cmd.c | 7 +++--
> drivers/nvme/target/nvmet.h | 19 +++++++++++++
> 5 files changed, 121 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
> index 905aba8..9c8a00d 100644
> --- a/drivers/nvme/target/admin-cmd.c
> +++ b/drivers/nvme/target/admin-cmd.c
> @@ -341,6 +341,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
> {
> struct nvmet_ctrl *ctrl = req->sq->ctrl;
> struct nvme_id_ctrl *id;
> + u32 cmd_capsule_size;
> u16 status = 0;
>
> id = kzalloc(sizeof(*id), GFP_KERNEL);
> @@ -433,9 +434,15 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
>
> strlcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
>
> - /* Max command capsule size is sqe + single page of in-capsule data */
> - id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
> - req->port->inline_data_size) / 16);
> + /*
> + * Max command capsule size is sqe + in-capsule data size.
> + * Disable in-capsule data for Metadata capable controllers.
> + */
> + cmd_capsule_size = sizeof(struct nvme_command);
> + if (!ctrl->pi_support)
> + cmd_capsule_size += req->port->inline_data_size;
> + id->ioccsz = cpu_to_le32(cmd_capsule_size / 16);
> +
This seems odd to me; it should probably be referencing attributes of
the transport, which should probably be set based on what transport
port the command was received on.
> /* Max response capsule size is cqe */
> id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
>
> @@ -465,6 +472,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
>
> static void nvmet_execute_identify_ns(struct nvmet_req *req)
> {
> + struct nvmet_ctrl *ctrl = req->sq->ctrl;
> struct nvmet_ns *ns;
> struct nvme_id_ns *id;
> u16 status = 0;
> @@ -482,7 +490,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
> }
>
> /* return an all zeroed buffer if we can't find an active namespace */
> - ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
> + ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
> if (!ns)
> goto done;
>
> @@ -526,6 +534,16 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
>
> id->lbaf[0].ds = ns->blksize_shift;
>
> + if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
> + id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST |
> + NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 |
> + NVME_NS_DPC_PI_TYPE3;
Why is PI_TYPE2 getting set if nvmet_bdev_ns_enable_integrity() won't
set a metadata size?
> + id->mc = NVME_MC_EXTENDED_LBA;
> + id->dps = ns->pi_type;
> + id->flbas = NVME_NS_FLBAS_META_EXT;
I guess this is ok - always requiring extended LBAs. Will this ever
change over time?
> + id->lbaf[0].ms = ns->metadata_size;
> + }
> +
> if (ns->readonly)
> id->nsattr |= (1 << 0);
> nvmet_put_namespace(ns);
...
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index 50dfc60..9f6b0f9 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -322,12 +322,21 @@ int nvmet_enable_port(struct nvmet_port *port)
> if (!try_module_get(ops->owner))
> return -EINVAL;
>
> - ret = ops->add_port(port);
> - if (ret) {
> - module_put(ops->owner);
> - return ret;
> + /*
> + * If the user requested PI support and the transport isn't pi capable,
> + * don't enable the port.
> + */
> + if (port->pi_enable && !ops->metadata_support) {
> + pr_err("T10-PI is not supported by transport type %d\n",
> + port->disc_addr.trtype);
> + ret = -EINVAL;
> + goto out_put;
> }
>
> + ret = ops->add_port(port);
> + if (ret)
> + goto out_put;
> +
ok for now - but it's going to have to be changed later to check the
port attributes, not the transport ops. So it'll go back to add_port,
then validation w/ delete_port if it fails.
> /* If the transport didn't set inline_data_size, then disable it. */
> if (port->inline_data_size < 0)
> port->inline_data_size = 0;
> @@ -335,6 +344,10 @@ int nvmet_enable_port(struct nvmet_port *port)
> port->enabled = true;
> port->tr_ops = ops;
> return 0;
> +
> +out_put:
> + module_put(ops->owner);
> + return ret;
> }
>
> void nvmet_disable_port(struct nvmet_port *port)
> diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
> index 52a6f70..42bd12b 100644
> --- a/drivers/nvme/target/fabrics-cmd.c
> +++ b/drivers/nvme/target/fabrics-cmd.c
> @@ -197,6 +197,8 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
> goto out;
> }
>
> + ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
> +
> uuid_copy(&ctrl->hostid, &d->hostid);
>
> status = nvmet_install_queue(ctrl, req);
> @@ -205,8 +207,9 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
> goto out;
> }
>
> - pr_info("creating controller %d for subsystem %s for NQN %s.\n",
> - ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn);
> + pr_info("creating controller %d for subsystem %s for NQN %s%s.\n",
> + ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn,
> + ctrl->pi_support ? " T10-PI is enabled" : "");
> req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
>
> out:
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index ceedaaf..2986e2c 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -145,6 +145,7 @@ struct nvmet_port {
> bool enabled;
> int inline_data_size;
> const struct nvmet_fabrics_ops *tr_ops;
> + bool pi_enable;
ok - but would have liked to have seen this generalized for metadata
support, then PI in the metadata.
-- james
* [PATCH 14/16] nvmet: add metadata/T10-PI support
2020-05-04 15:57 [PATCH 00/16 v7] nvme-rdma/nvmet-rdma: " Max Gurtovoy
@ 2020-05-04 15:57 ` Max Gurtovoy
2020-05-13 19:51 ` James Smart
0 siblings, 1 reply; 28+ messages in thread
From: Max Gurtovoy @ 2020-05-04 15:57 UTC (permalink / raw)
To: maxg, linux-nvme, kbusch, hch, sagi, martin.petersen, jsmart2021, axboe
Cc: vladimirk, shlomin, israelr, idanb, oren, nitzanc
From: Israel Rukshin <israelr@mellanox.com>
Expose the namespace metadata format when PI is enabled. The user needs
to enable the capability per subsystem and per port. The other metadata
properties are taken from the namespace/bdev.
Usage example:
echo 1 > /config/nvmet/subsystems/${NAME}/attr_pi_enable
echo 1 > /config/nvmet/ports/${PORT_NUM}/param_pi_enable
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/target/admin-cmd.c | 26 +++++++++++++++---
drivers/nvme/target/configfs.c | 58 +++++++++++++++++++++++++++++++++++++++
drivers/nvme/target/core.c | 21 +++++++++++---
drivers/nvme/target/fabrics-cmd.c | 7 +++--
drivers/nvme/target/nvmet.h | 19 +++++++++++++
5 files changed, 121 insertions(+), 10 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 905aba8..9c8a00d 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -341,6 +341,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
{
struct nvmet_ctrl *ctrl = req->sq->ctrl;
struct nvme_id_ctrl *id;
+ u32 cmd_capsule_size;
u16 status = 0;
id = kzalloc(sizeof(*id), GFP_KERNEL);
@@ -433,9 +434,15 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
strlcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
- /* Max command capsule size is sqe + single page of in-capsule data */
- id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
- req->port->inline_data_size) / 16);
+ /*
+ * Max command capsule size is sqe + in-capsule data size.
+ * Disable in-capsule data for Metadata capable controllers.
+ */
+ cmd_capsule_size = sizeof(struct nvme_command);
+ if (!ctrl->pi_support)
+ cmd_capsule_size += req->port->inline_data_size;
+ id->ioccsz = cpu_to_le32(cmd_capsule_size / 16);
+
/* Max response capsule size is cqe */
id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
@@ -465,6 +472,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
static void nvmet_execute_identify_ns(struct nvmet_req *req)
{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
struct nvmet_ns *ns;
struct nvme_id_ns *id;
u16 status = 0;
@@ -482,7 +490,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
}
/* return an all zeroed buffer if we can't find an active namespace */
- ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
+ ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
if (!ns)
goto done;
@@ -526,6 +534,16 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
id->lbaf[0].ds = ns->blksize_shift;
+ if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
+ id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST |
+ NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 |
+ NVME_NS_DPC_PI_TYPE3;
+ id->mc = NVME_MC_EXTENDED_LBA;
+ id->dps = ns->pi_type;
+ id->flbas = NVME_NS_FLBAS_META_EXT;
+ id->lbaf[0].ms = ns->metadata_size;
+ }
+
if (ns->readonly)
id->nsattr |= (1 << 0);
nvmet_put_namespace(ns);
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 58cabd7..42e5f61 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -248,6 +248,36 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
CONFIGFS_ATTR(nvmet_, param_inline_data_size);
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return snprintf(page, PAGE_SIZE, "%d\n", port->pi_enable);
+}
+
+static ssize_t nvmet_param_pi_enable_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+ bool val;
+
+ if (strtobool(page, &val))
+ return -EINVAL;
+
+ if (port->enabled) {
+ pr_err("Disable port before setting pi_enable value.\n");
+ return -EACCES;
+ }
+
+ port->pi_enable = val;
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_pi_enable);
+#endif
+
static ssize_t nvmet_addr_trtype_show(struct config_item *item,
char *page)
{
@@ -987,6 +1017,28 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
}
CONFIGFS_ATTR(nvmet_subsys_, attr_model);
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static ssize_t nvmet_subsys_attr_pi_enable_show(struct config_item *item,
+ char *page)
+{
+ return snprintf(page, PAGE_SIZE, "%d\n", to_subsys(item)->pi_support);
+}
+
+static ssize_t nvmet_subsys_attr_pi_enable_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_subsys *subsys = to_subsys(item);
+ bool pi_enable;
+
+ if (strtobool(page, &pi_enable))
+ return -EINVAL;
+
+ subsys->pi_support = pi_enable;
+ return count;
+}
+CONFIGFS_ATTR(nvmet_subsys_, attr_pi_enable);
+#endif
+
static struct configfs_attribute *nvmet_subsys_attrs[] = {
&nvmet_subsys_attr_attr_allow_any_host,
&nvmet_subsys_attr_attr_version,
@@ -994,6 +1046,9 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
&nvmet_subsys_attr_attr_cntlid_min,
&nvmet_subsys_attr_attr_cntlid_max,
&nvmet_subsys_attr_attr_model,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ &nvmet_subsys_attr_attr_pi_enable,
+#endif
NULL,
};
@@ -1297,6 +1352,9 @@ static void nvmet_port_release(struct config_item *item)
&nvmet_attr_addr_trsvcid,
&nvmet_attr_addr_trtype,
&nvmet_attr_param_inline_data_size,
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ &nvmet_attr_param_pi_enable,
+#endif
NULL,
};
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 50dfc60..9f6b0f9 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -322,12 +322,21 @@ int nvmet_enable_port(struct nvmet_port *port)
if (!try_module_get(ops->owner))
return -EINVAL;
- ret = ops->add_port(port);
- if (ret) {
- module_put(ops->owner);
- return ret;
+ /*
+ * If the user requested PI support and the transport isn't pi capable,
+ * don't enable the port.
+ */
+ if (port->pi_enable && !ops->metadata_support) {
+ pr_err("T10-PI is not supported by transport type %d\n",
+ port->disc_addr.trtype);
+ ret = -EINVAL;
+ goto out_put;
}
+ ret = ops->add_port(port);
+ if (ret)
+ goto out_put;
+
/* If the transport didn't set inline_data_size, then disable it. */
if (port->inline_data_size < 0)
port->inline_data_size = 0;
@@ -335,6 +344,10 @@ int nvmet_enable_port(struct nvmet_port *port)
port->enabled = true;
port->tr_ops = ops;
return 0;
+
+out_put:
+ module_put(ops->owner);
+ return ret;
}
void nvmet_disable_port(struct nvmet_port *port)
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index 52a6f70..42bd12b 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -197,6 +197,8 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
goto out;
}
+ ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
+
uuid_copy(&ctrl->hostid, &d->hostid);
status = nvmet_install_queue(ctrl, req);
@@ -205,8 +207,9 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
goto out;
}
- pr_info("creating controller %d for subsystem %s for NQN %s.\n",
- ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn);
+ pr_info("creating controller %d for subsystem %s for NQN %s%s.\n",
+ ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn,
+ ctrl->pi_support ? " T10-PI is enabled" : "");
req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
out:
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index ceedaaf..2986e2c 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -145,6 +145,7 @@ struct nvmet_port {
bool enabled;
int inline_data_size;
const struct nvmet_fabrics_ops *tr_ops;
+ bool pi_enable;
};
static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
@@ -204,6 +205,7 @@ struct nvmet_ctrl {
spinlock_t error_lock;
u64 err_counter;
struct nvme_error_slot slots[NVMET_ERROR_LOG_SLOTS];
+ bool pi_support;
};
struct nvmet_subsys_model {
@@ -233,6 +235,7 @@ struct nvmet_subsys {
u64 ver;
u64 serial;
char *subsysnqn;
+ bool pi_support;
struct config_group group;
@@ -284,6 +287,7 @@ struct nvmet_fabrics_ops {
unsigned int type;
unsigned int msdbd;
bool has_keyed_sgls : 1;
+ bool metadata_support : 1;
void (*queue_response)(struct nvmet_req *req);
int (*add_port)(struct nvmet_port *port);
void (*remove_port)(struct nvmet_port *port);
@@ -510,6 +514,14 @@ static inline u32 nvmet_rw_data_len(struct nvmet_req *req)
req->ns->blksize_shift;
}
+static inline u32 nvmet_rw_metadata_len(struct nvmet_req *req)
+{
+ if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+ return 0;
+ return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) *
+ req->ns->metadata_size;
+}
+
static inline u32 nvmet_dsm_len(struct nvmet_req *req)
{
return (le32_to_cpu(req->cmd->dsm.nr) + 1) *
@@ -524,4 +536,11 @@ static inline __le16 to0based(u32 a)
return cpu_to_le16(max(1U, min(1U << 16, a)) - 1);
}
+static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
+{
+ if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+ return false;
+ return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
+}
+
#endif /* _NVMET_H */
--
1.8.3.1
end of thread, other threads:[~2020-05-19 14:08 UTC | newest]
Thread overview: 28+ messages
2019-12-02 14:47 [PATCH 00/16 V2] nvme-rdma/nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
2019-12-02 14:47 ` [PATCH] nvme-cli/fabrics: Add pi_enable param to connect cmd Max Gurtovoy
2019-12-02 14:47 ` [PATCH 01/16] nvme: Introduce namespace features flag Max Gurtovoy
2019-12-04 8:41 ` Christoph Hellwig
2019-12-02 14:47 ` [PATCH 02/16] nvme: Enforce extended LBA format for fabrics metadata Max Gurtovoy
2019-12-02 14:47 ` [PATCH 03/16] nvme: Introduce max_integrity_segments ctrl attribute Max Gurtovoy
2019-12-02 14:48 ` [PATCH 04/16] nvme-fabrics: Allow user enabling metadata/T10-PI support Max Gurtovoy
2019-12-02 14:48 ` [PATCH 05/16] nvme: Introduce NVME_INLINE_PROT_SG_CNT Max Gurtovoy
2019-12-02 14:48 ` [PATCH 06/16] nvme-rdma: Introduce nvme_rdma_sgl structure Max Gurtovoy
2019-12-02 14:48 ` [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support Max Gurtovoy
2020-01-22 21:57 ` James Smart
2020-01-23 9:59 ` Max Gurtovoy
2020-01-23 15:34 ` James Smart
2020-01-23 18:52 ` James Smart
2019-12-02 14:48 ` [PATCH 08/16] block: Add a BIP_MAPPED_INTEGRITY check to complete_fn Max Gurtovoy
2019-12-02 14:48 ` [PATCH 09/16] nvmet: Prepare metadata request Max Gurtovoy
2019-12-02 14:48 ` [PATCH 10/16] nvmet: Add metadata characteristics for a namespace Max Gurtovoy
2019-12-02 14:48 ` [PATCH 11/16] nvmet: Rename nvmet_rw_len to nvmet_rw_data_len Max Gurtovoy
2019-12-02 14:48 ` [PATCH 12/16] nvmet: Rename nvmet_check_data_len to nvmet_check_transfer_len Max Gurtovoy
2019-12-02 14:48 ` [PATCH 13/16] nvme: Add Metadata Capabilities enumerations Max Gurtovoy
2019-12-02 14:48 ` [PATCH 14/16] nvmet: Add metadata/T10-PI support Max Gurtovoy
2019-12-02 14:48 ` [PATCH 15/16] nvmet: Add metadata support for block devices Max Gurtovoy
2019-12-02 14:48 ` [PATCH 16/16] nvmet-rdma: Add metadata/T10-PI support Max Gurtovoy
2020-05-04 15:57 [PATCH 00/16 v7] nvme-rdma/nvmet-rdma: " Max Gurtovoy
2020-05-04 15:57 ` [PATCH 14/16] nvmet: add " Max Gurtovoy
2020-05-13 19:51 ` James Smart
2020-05-14 15:09 ` Max Gurtovoy
2020-05-14 15:37 ` James Smart
2020-05-19 14:05 [PATCH 00/16 v8] nvme-rdma/nvmet-rdma: Add " Max Gurtovoy
2020-05-19 14:06 ` [PATCH 14/16] nvmet: add " Max Gurtovoy