linux-nvme.lists.infradead.org archive mirror

* [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging
@ 2020-09-22 12:14 Hannes Reinecke
  2020-09-22 12:14 ` [PATCH 1/7] nvme-fcloop: flush workqueue before calling nvme_fc_unregister_remoteport() Hannes Reinecke
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

Hi all,

there is a long-standing issue with fcloop: it triggers a use-after-free
when removing ports, as the disconnect I/O is still running while the
ports are being removed.
This patchset fixes that issue and also creates a device for each
localport and each nport. With that, the logging is vastly improved, as
we now get readable device names in the logging output instead of
"(NULL device *)".
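
For illustration (hypothetical message, not actual driver output), the
difference in the kernel log looks roughly like this:

	before:  (NULL device *): processing LS request
	after:   fcloop nport1: processing LS request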

Blktests will be sent separately.

As usual, comments and reviews are welcome.

Hannes Reinecke (7):
  nvme-fcloop: flush workqueue before calling
    nvme_fc_unregister_remoteport()
  nvmet-fc: use per-target workqueue when removing associations
  nvme-fcloop: use IDA for port ids
  nvmet-fc: use feature flag for virtual LLDD
  nvme-fc: use feature flag for virtual LLDD
  nvme-fcloop: use a device for nport
  nvme-fcloop: use a device for lport

 drivers/nvme/host/fc.c         |  93 ++++++++++++++++-------------
 drivers/nvme/target/fc.c       | 122 +++++++++++++++++++++++---------------
 drivers/nvme/target/fcloop.c   | 131 ++++++++++++++++++++++++++++++++---------
 include/linux/nvme-fc-driver.h |  12 ++++
 4 files changed, 241 insertions(+), 117 deletions(-)

-- 
2.16.4


* [PATCH 1/7] nvme-fcloop: flush workqueue before calling nvme_fc_unregister_remoteport()
  2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
@ 2020-09-22 12:14 ` Hannes Reinecke
  2020-10-05 17:14   ` James Smart
  2020-09-22 12:14 ` [PATCH 2/7] nvmet-fc: use per-target workqueue when removing associations Hannes Reinecke
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

nvme_fc_unregister_remoteport() sends LS requests, which then end up
on a workqueue for processing. This deadlocks with
fcloop_remoteport_delete(), which tries to flush the very same
workqueue. Flush the workqueue before unregistering the remoteport
instead.
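
For context, a minimal sketch of the deadlock pattern (simplified,
not the fcloop code paths verbatim):

	static void ls_work_fn(struct work_struct *work)
	{
		/* Handling the Disconnect LS drops the final remoteport
		 * reference; the transport then invokes the LLDD's
		 * remoteport_delete callback from this very context ...
		 */
	}

	static void
	fcloop_remoteport_delete(struct nvme_fc_remote_port *remoteport)
	{
		struct fcloop_rport *rport = remoteport->private;

		/* ... where flushing the work item we are currently
		 * running in can never return: deadlock.
		 */
		flush_work(&rport->ls_work);
	}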

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/target/fcloop.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index c97e60b71bbc..082aa4dee406 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -979,7 +979,6 @@ fcloop_remoteport_delete(struct nvme_fc_remote_port *remoteport)
 {
 	struct fcloop_rport *rport = remoteport->private;
 
-	flush_work(&rport->ls_work);
 	fcloop_nport_put(rport->nport);
 }
 
@@ -1313,6 +1312,7 @@ __remoteport_unreg(struct fcloop_nport *nport, struct fcloop_rport *rport)
 	if (!rport)
 		return -EALREADY;
 
+	flush_work(&rport->ls_work);
 	return nvme_fc_unregister_remoteport(rport->remoteport);
 }
 
-- 
2.16.4


* [PATCH 2/7] nvmet-fc: use per-target workqueue when removing associations
  2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
  2020-09-22 12:14 ` [PATCH 1/7] nvme-fcloop: flush workqueue before calling nvme_fc_unregister_remoteport() Hannes Reinecke
@ 2020-09-22 12:14 ` Hannes Reinecke
  2020-10-05 17:18   ` James Smart
  2020-09-22 12:14 ` [PATCH 3/7] nvme-fcloop: use IDA for port ids Hannes Reinecke
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

When removing target ports, all outstanding associations need to be
terminated and cleaned up. As this involves several exchanges on the
wire, a synchronization point is required to establish when these
exchanges have run their course and it is safe to delete the
association. So add a per-target workqueue and flush this workqueue
to ensure the associations can really be deleted.
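
As a sketch of the synchronization pattern (simplified, locking
elided): every association deletion is queued on the per-target
workqueue, and flushing that workqueue is the synchronization point:

	list_for_each_entry_safe(assoc, next, &tgtport->assoc_list, a_list) {
		if (!nvmet_fc_tgt_a_get(assoc))
			continue;
		if (!queue_work(tgtport->work_q, &assoc->del_work))
			/* already deleting - release local reference */
			nvmet_fc_tgt_a_put(assoc);
	}

	/* returns only once every queued del_work has completed */
	flush_workqueue(tgtport->work_q);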

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/target/fc.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 04ec0076ae59..63f5deb3b68a 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -99,6 +99,7 @@ struct nvmet_fc_tgtport {
 	struct list_head		tgt_list; /* nvmet_fc_target_list */
 	struct device			*dev;	/* dev for dma mapping */
 	struct nvmet_fc_target_template	*ops;
+	struct workqueue_struct		*work_q;
 
 	struct nvmet_fc_ls_iod		*iod;
 	spinlock_t			lock;
@@ -1403,10 +1404,17 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 	ida_init(&newrec->assoc_cnt);
 	newrec->max_sg_cnt = template->max_sgl_segments;
 
+	newrec->work_q = alloc_workqueue("ntfc%d", 0, 0,
+					 newrec->fc_target_port.port_num);
+	if (!newrec->work_q) {
+		ret = -ENOMEM;
+		goto out_free_newrec;
+	}
+
 	ret = nvmet_fc_alloc_ls_iodlist(newrec);
 	if (ret) {
 		ret = -ENOMEM;
-		goto out_free_newrec;
+		goto out_free_workq;
 	}
 
 	nvmet_fc_portentry_rebind_tgt(newrec);
@@ -1418,6 +1426,8 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 	*portptr = &newrec->fc_target_port;
 	return 0;
 
+out_free_workq:
+	destroy_workqueue(newrec->work_q);
 out_free_newrec:
 	put_device(dev);
 out_ida_put:
@@ -1443,6 +1453,8 @@ nvmet_fc_free_tgtport(struct kref *ref)
 	list_del(&tgtport->tgt_list);
 	spin_unlock_irqrestore(&nvmet_fc_tgtlock, flags);
 
+	destroy_workqueue(tgtport->work_q);
+
 	nvmet_fc_free_ls_iodlist(tgtport);
 
 	/* let the LLDD know we've finished tearing it down */
@@ -1481,11 +1493,13 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
 				&tgtport->assoc_list, a_list) {
 		if (!nvmet_fc_tgt_a_get(assoc))
 			continue;
-		if (!schedule_work(&assoc->del_work))
+		if (!queue_work(tgtport->work_q, &assoc->del_work))
 			/* already deleting - release local reference */
 			nvmet_fc_tgt_a_put(assoc);
 	}
 	spin_unlock_irqrestore(&tgtport->lock, flags);
+
+	flush_workqueue(tgtport->work_q);
 }
 
 /**
@@ -1536,12 +1550,14 @@ nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
 			continue;
 		assoc->hostport->invalid = 1;
 		noassoc = false;
-		if (!schedule_work(&assoc->del_work))
+		if (!queue_work(tgtport->work_q, &assoc->del_work))
 			/* already deleting - release local reference */
 			nvmet_fc_tgt_a_put(assoc);
 	}
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 
+	flush_workqueue(tgtport->work_q);
+
 	/* if there's nothing to wait for - call the callback */
 	if (noassoc && tgtport->ops->host_release)
 		tgtport->ops->host_release(hosthandle);
@@ -1579,14 +1595,15 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
 		}
 		spin_unlock_irqrestore(&tgtport->lock, flags);
 
-		nvmet_fc_tgtport_put(tgtport);
-
 		if (found_ctrl) {
-			if (!schedule_work(&assoc->del_work))
+			if (!queue_work(tgtport->work_q, &assoc->del_work))
 				/* already deleting - release local reference */
 				nvmet_fc_tgt_a_put(assoc);
+			flush_workqueue(tgtport->work_q);
+			nvmet_fc_tgtport_put(tgtport);
 			return;
 		}
+		nvmet_fc_tgtport_put(tgtport);
 
 		spin_lock_irqsave(&nvmet_fc_tgtlock, flags);
 	}
-- 
2.16.4


* [PATCH 3/7] nvme-fcloop: use IDA for port ids
  2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
  2020-09-22 12:14 ` [PATCH 1/7] nvme-fcloop: flush workqueue before calling nvme_fc_unregister_remoteport() Hannes Reinecke
  2020-09-22 12:14 ` [PATCH 2/7] nvmet-fc: use per-target workqueue when removing associations Hannes Reinecke
@ 2020-09-22 12:14 ` Hannes Reinecke
  2020-10-05 17:33   ` James Smart
  2020-09-22 12:14 ` [PATCH 4/7] nvmet-fc: use feature flag for virtual LLDD Hannes Reinecke
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

Use an IDA to register port IDs, which avoids duplicate port IDs and
also generates a unique port ID if none is specified.
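
The property this relies on (sketch): ida_alloc_range() returns the
smallest free ID within [min, max], so passing the user-supplied
address as the minimum either yields exactly that address (it was
free) or a larger one, in which case the requested address is already
taken and the caller bails out:

	port_id = ida_alloc_range(&fcloop_portid_ida, opts->fcaddr,
				  FCLOOP_MAX_AL_PA, GFP_KERNEL);
	if (port_id < 0)
		return port_id;		/* address range exhausted */
	if ((opts->mask & NVMF_OPT_FCADDR) && opts->fcaddr != port_id)
		return -EINVAL;		/* requested address in use */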

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/target/fcloop.c | 51 +++++++++++++++++++++++++++++++++++++-------
 1 file changed, 43 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 082aa4dee406..e56f323fa7d4 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -12,6 +12,7 @@
 #include <linux/nvme-fc-driver.h>
 #include <linux/nvme-fc.h>
 
+#define FCLOOP_MAX_AL_PA 0xEF
 
 enum {
 	NVMF_OPT_ERR		= 0,
@@ -63,6 +64,9 @@ fcloop_parse_options(struct fcloop_ctrl_options *opts,
 	int token, ret = 0;
 	u64 token64;
 
+	/* Default setting */
+	opts->fcaddr = 1;
+
 	options = o = kstrdup(buf, GFP_KERNEL);
 	if (!options)
 		return -ENOMEM;
@@ -102,6 +106,10 @@ fcloop_parse_options(struct fcloop_ctrl_options *opts,
 				ret = -EINVAL;
 				goto out_free_options;
 			}
+			if (!token || token > FCLOOP_MAX_AL_PA) {
+				ret = -EINVAL;
+				goto out_free_options;
+			}
 			opts->fcaddr = token;
 			break;
 		case NVMF_OPT_LPWWNN:
@@ -203,11 +211,13 @@ fcloop_parse_nm_options(struct device *dev, u64 *nname, u64 *pname,
 static DEFINE_SPINLOCK(fcloop_lock);
 static LIST_HEAD(fcloop_lports);
 static LIST_HEAD(fcloop_nports);
+static DEFINE_IDA(fcloop_portid_ida);
 
 struct fcloop_lport {
 	struct nvme_fc_local_port *localport;
 	struct list_head lport_list;
 	struct completion unreg_done;
+	u32 port_id;
 };
 
 struct fcloop_lport_priv {
@@ -949,6 +959,8 @@ fcloop_nport_free(struct kref *ref)
 	list_del(&nport->nport_list);
 	spin_unlock_irqrestore(&fcloop_lock, flags);
 
+	ida_free(&fcloop_portid_ida, nport->port_id);
+
 	kfree(nport);
 }
 
@@ -1047,7 +1059,7 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
 	struct fcloop_lport *lport;
 	struct fcloop_lport_priv *lport_priv;
 	unsigned long flags;
-	int ret = -ENOMEM;
+	int ret = -ENOMEM, port_id;
 
 	lport = kzalloc(sizeof(*lport), GFP_KERNEL);
 	if (!lport)
@@ -1067,11 +1079,22 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
 		goto out_free_opts;
 	}
 
+	port_id = ida_alloc_range(&fcloop_portid_ida,
+				  opts->fcaddr, FCLOOP_MAX_AL_PA, GFP_KERNEL);
+	if (port_id < 0) {
+		ret = port_id;
+		goto out_free_opts;
+	}
+	if ((opts->mask & NVMF_OPT_FCADDR) && opts->fcaddr != port_id) {
+		ret = -EINVAL;
+		goto out_free_ida;
+	}
+
 	memset(&pinfo, 0, sizeof(pinfo));
 	pinfo.node_name = opts->wwnn;
 	pinfo.port_name = opts->wwpn;
 	pinfo.port_role = opts->roles;
-	pinfo.port_id = opts->fcaddr;
+	pinfo.port_id = port_id;
 
 	ret = nvme_fc_register_localport(&pinfo, &fctemplate, NULL, &localport);
 	if (!ret) {
@@ -1086,7 +1109,9 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
 		list_add_tail(&lport->lport_list, &fcloop_lports);
 		spin_unlock_irqrestore(&fcloop_lock, flags);
 	}
-
+out_free_ida:
+	if (ret)
+		ida_free(&fcloop_portid_ida, port_id);
 out_free_opts:
 	kfree(opts);
 out_free_lport:
@@ -1115,6 +1140,8 @@ __wait_localport_unreg(struct fcloop_lport *lport)
 
 	wait_for_completion(&lport->unreg_done);
 
+	ida_free(&fcloop_portid_ida, lport->port_id);
+
 	kfree(lport);
 
 	return ret;
@@ -1162,7 +1189,7 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
 	struct fcloop_ctrl_options *opts;
 	unsigned long flags;
 	u32 opts_mask = (remoteport) ? RPORT_OPTS : TGTPORT_OPTS;
-	int ret;
+	int ret, port_id;
 
 	opts = kzalloc(sizeof(*opts), GFP_KERNEL);
 	if (!opts)
@@ -1182,13 +1209,21 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
 	if (!newnport)
 		goto out_free_opts;
 
+	port_id = ida_alloc_range(&fcloop_portid_ida,
+				  opts->fcaddr, FCLOOP_MAX_AL_PA, GFP_KERNEL);
+	if (port_id < 0)
+		goto out_free_newnport;
+
+	if ((opts->mask & NVMF_OPT_FCADDR) && opts->fcaddr != port_id)
+		goto out_free_ida;
+
 	INIT_LIST_HEAD(&newnport->nport_list);
 	newnport->node_name = opts->wwnn;
 	newnport->port_name = opts->wwpn;
+	newnport->port_id = port_id;
 	if (opts->mask & NVMF_OPT_ROLES)
 		newnport->port_role = opts->roles;
-	if (opts->mask & NVMF_OPT_FCADDR)
-		newnport->port_id = opts->fcaddr;
+
 	kref_init(&newnport->ref);
 
 	spin_lock_irqsave(&fcloop_lock, flags);
@@ -1226,8 +1261,6 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
 				nport->lport = lport;
 			if (opts->mask & NVMF_OPT_ROLES)
 				nport->port_role = opts->roles;
-			if (opts->mask & NVMF_OPT_FCADDR)
-				nport->port_id = opts->fcaddr;
 			goto out_free_newnport;
 		}
 	}
@@ -1241,6 +1274,8 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
 
 out_invalid_opts:
 	spin_unlock_irqrestore(&fcloop_lock, flags);
+out_free_ida:
+	ida_free(&fcloop_portid_ida, port_id);
 out_free_newnport:
 	kfree(newnport);
 out_free_opts:
-- 
2.16.4


* [PATCH 4/7] nvmet-fc: use feature flag for virtual LLDD
  2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
                   ` (2 preceding siblings ...)
  2020-09-22 12:14 ` [PATCH 3/7] nvme-fcloop: use IDA for port ids Hannes Reinecke
@ 2020-09-22 12:14 ` Hannes Reinecke
  2020-10-05 17:37   ` James Smart
  2020-09-22 12:14 ` [PATCH 5/7] nvme-fc: " Hannes Reinecke
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

Virtual LLDDs like fcloop don't need to do DMA, but might still want
to expose a device. So add a new feature flag to mark these LLDDs
instead of relying on a non-existent
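
In short, the "is this a virtual LLDD?" decision moves from a NULL
pointer test to an explicit capability bit; roughly (sketch):

	/* before: infer "virtual" from a missing device */
	return dev ? dma_map_single(dev, ptr, size, dir) : (dma_addr_t)0L;

	/* after: test the advertised feature flag */
	if (tgtport->ops->target_features & NVMET_FCTGTFEAT_VIRTUAL_DMA)
		return (dma_addr_t)0L;
	return dma_map_single(tgtport->dev, ptr, size, dir);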

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/target/fc.c       | 93 +++++++++++++++++++++++-------------------
 drivers/nvme/target/fcloop.c   |  2 +-
 include/linux/nvme-fc-driver.h |  2 +
 3 files changed, 55 insertions(+), 42 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 63f5deb3b68a..6f5784767d35 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -273,41 +273,50 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
  * in the scatter list, setting all dma addresses to 0.
  */
 
+static bool fc_lldd_is_virtual(struct nvmet_fc_tgtport *tgtport)
+{
+	return !!(tgtport->ops->target_features & NVMET_FCTGTFEAT_VIRTUAL_DMA);
+}
+
 static inline dma_addr_t
-fc_dma_map_single(struct device *dev, void *ptr, size_t size,
+fc_dma_map_single(struct nvmet_fc_tgtport *tgtport, void *ptr, size_t size,
 		enum dma_data_direction dir)
 {
-	return dev ? dma_map_single(dev, ptr, size, dir) : (dma_addr_t)0L;
+	if (fc_lldd_is_virtual(tgtport))
+		return (dma_addr_t)0L;
+	return dma_map_single(tgtport->dev, ptr, size, dir);
 }
 
 static inline int
-fc_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+fc_dma_mapping_error(struct nvmet_fc_tgtport *tgtport, dma_addr_t dma_addr)
 {
-	return dev ? dma_mapping_error(dev, dma_addr) : 0;
+	if (fc_lldd_is_virtual(tgtport))
+		return 0;
+	return dma_mapping_error(tgtport->dev, dma_addr);
 }
 
 static inline void
-fc_dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
-	enum dma_data_direction dir)
+fc_dma_unmap_single(struct nvmet_fc_tgtport *tgtport, dma_addr_t addr,
+	size_t size, enum dma_data_direction dir)
 {
-	if (dev)
-		dma_unmap_single(dev, addr, size, dir);
+	if (!fc_lldd_is_virtual(tgtport))
+		dma_unmap_single(tgtport->dev, addr, size, dir);
 }
 
 static inline void
-fc_dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size,
-		enum dma_data_direction dir)
+fc_dma_sync_single_for_cpu(struct nvmet_fc_tgtport *tgtport, dma_addr_t addr,
+	size_t size, enum dma_data_direction dir)
 {
-	if (dev)
-		dma_sync_single_for_cpu(dev, addr, size, dir);
+	if (!fc_lldd_is_virtual(tgtport))
+		dma_sync_single_for_cpu(tgtport->dev, addr, size, dir);
 }
 
 static inline void
-fc_dma_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size,
-		enum dma_data_direction dir)
+fc_dma_sync_single_for_device(struct nvmet_fc_tgtport *tgtport, dma_addr_t addr,
+	size_t size, enum dma_data_direction dir)
 {
-	if (dev)
-		dma_sync_single_for_device(dev, addr, size, dir);
+	if (!fc_lldd_is_virtual(tgtport))
+		dma_sync_single_for_device(tgtport->dev, addr, size, dir);
 }
 
 /* pseudo dma_map_sg call */
@@ -329,18 +338,20 @@ fc_map_sg(struct scatterlist *sg, int nents)
 }
 
 static inline int
-fc_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		enum dma_data_direction dir)
+fc_dma_map_sg(struct nvmet_fc_tgtport *tgtport, struct scatterlist *sg,
+	int nents, enum dma_data_direction dir)
 {
-	return dev ? dma_map_sg(dev, sg, nents, dir) : fc_map_sg(sg, nents);
+	if (fc_lldd_is_virtual(tgtport))
+		return fc_map_sg(sg, nents);
+	return dma_map_sg(tgtport->dev, sg, nents, dir);
 }
 
 static inline void
-fc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
-		enum dma_data_direction dir)
+fc_dma_unmap_sg(struct nvmet_fc_tgtport *tgtport, struct scatterlist *sg,
+	int nents, enum dma_data_direction dir)
 {
-	if (dev)
-		dma_unmap_sg(dev, sg, nents, dir);
+	if (!fc_lldd_is_virtual(tgtport))
+		dma_unmap_sg(tgtport->dev, sg, nents, dir);
 }
 
 
@@ -368,7 +379,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
 
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 
-	fc_dma_unmap_single(tgtport->dev, lsreq->rqstdma,
+	fc_dma_unmap_single(tgtport, lsreq->rqstdma,
 				  (lsreq->rqstlen + lsreq->rsplen),
 				  DMA_BIDIRECTIONAL);
 
@@ -391,10 +402,10 @@ __nvmet_fc_send_ls_req(struct nvmet_fc_tgtport *tgtport,
 	lsop->req_queued = false;
 	INIT_LIST_HEAD(&lsop->lsreq_list);
 
-	lsreq->rqstdma = fc_dma_map_single(tgtport->dev, lsreq->rqstaddr,
+	lsreq->rqstdma = fc_dma_map_single(tgtport, lsreq->rqstaddr,
 				  lsreq->rqstlen + lsreq->rsplen,
 				  DMA_BIDIRECTIONAL);
-	if (fc_dma_mapping_error(tgtport->dev, lsreq->rqstdma))
+	if (fc_dma_mapping_error(tgtport, lsreq->rqstdma))
 		return -EFAULT;
 
 	lsreq->rspdma = lsreq->rqstdma + lsreq->rqstlen;
@@ -420,7 +431,7 @@ __nvmet_fc_send_ls_req(struct nvmet_fc_tgtport *tgtport,
 	lsop->req_queued = false;
 	list_del(&lsop->lsreq_list);
 	spin_unlock_irqrestore(&tgtport->lock, flags);
-	fc_dma_unmap_single(tgtport->dev, lsreq->rqstdma,
+	fc_dma_unmap_single(tgtport, lsreq->rqstdma,
 				  (lsreq->rqstlen + lsreq->rsplen),
 				  DMA_BIDIRECTIONAL);
 	return ret;
@@ -555,10 +566,10 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 
 		iod->rspbuf = (union nvmefc_ls_responses *)&iod->rqstbuf[1];
 
-		iod->rspdma = fc_dma_map_single(tgtport->dev, iod->rspbuf,
+		iod->rspdma = fc_dma_map_single(tgtport, iod->rspbuf,
 						sizeof(*iod->rspbuf),
 						DMA_TO_DEVICE);
-		if (fc_dma_mapping_error(tgtport->dev, iod->rspdma))
+		if (fc_dma_mapping_error(tgtport, iod->rspdma))
 			goto out_fail;
 	}
 
@@ -568,7 +579,7 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 	kfree(iod->rqstbuf);
 	list_del(&iod->ls_rcv_list);
 	for (iod--, i--; i >= 0; iod--, i--) {
-		fc_dma_unmap_single(tgtport->dev, iod->rspdma,
+		fc_dma_unmap_single(tgtport, iod->rspdma,
 				sizeof(*iod->rspbuf), DMA_TO_DEVICE);
 		kfree(iod->rqstbuf);
 		list_del(&iod->ls_rcv_list);
@@ -586,7 +597,7 @@ nvmet_fc_free_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 	int i;
 
 	for (i = 0; i < NVMET_LS_CTX_COUNT; iod++, i++) {
-		fc_dma_unmap_single(tgtport->dev,
+		fc_dma_unmap_single(tgtport,
 				iod->rspdma, sizeof(*iod->rspbuf),
 				DMA_TO_DEVICE);
 		kfree(iod->rqstbuf);
@@ -640,12 +651,12 @@ nvmet_fc_prep_fcp_iodlist(struct nvmet_fc_tgtport *tgtport,
 		list_add_tail(&fod->fcp_list, &queue->fod_list);
 		spin_lock_init(&fod->flock);
 
-		fod->rspdma = fc_dma_map_single(tgtport->dev, &fod->rspiubuf,
+		fod->rspdma = fc_dma_map_single(tgtport, &fod->rspiubuf,
 					sizeof(fod->rspiubuf), DMA_TO_DEVICE);
-		if (fc_dma_mapping_error(tgtport->dev, fod->rspdma)) {
+		if (fc_dma_mapping_error(tgtport, fod->rspdma)) {
 			list_del(&fod->fcp_list);
 			for (fod--, i--; i >= 0; fod--, i--) {
-				fc_dma_unmap_single(tgtport->dev, fod->rspdma,
+				fc_dma_unmap_single(tgtport, fod->rspdma,
 						sizeof(fod->rspiubuf),
 						DMA_TO_DEVICE);
 				fod->rspdma = 0L;
@@ -666,7 +677,7 @@ nvmet_fc_destroy_fcp_iodlist(struct nvmet_fc_tgtport *tgtport,
 
 	for (i = 0; i < queue->sqsize; fod++, i++) {
 		if (fod->rspdma)
-			fc_dma_unmap_single(tgtport->dev, fod->rspdma,
+			fc_dma_unmap_single(tgtport, fod->rspdma,
 				sizeof(fod->rspiubuf), DMA_TO_DEVICE);
 	}
 }
@@ -730,7 +741,7 @@ nvmet_fc_free_fcp_iod(struct nvmet_fc_tgt_queue *queue,
 	struct nvmet_fc_defer_fcp_req *deferfcp;
 	unsigned long flags;
 
-	fc_dma_sync_single_for_cpu(tgtport->dev, fod->rspdma,
+	fc_dma_sync_single_for_cpu(tgtport, fod->rspdma,
 				sizeof(fod->rspiubuf), DMA_TO_DEVICE);
 
 	fcpreq->nvmet_fc_private = NULL;
@@ -1925,7 +1936,7 @@ nvmet_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
 	struct nvmet_fc_ls_iod *iod = lsrsp->nvme_fc_private;
 	struct nvmet_fc_tgtport *tgtport = iod->tgtport;
 
-	fc_dma_sync_single_for_cpu(tgtport->dev, iod->rspdma,
+	fc_dma_sync_single_for_cpu(tgtport, iod->rspdma,
 				sizeof(*iod->rspbuf), DMA_TO_DEVICE);
 	nvmet_fc_free_ls_iod(tgtport, iod);
 	nvmet_fc_tgtport_put(tgtport);
@@ -1937,7 +1948,7 @@ nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 {
 	int ret;
 
-	fc_dma_sync_single_for_device(tgtport->dev, iod->rspdma,
+	fc_dma_sync_single_for_device(tgtport, iod->rspdma,
 				  sizeof(*iod->rspbuf), DMA_TO_DEVICE);
 
 	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsrsp);
@@ -2091,7 +2102,7 @@ nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 
 	fod->data_sg = sg;
 	fod->data_sg_cnt = nent;
-	fod->data_sg_cnt = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
+	fod->data_sg_cnt = fc_dma_map_sg(fod->tgtport, sg, nent,
 				((fod->io_dir == NVMET_FCP_WRITE) ?
 					DMA_FROM_DEVICE : DMA_TO_DEVICE));
 				/* note: write from initiator perspective */
@@ -2109,7 +2120,7 @@ nvmet_fc_free_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 	if (!fod->data_sg || !fod->data_sg_cnt)
 		return;
 
-	fc_dma_unmap_sg(fod->tgtport->dev, fod->data_sg, fod->data_sg_cnt,
+	fc_dma_unmap_sg(fod->tgtport, fod->data_sg, fod->data_sg_cnt,
 				((fod->io_dir == NVMET_FCP_WRITE) ?
 					DMA_FROM_DEVICE : DMA_TO_DEVICE));
 	sgl_free(fod->data_sg);
@@ -2193,7 +2204,7 @@ nvmet_fc_prep_fcp_rsp(struct nvmet_fc_tgtport *tgtport,
 		fod->fcpreq->rsplen = sizeof(*ersp);
 	}
 
-	fc_dma_sync_single_for_device(tgtport->dev, fod->rspdma,
+	fc_dma_sync_single_for_device(tgtport, fod->rspdma,
 				  sizeof(fod->rspiubuf), DMA_TO_DEVICE);
 }
 
diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index e56f323fa7d4..2ccb941efb21 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -1043,7 +1043,7 @@ static struct nvmet_fc_target_template tgttemplate = {
 	.max_dif_sgl_segments	= FCLOOP_SGL_SEGS,
 	.dma_boundary		= FCLOOP_DMABOUND_4G,
 	/* optional features */
-	.target_features	= 0,
+	.target_features	= NVMET_FCTGTFEAT_VIRTUAL_DMA,
 	/* sizes of additional private data for data structures */
 	.target_priv_sz		= sizeof(struct fcloop_tport),
 	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index 2a38f2b477a5..675c7ef6df17 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -707,6 +707,8 @@ enum {
 		 * sequence in one LLDD operation. Errors during Data
 		 * sequence transmit must not allow RSP sequence to be sent.
 		 */
+	NVMET_FCTGTFEAT_VIRTUAL_DMA = (1 << 1),
+		/* Bit 1: Virtual LLDD with no DMA support */
 };
 
 
-- 
2.16.4


* [PATCH 5/7] nvme-fc: use feature flag for virtual LLDD
  2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
                   ` (3 preceding siblings ...)
  2020-09-22 12:14 ` [PATCH 4/7] nvmet-fc: use feature flag for virtual LLDD Hannes Reinecke
@ 2020-09-22 12:14 ` Hannes Reinecke
  2020-10-05 17:38   ` James Smart
  2020-09-22 12:15 ` [PATCH 6/7] nvme-fcloop: use a device for nport Hannes Reinecke
  2020-09-22 12:15 ` [PATCH 7/7] nvme-fcloop: use a device for lport Hannes Reinecke
  6 siblings, 1 reply; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

Virtual LLDDs like fcloop don't need to do DMA, but might still want
to expose a device. So add a new feature flag to mark these LLDDs
instead of relying on a non-existent
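
On the initiator side the flag is advertised in the port template; a
virtual LLDD sets it there, while real hardware drivers simply leave
port_features at 0 (sketch of a hypothetical template):

	static struct nvme_fc_port_template my_virtual_template = {
		/* ... mandatory entrypoints and private-data sizes ... */
		.port_features	= NVME_FCPORTFEAT_VIRTUAL_DMA,
	};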

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/fc.c         | 93 +++++++++++++++++++++++-------------------
 drivers/nvme/target/fcloop.c   |  2 +
 include/linux/nvme-fc-driver.h | 10 +++++
 3 files changed, 64 insertions(+), 41 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index e8ef42b9d50c..d30ea7a39183 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -237,6 +237,11 @@ static void __nvme_fc_delete_hw_queue(struct nvme_fc_ctrl *,
 static void nvme_fc_handle_ls_rqst_work(struct work_struct *work);
 
 
+static bool nvme_fc_lport_is_physical(struct nvme_fc_lport *lport)
+{
+	return !(lport->ops->port_features & NVME_FCPORTFEAT_VIRTUAL_DMA);
+}
+
 static void
 nvme_fc_free_lport(struct kref *ref)
 {
@@ -427,7 +432,7 @@ nvme_fc_register_localport(struct nvme_fc_port_info *pinfo,
 	list_add_tail(&newrec->port_list, &nvme_fc_lport_list);
 	spin_unlock_irqrestore(&nvme_fc_lock, flags);
 
-	if (dev)
+	if (nvme_fc_lport_is_physical(newrec))
 		dma_set_seg_boundary(dev, template->dma_boundary);
 
 	*portptr = &newrec->localport;
@@ -953,40 +958,44 @@ EXPORT_SYMBOL_GPL(nvme_fc_set_remoteport_devloss);
  */
 
 static inline dma_addr_t
-fc_dma_map_single(struct device *dev, void *ptr, size_t size,
+fc_dma_map_single(struct nvme_fc_lport *lport, void *ptr, size_t size,
 		enum dma_data_direction dir)
 {
-	return dev ? dma_map_single(dev, ptr, size, dir) : (dma_addr_t)0L;
+	if (nvme_fc_lport_is_physical(lport))
+		return dma_map_single(lport->dev, ptr, size, dir);
+	return (dma_addr_t)0L;
 }
 
 static inline int
-fc_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+fc_dma_mapping_error(struct nvme_fc_lport *lport, dma_addr_t dma_addr)
 {
-	return dev ? dma_mapping_error(dev, dma_addr) : 0;
+	if (nvme_fc_lport_is_physical(lport))
+		return dma_mapping_error(lport->dev, dma_addr);
+	return 0;
 }
 
 static inline void
-fc_dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
+fc_dma_unmap_single(struct nvme_fc_lport *lport, dma_addr_t addr, size_t size,
 	enum dma_data_direction dir)
 {
-	if (dev)
-		dma_unmap_single(dev, addr, size, dir);
+	if (nvme_fc_lport_is_physical(lport))
+		dma_unmap_single(lport->dev, addr, size, dir);
 }
 
 static inline void
-fc_dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size,
-		enum dma_data_direction dir)
+fc_dma_sync_single_for_cpu(struct nvme_fc_lport *lport, dma_addr_t addr,
+	size_t size, enum dma_data_direction dir)
 {
-	if (dev)
-		dma_sync_single_for_cpu(dev, addr, size, dir);
+	if (nvme_fc_lport_is_physical(lport))
+		dma_sync_single_for_cpu(lport->dev, addr, size, dir);
 }
 
 static inline void
-fc_dma_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size,
-		enum dma_data_direction dir)
+fc_dma_sync_single_for_device(struct nvme_fc_lport *lport, dma_addr_t addr,
+	size_t size, enum dma_data_direction dir)
 {
-	if (dev)
-		dma_sync_single_for_device(dev, addr, size, dir);
+	if (nvme_fc_lport_is_physical(lport))
+		dma_sync_single_for_device(lport->dev, addr, size, dir);
 }
 
 /* pseudo dma_map_sg call */
@@ -1008,18 +1017,20 @@ fc_map_sg(struct scatterlist *sg, int nents)
 }
 
 static inline int
-fc_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+fc_dma_map_sg(struct nvme_fc_lport *lport, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir)
 {
-	return dev ? dma_map_sg(dev, sg, nents, dir) : fc_map_sg(sg, nents);
+	if (nvme_fc_lport_is_physical(lport))
+		return dma_map_sg(lport->dev, sg, nents, dir);
+	return fc_map_sg(sg, nents);
 }
 
 static inline void
-fc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
+fc_dma_unmap_sg(struct nvme_fc_lport *lport, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir)
 {
-	if (dev)
-		dma_unmap_sg(dev, sg, nents, dir);
+	if (nvme_fc_lport_is_physical(lport))
+		dma_unmap_sg(lport->dev, sg, nents, dir);
 }
 
 /* *********************** FC-NVME LS Handling **************************** */
@@ -1049,7 +1060,7 @@ __nvme_fc_finish_ls_req(struct nvmefc_ls_req_op *lsop)
 
 	spin_unlock_irqrestore(&rport->lock, flags);
 
-	fc_dma_unmap_single(rport->dev, lsreq->rqstdma,
+	fc_dma_unmap_single(rport->lport, lsreq->rqstdma,
 				  (lsreq->rqstlen + lsreq->rsplen),
 				  DMA_BIDIRECTIONAL);
 
@@ -1077,10 +1088,10 @@ __nvme_fc_send_ls_req(struct nvme_fc_rport *rport,
 	INIT_LIST_HEAD(&lsop->lsreq_list);
 	init_completion(&lsop->ls_done);
 
-	lsreq->rqstdma = fc_dma_map_single(rport->dev, lsreq->rqstaddr,
+	lsreq->rqstdma = fc_dma_map_single(rport->lport, lsreq->rqstaddr,
 				  lsreq->rqstlen + lsreq->rsplen,
 				  DMA_BIDIRECTIONAL);
-	if (fc_dma_mapping_error(rport->dev, lsreq->rqstdma)) {
+	if (fc_dma_mapping_error(rport->lport, lsreq->rqstdma)) {
 		ret = -EFAULT;
 		goto out_putrport;
 	}
@@ -1107,7 +1118,7 @@ __nvme_fc_send_ls_req(struct nvme_fc_rport *rport,
 	lsop->req_queued = false;
 	list_del(&lsop->lsreq_list);
 	spin_unlock_irqrestore(&rport->lock, flags);
-	fc_dma_unmap_single(rport->dev, lsreq->rqstdma,
+	fc_dma_unmap_single(rport->lport, lsreq->rqstdma,
 				  (lsreq->rqstlen + lsreq->rsplen),
 				  DMA_BIDIRECTIONAL);
 out_putrport:
@@ -1465,9 +1476,9 @@ nvme_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
 	list_del(&lsop->lsrcv_list);
 	spin_unlock_irqrestore(&rport->lock, flags);
 
-	fc_dma_sync_single_for_cpu(lport->dev, lsop->rspdma,
+	fc_dma_sync_single_for_cpu(lport, lsop->rspdma,
 				sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
-	fc_dma_unmap_single(lport->dev, lsop->rspdma,
+	fc_dma_unmap_single(lport, lsop->rspdma,
 			sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
 
 	kfree(lsop);
@@ -1483,7 +1494,7 @@ nvme_fc_xmt_ls_rsp(struct nvmefc_ls_rcv_op *lsop)
 	struct fcnvme_ls_rqst_w0 *w0 = &lsop->rqstbuf->w0;
 	int ret;
 
-	fc_dma_sync_single_for_device(lport->dev, lsop->rspdma,
+	fc_dma_sync_single_for_device(lport, lsop->rspdma,
 				  sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
 
 	ret = lport->ops->xmt_ls_rsp(&lport->localport, &rport->remoteport,
@@ -1761,10 +1772,10 @@ nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
 	lsop->rqstbuf = (union nvmefc_ls_requests *)&lsop[1];
 	lsop->rspbuf = (union nvmefc_ls_responses *)&lsop->rqstbuf[1];
 
-	lsop->rspdma = fc_dma_map_single(lport->dev, lsop->rspbuf,
+	lsop->rspdma = fc_dma_map_single(lport, lsop->rspbuf,
 					sizeof(*lsop->rspbuf),
 					DMA_TO_DEVICE);
-	if (fc_dma_mapping_error(lport->dev, lsop->rspdma)) {
+	if (fc_dma_mapping_error(lport, lsop->rspdma)) {
 		dev_info(lport->dev,
 			"RCV %s LS failed: DMA mapping failure\n",
 			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
@@ -1793,7 +1804,7 @@ nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
 	return 0;
 
 out_unmap:
-	fc_dma_unmap_single(lport->dev, lsop->rspdma,
+	fc_dma_unmap_single(lport, lsop->rspdma,
 			sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
 out_free:
 	kfree(lsop);
@@ -1810,9 +1821,9 @@ static void
 __nvme_fc_exit_request(struct nvme_fc_ctrl *ctrl,
 		struct nvme_fc_fcp_op *op)
 {
-	fc_dma_unmap_single(ctrl->lport->dev, op->fcp_req.rspdma,
+	fc_dma_unmap_single(ctrl->lport, op->fcp_req.rspdma,
 				sizeof(op->rsp_iu), DMA_FROM_DEVICE);
-	fc_dma_unmap_single(ctrl->lport->dev, op->fcp_req.cmddma,
+	fc_dma_unmap_single(ctrl->lport, op->fcp_req.cmddma,
 				sizeof(op->cmd_iu), DMA_TO_DEVICE);
 
 	atomic_set(&op->state, FCPOP_STATE_UNINIT);
@@ -1936,7 +1947,7 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
 
 	opstate = atomic_xchg(&op->state, FCPOP_STATE_COMPLETE);
 
-	fc_dma_sync_single_for_cpu(ctrl->lport->dev, op->fcp_req.rspdma,
+	fc_dma_sync_single_for_cpu(ctrl->lport, op->fcp_req.rspdma,
 				sizeof(op->rsp_iu), DMA_FROM_DEVICE);
 
 	if (opstate == FCPOP_STATE_ABORTED)
@@ -2073,19 +2084,19 @@ __nvme_fc_init_request(struct nvme_fc_ctrl *ctrl,
 	else
 		cmdiu->rsv_cat = fccmnd_set_cat_admin(0);
 
-	op->fcp_req.cmddma = fc_dma_map_single(ctrl->lport->dev,
+	op->fcp_req.cmddma = fc_dma_map_single(ctrl->lport,
 				&op->cmd_iu, sizeof(op->cmd_iu), DMA_TO_DEVICE);
-	if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.cmddma)) {
+	if (fc_dma_mapping_error(ctrl->lport, op->fcp_req.cmddma)) {
 		dev_err(ctrl->dev,
 			"FCP Op failed - cmdiu dma mapping failed.\n");
 		ret = -EFAULT;
 		goto out_on_error;
 	}
 
-	op->fcp_req.rspdma = fc_dma_map_single(ctrl->lport->dev,
+	op->fcp_req.rspdma = fc_dma_map_single(ctrl->lport,
 				&op->rsp_iu, sizeof(op->rsp_iu),
 				DMA_FROM_DEVICE);
-	if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.rspdma)) {
+	if (fc_dma_mapping_error(ctrl->lport, op->fcp_req.rspdma)) {
 		dev_err(ctrl->dev,
 			"FCP Op failed - rspiu dma mapping failed.\n");
 		ret = -EFAULT;
@@ -2485,7 +2496,7 @@ nvme_fc_map_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
 
 	op->nents = blk_rq_map_sg(rq->q, rq, freq->sg_table.sgl);
 	WARN_ON(op->nents > blk_rq_nr_phys_segments(rq));
-	freq->sg_cnt = fc_dma_map_sg(ctrl->lport->dev, freq->sg_table.sgl,
+	freq->sg_cnt = fc_dma_map_sg(ctrl->lport, freq->sg_table.sgl,
 				op->nents, rq_dma_dir(rq));
 	if (unlikely(freq->sg_cnt <= 0)) {
 		sg_free_table_chained(&freq->sg_table, NVME_INLINE_SG_CNT);
@@ -2508,7 +2519,7 @@ nvme_fc_unmap_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
 	if (!freq->sg_cnt)
 		return;
 
-	fc_dma_unmap_sg(ctrl->lport->dev, freq->sg_table.sgl, op->nents,
+	fc_dma_unmap_sg(ctrl->lport, freq->sg_table.sgl, op->nents,
 			rq_dma_dir(rq));
 
 	sg_free_table_chained(&freq->sg_table, NVME_INLINE_SG_CNT);
@@ -2609,7 +2620,7 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
 		}
 	}
 
-	fc_dma_sync_single_for_device(ctrl->lport->dev, op->fcp_req.cmddma,
+	fc_dma_sync_single_for_device(ctrl->lport, op->fcp_req.cmddma,
 				  sizeof(op->cmd_iu), DMA_TO_DEVICE);
 
 	atomic_set(&op->state, FCPOP_STATE_ACTIVE);
diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 2ccb941efb21..7dab98545979 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -1026,6 +1026,8 @@ static struct nvme_fc_port_template fctemplate = {
 	.remote_priv_sz		= sizeof(struct fcloop_rport),
 	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
 	.fcprqst_priv_sz	= sizeof(struct fcloop_ini_fcpreq),
+	/* optional features */
+	.port_features		= NVME_FCPORTFEAT_VIRTUAL_DMA,
 };
 
 static struct nvmet_fc_target_template tgttemplate = {
diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index 675c7ef6df17..17cf967484b2 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -334,6 +334,11 @@ struct nvme_fc_remote_port {
 	enum nvme_fc_obj_state port_state;
 } __aligned(sizeof(u64));	/* alignment for other things alloc'd with */
 
+/* Port Features (Bit fields) LLDD supports */
+enum {
+	NVME_FCPORTFEAT_VIRTUAL_DMA = (1 << 0),
+		/* Bit 0: Virtual LLDD with no DMA support */
+};
 
 /**
  * struct nvme_fc_port_template - structure containing static entrypoints and
@@ -470,6 +475,8 @@ struct nvme_fc_remote_port {
  *       memory area solely for the of the LLDD and its location is
  *       specified by the fcp_request->private pointer.
  *       Value is Mandatory. Allowed to be zero.
+ *
+ * @port_features: Optional feature for the LLDD
  */
 struct nvme_fc_port_template {
 	/* initiator-based functions */
@@ -508,6 +515,9 @@ struct nvme_fc_port_template {
 	u32	remote_priv_sz;
 	u32	lsrqst_priv_sz;
 	u32	fcprqst_priv_sz;
+
+	/* Optional port features */
+	u32	port_features;
 };
 
 
-- 
2.16.4


* [PATCH 6/7] nvme-fcloop: use a device for nport
  2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
                   ` (4 preceding siblings ...)
  2020-09-22 12:14 ` [PATCH 5/7] nvme-fc: " Hannes Reinecke
@ 2020-09-22 12:15 ` Hannes Reinecke
  2020-10-05 17:41   ` James Smart
  2020-09-22 12:15 ` [PATCH 7/7] nvme-fcloop: use a device for lport Hannes Reinecke
  6 siblings, 1 reply; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

Add a 'struct device' to the nport so that the nport shows up in
sysfs and we get a valid device name in the logging output.
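
The kref is replaced by the embedded device's reference count, and the
lifetime follows the standard driver-core pattern (sketch, error
handling elided):

	newnport->dev.class = fcloop_class;
	newnport->dev.release = fcloop_release_nport;	/* frees the nport */
	dev_set_name(&newnport->dev, "nport%d", newnport->port_id);
	device_initialize(&newnport->dev);	/* refcount starts at 1 */
	device_add(&newnport->dev);		/* becomes visible in sysfs */

	/* and on teardown, once both rport and tport are gone: */
	device_del(&newnport->dev);		/* removed from sysfs */
	put_device(&newnport->dev);		/* last ref -> ->release() */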

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/target/fcloop.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 7dab98545979..b84f8754b3a3 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -245,11 +245,11 @@ struct fcloop_tport {
 };
 
 struct fcloop_nport {
+	struct device dev;
 	struct fcloop_rport *rport;
 	struct fcloop_tport *tport;
 	struct fcloop_lport *lport;
 	struct list_head nport_list;
-	struct kref ref;
 	u64 node_name;
 	u64 port_name;
 	u32 port_role;
@@ -949,10 +949,10 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
 }
 
 static void
-fcloop_nport_free(struct kref *ref)
+fcloop_release_nport(struct device *dev)
 {
 	struct fcloop_nport *nport =
-		container_of(ref, struct fcloop_nport, ref);
+		container_of(dev, struct fcloop_nport, dev);
 	unsigned long flags;
 
 	spin_lock_irqsave(&fcloop_lock, flags);
@@ -967,13 +967,13 @@ fcloop_nport_free(struct kref *ref)
 static void
 fcloop_nport_put(struct fcloop_nport *nport)
 {
-	kref_put(&nport->ref, fcloop_nport_free);
+	put_device(&nport->dev);
 }
 
-static int
+static void
 fcloop_nport_get(struct fcloop_nport *nport)
 {
-	return kref_get_unless_zero(&nport->ref);
+	get_device(&nport->dev);
 }
 
 static void
@@ -1007,6 +1007,8 @@ fcloop_targetport_delete(struct nvmet_fc_target_port *targetport)
 #define	FCLOOP_SGL_SEGS			256
 #define FCLOOP_DMABOUND_4G		0xFFFFFFFF
 
+static struct class *fcloop_class;
+
 static struct nvme_fc_port_template fctemplate = {
 	.localport_delete	= fcloop_localport_delete,
 	.remoteport_delete	= fcloop_remoteport_delete,
@@ -1225,8 +1227,10 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
 	newnport->port_id = port_id;
 	if (opts->mask & NVMF_OPT_ROLES)
 		newnport->port_role = opts->roles;
-
-	kref_init(&newnport->ref);
+	newnport->dev.class = fcloop_class;
+	newnport->dev.release = fcloop_release_nport;
+	dev_set_name(&newnport->dev, "nport%d", newnport->port_id);
+	device_initialize(&newnport->dev);
 
 	spin_lock_irqsave(&fcloop_lock, flags);
 
@@ -1267,6 +1271,10 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
 		}
 	}
 
+	ret = device_add(&newnport->dev);
+	if (ret)
+		goto out_invalid_opts;
+
 	list_add_tail(&newnport->nport_list, &fcloop_nports);
 
 	spin_unlock_irqrestore(&fcloop_lock, flags);
@@ -1339,6 +1347,8 @@ __unlink_remote_port(struct fcloop_nport *nport)
 	if (rport && nport->tport)
 		nport->tport->remoteport = NULL;
 	nport->rport = NULL;
+	if (!nport->tport)
+		device_del(&nport->dev);
 
 	return rport;
 }
@@ -1406,7 +1416,7 @@ fcloop_create_target_port(struct device *dev, struct device_attribute *attr,
 	tinfo.port_name = nport->port_name;
 	tinfo.port_id = nport->port_id;
 
-	ret = nvmet_fc_register_targetport(&tinfo, &tgttemplate, NULL,
+	ret = nvmet_fc_register_targetport(&tinfo, &tgttemplate, &nport->dev,
 						&targetport);
 	if (ret) {
 		fcloop_nport_put(nport);
@@ -1438,6 +1448,8 @@ __unlink_target_port(struct fcloop_nport *nport)
 	if (tport && nport->rport)
 		nport->rport->targetport = NULL;
 	nport->tport = NULL;
+	if (!nport->rport)
+		device_del(&nport->dev);
 
 	return tport;
 }
@@ -1513,7 +1525,6 @@ static const struct attribute_group *fcloop_dev_attr_groups[] = {
 	NULL,
 };
 
-static struct class *fcloop_class;
 static struct device *fcloop_device;
 
 
-- 
2.16.4


* [PATCH 7/7] nvme-fcloop: use a device for lport
  2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
                   ` (5 preceding siblings ...)
  2020-09-22 12:15 ` [PATCH 6/7] nvme-fcloop: use a device for nport Hannes Reinecke
@ 2020-09-22 12:15 ` Hannes Reinecke
  2020-10-05 17:45   ` James Smart
  6 siblings, 1 reply; 16+ messages in thread
From: Hannes Reinecke @ 2020-09-22 12:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, linux-nvme, Sagi Grimberg, Keith Busch, James Smart

Add a 'struct device' to the lport so that the lport shows up in
sysfs and we get a valid device name in the logging output.
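
Teardown now follows the usual device rules: delete the device from
sysfs first, then unregister the localport, and let the final
put_device() trigger fcloop_release_lport(), which frees the port ID
and the lport itself (sketch):

	device_del(&lport->dev);		/* hide from sysfs */
	init_completion(&lport->unreg_done);
	ret = nvme_fc_unregister_localport(lport->localport);
	wait_for_completion(&lport->unreg_done);
	put_device(&lport->dev);		/* ->release() frees the lport */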

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/target/fcloop.c | 53 ++++++++++++++++++++++++++++++++------------
 1 file changed, 39 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index b84f8754b3a3..1af1917ddf4c 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -214,6 +214,7 @@ static LIST_HEAD(fcloop_nports);
 static DEFINE_IDA(fcloop_portid_ida);
 
 struct fcloop_lport {
+	struct device dev;
 	struct nvme_fc_local_port *localport;
 	struct list_head lport_list;
 	struct completion unreg_done;
@@ -1053,6 +1054,16 @@ static struct nvmet_fc_target_template tgttemplate = {
 	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
 };
 
+static void fcloop_release_lport(struct device *dev)
+{
+	struct fcloop_lport *lport =
+		container_of(dev, struct fcloop_lport, dev);
+
+	ida_free(&fcloop_portid_ida, lport->port_id);
+
+	kfree(lport);
+}
+
 static ssize_t
 fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
 		const char *buf, size_t count)
@@ -1063,7 +1074,7 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
 	struct fcloop_lport *lport;
 	struct fcloop_lport_priv *lport_priv;
 	unsigned long flags;
-	int ret = -ENOMEM, port_id;
+	int ret = -ENOMEM, port_id, port_num;
 
 	lport = kzalloc(sizeof(*lport), GFP_KERNEL);
 	if (!lport)
@@ -1094,25 +1105,39 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
 		goto out_free_ida;
 	}
 
+	lport->port_id = port_id;
+	lport->dev.class = fcloop_class;
+	lport->dev.release = fcloop_release_lport;
+	dev_set_name(&lport->dev, "lport%d", lport->port_id);
+	device_initialize(&lport->dev);
+
 	memset(&pinfo, 0, sizeof(pinfo));
 	pinfo.node_name = opts->wwnn;
 	pinfo.port_name = opts->wwpn;
 	pinfo.port_role = opts->roles;
 	pinfo.port_id = port_id;
 
-	ret = nvme_fc_register_localport(&pinfo, &fctemplate, NULL, &localport);
-	if (!ret) {
-		/* success */
-		lport_priv = localport->private;
-		lport_priv->lport = lport;
+	ret = nvme_fc_register_localport(&pinfo, &fctemplate,
+					 &lport->dev, &localport);
+	if (ret)
+		goto out_free_ida;
 
-		lport->localport = localport;
-		INIT_LIST_HEAD(&lport->lport_list);
+	/* success */
+	lport_priv = localport->private;
+	lport_priv->lport = lport;
 
-		spin_lock_irqsave(&fcloop_lock, flags);
-		list_add_tail(&lport->lport_list, &fcloop_lports);
-		spin_unlock_irqrestore(&fcloop_lock, flags);
+	lport->localport = localport;
+	INIT_LIST_HEAD(&lport->lport_list);
+
+	ret = device_add(&lport->dev);
+	if (ret) {
+		nvme_fc_unregister_localport(lport->localport);
+		goto out_free_ida;
 	}
+	spin_lock_irqsave(&fcloop_lock, flags);
+	list_add_tail(&lport->lport_list, &fcloop_lports);
+	spin_unlock_irqrestore(&fcloop_lock, flags);
+
 out_free_ida:
 	if (ret)
 		ida_free(&fcloop_portid_ida, port_id);
@@ -1138,15 +1163,15 @@ __wait_localport_unreg(struct fcloop_lport *lport)
 {
 	int ret;
 
+	device_del(&lport->dev);
+
 	init_completion(&lport->unreg_done);
 
 	ret = nvme_fc_unregister_localport(lport->localport);
 
 	wait_for_completion(&lport->unreg_done);
 
-	ida_free(&fcloop_portid_ida, lport->port_id);
-
-	kfree(lport);
+	put_device(&lport->dev);
 
 	return ret;
 }
-- 
2.16.4


* Re: [PATCH 1/7] nvme-fcloop: flush workqueue before calling nvme_fc_unregister_remoteport()
  2020-09-22 12:14 ` [PATCH 1/7] nvme-fcloop: flush workqueue before calling nvme_fc_unregister_remoteport() Hannes Reinecke
@ 2020-10-05 17:14   ` James Smart
  0 siblings, 0 replies; 16+ messages in thread
From: James Smart @ 2020-10-05 17:14 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch


On 9/22/2020 5:14 AM, Hannes Reinecke wrote:
> nvme_fc_unregister_remoteport() sends LS requests, which then end up
> on a workqueue for processing. This deadlocks with
> fcloop_remoteport_delete(), which tries to flush the very same
> workqueue. Flush the workqueue before unregistering the remoteport
> instead.
>
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/target/fcloop.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index c97e60b71bbc..082aa4dee406 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -979,7 +979,6 @@ fcloop_remoteport_delete(struct nvme_fc_remote_port *remoteport)
>   {
>   	struct fcloop_rport *rport = remoteport->private;
>   
> -	flush_work(&rport->ls_work);
>   	fcloop_nport_put(rport->nport);
>   }
>   
> @@ -1313,6 +1312,7 @@ __remoteport_unreg(struct fcloop_nport *nport, struct fcloop_rport *rport)
>   	if (!rport)
>   		return -EALREADY;
>   
> +	flush_work(&rport->ls_work);
>   	return nvme_fc_unregister_remoteport(rport->remoteport);
>   }
>   

Reviewed-by: James Smart <james.smart@broadcom.com>

Looks Good.

-- james


* Re: [PATCH 2/7] nvmet-fc: use per-target workqueue when removing associations
  2020-09-22 12:14 ` [PATCH 2/7] nvmet-fc: use per-target workqueue when removing associations Hannes Reinecke
@ 2020-10-05 17:18   ` James Smart
  0 siblings, 0 replies; 16+ messages in thread
From: James Smart @ 2020-10-05 17:18 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch


On 9/22/2020 5:14 AM, Hannes Reinecke wrote:
> When removing target ports, all outstanding associations need to be
> terminated and cleaned up. As this involves several exchanges on the
> wire, a synchronization point is required to establish when these
> exchanges have run their course and it is safe to delete the
> association. So add a per-target workqueue and flush this workqueue
> to ensure the associations can really be deleted.
>
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/target/fc.c | 29 +++++++++++++++++++++++------
>   1 file changed, 23 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index 04ec0076ae59..63f5deb3b68a 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -99,6 +99,7 @@ struct nvmet_fc_tgtport {
>   	struct list_head		tgt_list; /* nvmet_fc_target_list */
>   	struct device			*dev;	/* dev for dma mapping */
>   	struct nvmet_fc_target_template	*ops;
> +	struct workqueue_struct		*work_q;
>   
>   	struct nvmet_fc_ls_iod		*iod;
>   	spinlock_t			lock;
> @@ -1403,10 +1404,17 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
>   	ida_init(&newrec->assoc_cnt);
>   	newrec->max_sg_cnt = template->max_sgl_segments;
>   
> +	newrec->work_q = alloc_workqueue("ntfc%d", 0, 0,
> +					 newrec->fc_target_port.port_num);
> +	if (!newrec->work_q) {
> +		ret = -ENOMEM;
> +		goto out_free_newrec;
> +	}
> +
>   	ret = nvmet_fc_alloc_ls_iodlist(newrec);
>   	if (ret) {
>   		ret = -ENOMEM;
> -		goto out_free_newrec;
> +		goto out_free_workq;
>   	}
>   
>   	nvmet_fc_portentry_rebind_tgt(newrec);
> @@ -1418,6 +1426,8 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
>   	*portptr = &newrec->fc_target_port;
>   	return 0;
>   
> +out_free_workq:
> +	destroy_workqueue(newrec->work_q);
>   out_free_newrec:
>   	put_device(dev);
>   out_ida_put:
> @@ -1443,6 +1453,8 @@ nvmet_fc_free_tgtport(struct kref *ref)
>   	list_del(&tgtport->tgt_list);
>   	spin_unlock_irqrestore(&nvmet_fc_tgtlock, flags);
>   
> +	destroy_workqueue(tgtport->work_q);
> +
>   	nvmet_fc_free_ls_iodlist(tgtport);
>   
>   	/* let the LLDD know we've finished tearing it down */
> @@ -1481,11 +1493,13 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
>   				&tgtport->assoc_list, a_list) {
>   		if (!nvmet_fc_tgt_a_get(assoc))
>   			continue;
> -		if (!schedule_work(&assoc->del_work))
> +		if (!queue_work(tgtport->work_q, &assoc->del_work))
>   			/* already deleting - release local reference */
>   			nvmet_fc_tgt_a_put(assoc);
>   	}
>   	spin_unlock_irqrestore(&tgtport->lock, flags);
> +
> +	flush_workqueue(tgtport->work_q);
>   }
>   
>   /**
> @@ -1536,12 +1550,14 @@ nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
>   			continue;
>   		assoc->hostport->invalid = 1;
>   		noassoc = false;
> -		if (!schedule_work(&assoc->del_work))
> +		if (!queue_work(tgtport->work_q, &assoc->del_work))
>   			/* already deleting - release local reference */
>   			nvmet_fc_tgt_a_put(assoc);
>   	}
>   	spin_unlock_irqrestore(&tgtport->lock, flags);
>   
> +	flush_workqueue(tgtport->work_q);
> +
>   	/* if there's nothing to wait for - call the callback */
>   	if (noassoc && tgtport->ops->host_release)
>   		tgtport->ops->host_release(hosthandle);
> @@ -1579,14 +1595,15 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
>   		}
>   		spin_unlock_irqrestore(&tgtport->lock, flags);
>   
> -		nvmet_fc_tgtport_put(tgtport);
> -
>   		if (found_ctrl) {
> -			if (!schedule_work(&assoc->del_work))
> +			if (!queue_work(tgtport->work_q, &assoc->del_work))
>   				/* already deleting - release local reference */
>   				nvmet_fc_tgt_a_put(assoc);
> +			flush_workqueue(tgtport->work_q);
> +			nvmet_fc_tgtport_put(tgtport);
>   			return;
>   		}
> +		nvmet_fc_tgtport_put(tgtport);
>   
>   		spin_lock_irqsave(&nvmet_fc_tgtlock, flags);
>   	}

Looks reasonable.

Reviewed-by: James Smart <james.smart@broadcom.com>

-- james


* Re: [PATCH 3/7] nvme-fcloop: use IDA for port ids
  2020-09-22 12:14 ` [PATCH 3/7] nvme-fcloop: use IDA for port ids Hannes Reinecke
@ 2020-10-05 17:33   ` James Smart
  2020-10-09 13:57     ` Hannes Reinecke
  0 siblings, 1 reply; 16+ messages in thread
From: James Smart @ 2020-10-05 17:33 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch


On 9/22/2020 5:14 AM, Hannes Reinecke wrote:
> Use an IDA to register port IDs, which avoids duplicate port IDs and
> also generates a unique port ID if none is specified.

I'd prefer a little more description, as I didn't pick up what this
actually does from the comment. Meaning: add auto-address assignment
if no address is specified; use an IDA for allocation to guarantee
uniqueness; accept user-supplied addresses as long as they are not in
use and are within the range supported by the IDA.

> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/target/fcloop.c | 51 +++++++++++++++++++++++++++++++++++++-------
>   1 file changed, 43 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 082aa4dee406..e56f323fa7d4 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -12,6 +12,7 @@
>   #include <linux/nvme-fc-driver.h>
>   #include <linux/nvme-fc.h>
>   
> +#define FCLOOP_MAX_AL_PA 0xEF

This is odd - there's no reason to bring in AL_PAs and arbitrated
loop. Please rename it to state that it's the maximum address value
allowed. I'd prefer this to be a 24-bit value, but that won't matter
much and 255 is fine.
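
I.e., something along these lines (hypothetical name):

	#define FCLOOP_MAX_PORT_ID	0xEF	/* max assignable port address */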

>   
>   enum {
>   	NVMF_OPT_ERR		= 0,
> @@ -63,6 +64,9 @@ fcloop_parse_options(struct fcloop_ctrl_options *opts,
>   	int token, ret = 0;
>   	u64 token64;
>   
> +	/* Default setting */
> +	opts->fcaddr = 1;
> +
>   	options = o = kstrdup(buf, GFP_KERNEL);
>   	if (!options)
>   		return -ENOMEM;
> @@ -102,6 +106,10 @@ fcloop_parse_options(struct fcloop_ctrl_options *opts,
>   				ret = -EINVAL;
>   				goto out_free_options;
>   			}
> +			if (!token || token > FCLOOP_MAX_AL_PA) {
> +				ret = -EINVAL;
> +				goto out_free_options;
> +			}
>   			opts->fcaddr = token;
>   			break;
>   		case NVMF_OPT_LPWWNN:
> @@ -203,11 +211,13 @@ fcloop_parse_nm_options(struct device *dev, u64 *nname, u64 *pname,
>   static DEFINE_SPINLOCK(fcloop_lock);
>   static LIST_HEAD(fcloop_lports);
>   static LIST_HEAD(fcloop_nports);
> +static DEFINE_IDA(fcloop_portid_ida);
>   
>   struct fcloop_lport {
>   	struct nvme_fc_local_port *localport;
>   	struct list_head lport_list;
>   	struct completion unreg_done;
> +	u32 port_id;
>   };
>   
>   struct fcloop_lport_priv {
> @@ -949,6 +959,8 @@ fcloop_nport_free(struct kref *ref)
>   	list_del(&nport->nport_list);
>   	spin_unlock_irqrestore(&fcloop_lock, flags);
>   
> +	ida_free(&fcloop_portid_ida, nport->port_id);
> +
>   	kfree(nport);
>   }
>   
> @@ -1047,7 +1059,7 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
>   	struct fcloop_lport *lport;
>   	struct fcloop_lport_priv *lport_priv;
>   	unsigned long flags;
> -	int ret = -ENOMEM;
> +	int ret = -ENOMEM, port_id;
>   
>   	lport = kzalloc(sizeof(*lport), GFP_KERNEL);
>   	if (!lport)
> @@ -1067,11 +1079,22 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
>   		goto out_free_opts;
>   	}
>   
> +	port_id = ida_alloc_range(&fcloop_portid_ida,
> +				  opts->fcaddr, FCLOOP_MAX_AL_PA, GFP_KERNEL);
> +	if (port_id < 0) {
> +		ret = port_id;
> +		goto out_free_opts;
> +	}
> +	if ((opts->mask & NVMF_OPT_FCADDR) && opts->fcaddr != port_id) {
> +		ret = -EINVAL;
> +		goto out_free_ida;
> +	}
> +
>   	memset(&pinfo, 0, sizeof(pinfo));
>   	pinfo.node_name = opts->wwnn;
>   	pinfo.port_name = opts->wwpn;
>   	pinfo.port_role = opts->roles;
> -	pinfo.port_id = opts->fcaddr;
> +	pinfo.port_id = port_id;
>   
>   	ret = nvme_fc_register_localport(&pinfo, &fctemplate, NULL, &localport);
>   	if (!ret) {
> @@ -1086,7 +1109,9 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
>   		list_add_tail(&lport->lport_list, &fcloop_lports);
>   		spin_unlock_irqrestore(&fcloop_lock, flags);
>   	}
> -
> +out_free_ida:
> +	if (ret)
> +		ida_free(&fcloop_portid_ida, port_id);
>   out_free_opts:
>   	kfree(opts);
>   out_free_lport:
> @@ -1115,6 +1140,8 @@ __wait_localport_unreg(struct fcloop_lport *lport)
>   
>   	wait_for_completion(&lport->unreg_done);
>   
> +	ida_free(&fcloop_portid_ida, lport->port_id);
> +
>   	kfree(lport);
>   
>   	return ret;
> @@ -1162,7 +1189,7 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
>   	struct fcloop_ctrl_options *opts;
>   	unsigned long flags;
>   	u32 opts_mask = (remoteport) ? RPORT_OPTS : TGTPORT_OPTS;
> -	int ret;
> +	int ret, port_id;
>   
>   	opts = kzalloc(sizeof(*opts), GFP_KERNEL);
>   	if (!opts)
> @@ -1182,13 +1209,21 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
>   	if (!newnport)
>   		goto out_free_opts;
>   
> +	port_id = ida_alloc_range(&fcloop_portid_ida,
> +				  opts->fcaddr, FCLOOP_MAX_AL_PA, GFP_KERNEL);
> +	if (port_id < 0)
> +		goto out_free_newnport;
> +
> +	if ((opts->mask & NVMF_OPT_FCADDR) && opts->fcaddr != port_id)
> +		goto out_free_ida;
> +
>   	INIT_LIST_HEAD(&newnport->nport_list);
>   	newnport->node_name = opts->wwnn;
>   	newnport->port_name = opts->wwpn;
> +	newnport->port_id = port_id;
>   	if (opts->mask & NVMF_OPT_ROLES)
>   		newnport->port_role = opts->roles;
> -	if (opts->mask & NVMF_OPT_FCADDR)
> -		newnport->port_id = opts->fcaddr;
> +
>   	kref_init(&newnport->ref);
>   
>   	spin_lock_irqsave(&fcloop_lock, flags);
> @@ -1226,8 +1261,6 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
>   				nport->lport = lport;
>   			if (opts->mask & NVMF_OPT_ROLES)
>   				nport->port_role = opts->roles;
> -			if (opts->mask & NVMF_OPT_FCADDR)
> -				nport->port_id = opts->fcaddr;
>   			goto out_free_newnport;
>   		}
>   	}
> @@ -1241,6 +1274,8 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
>   
>   out_invalid_opts:
>   	spin_unlock_irqrestore(&fcloop_lock, flags);
> +out_free_ida:
> +	ida_free(&fcloop_portid_ida, port_id);
>   out_free_newnport:
>   	kfree(newnport);
>   out_free_opts:

These are minor nits - I'm good with the patch.

Reviewed-by: James Smart <james.smart@broadcom.com>

-- james


* Re: [PATCH 4/7] nvmet-fc: use feature flag for virtual LLDD
  2020-09-22 12:14 ` [PATCH 4/7] nvmet-fc: use feature flag for virtual LLDD Hannes Reinecke
@ 2020-10-05 17:37   ` James Smart
  0 siblings, 0 replies; 16+ messages in thread
From: James Smart @ 2020-10-05 17:37 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch


On 9/22/2020 5:14 AM, Hannes Reinecke wrote:
> Virtual LLDDs like fcloop don't need to do DMA, but still
> might want to expose a device. So add a new feature flag
> to mark these LLDDs instead of relying on a non-existing
> struct device.
>
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/target/fc.c       | 93 +++++++++++++++++++++++-------------------
>   drivers/nvme/target/fcloop.c   |  2 +-
>   include/linux/nvme-fc-driver.h |  2 +
>   3 files changed, 55 insertions(+), 42 deletions(-)
>
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index 63f5deb3b68a..6f5784767d35 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -273,41 +273,50 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
>    * in the scatter list, setting all dma addresses to 0.
>    */
>   
> +static bool fc_lldd_is_virtual(struct nvmet_fc_tgtport *tgtport)
> +{
> +	return !!(tgtport->ops->target_features & NVMET_FCTGTFEAT_VIRTUAL_DMA);
> +}
> +
>   static inline dma_addr_t
> -fc_dma_map_single(struct device *dev, void *ptr, size_t size,
> +fc_dma_map_single(struct nvmet_fc_tgtport *tgtport, void *ptr, size_t size,
>   		enum dma_data_direction dir)
>   {
> -	return dev ? dma_map_single(dev, ptr, size, dir) : (dma_addr_t)0L;
> +	if (fc_lldd_is_virtual(tgtport))
> +		return (dma_addr_t)0L;
> +	return dma_map_single(tgtport->dev, ptr, size, dir);
>   }
>   
>   static inline int
> -fc_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
> +fc_dma_mapping_error(struct nvmet_fc_tgtport *tgtport, dma_addr_t dma_addr)
>   {
> -	return dev ? dma_mapping_error(dev, dma_addr) : 0;
> +	if (fc_lldd_is_virtual(tgtport))
> +		return 0;
> +	return dma_mapping_error(tgtport->dev, dma_addr);
>   }
>   
>   static inline void
> -fc_dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
> -	enum dma_data_direction dir)
> +fc_dma_unmap_single(struct nvmet_fc_tgtport *tgtport, dma_addr_t addr,
> +	size_t size, enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_unmap_single(dev, addr, size, dir);
> +	if (!fc_lldd_is_virtual(tgtport))
> +		dma_unmap_single(tgtport->dev, addr, size, dir);
>   }
>   
>   static inline void
> -fc_dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size,
> -		enum dma_data_direction dir)
> +fc_dma_sync_single_for_cpu(struct nvmet_fc_tgtport *tgtport, dma_addr_t addr,
> +	size_t size, enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_sync_single_for_cpu(dev, addr, size, dir);
> +	if (!fc_lldd_is_virtual(tgtport))
> +		dma_sync_single_for_cpu(tgtport->dev, addr, size, dir);
>   }
>   
>   static inline void
> -fc_dma_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size,
> -		enum dma_data_direction dir)
> +fc_dma_sync_single_for_device(struct nvmet_fc_tgtport *tgtport, dma_addr_t addr,
> +	size_t size, enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_sync_single_for_device(dev, addr, size, dir);
> +	if (!fc_lldd_is_virtual(tgtport))
> +		dma_sync_single_for_device(tgtport->dev, addr, size, dir);
>   }
>   
>   /* pseudo dma_map_sg call */
> @@ -329,18 +338,20 @@ fc_map_sg(struct scatterlist *sg, int nents)
>   }
>   
>   static inline int
> -fc_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> -		enum dma_data_direction dir)
> +fc_dma_map_sg(struct nvmet_fc_tgtport *tgtport, struct scatterlist *sg,
> +	int nents, enum dma_data_direction dir)
>   {
> -	return dev ? dma_map_sg(dev, sg, nents, dir) : fc_map_sg(sg, nents);
> +	if (fc_lldd_is_virtual(tgtport))
> +		return fc_map_sg(sg, nents);
> +	return dma_map_sg(tgtport->dev, sg, nents, dir);
>   }
>   
>   static inline void
> -fc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
> -		enum dma_data_direction dir)
> +fc_dma_unmap_sg(struct nvmet_fc_tgtport *tgtport, struct scatterlist *sg,
> +	int nents, enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_unmap_sg(dev, sg, nents, dir);
> +	if (!fc_lldd_is_virtual(tgtport))
> +		dma_unmap_sg(tgtport->dev, sg, nents, dir);
>   }
>   
>   
> @@ -368,7 +379,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
>   
>   	spin_unlock_irqrestore(&tgtport->lock, flags);
>   
> -	fc_dma_unmap_single(tgtport->dev, lsreq->rqstdma,
> +	fc_dma_unmap_single(tgtport, lsreq->rqstdma,
>   				  (lsreq->rqstlen + lsreq->rsplen),
>   				  DMA_BIDIRECTIONAL);
>   
> @@ -391,10 +402,10 @@ __nvmet_fc_send_ls_req(struct nvmet_fc_tgtport *tgtport,
>   	lsop->req_queued = false;
>   	INIT_LIST_HEAD(&lsop->lsreq_list);
>   
> -	lsreq->rqstdma = fc_dma_map_single(tgtport->dev, lsreq->rqstaddr,
> +	lsreq->rqstdma = fc_dma_map_single(tgtport, lsreq->rqstaddr,
>   				  lsreq->rqstlen + lsreq->rsplen,
>   				  DMA_BIDIRECTIONAL);
> -	if (fc_dma_mapping_error(tgtport->dev, lsreq->rqstdma))
> +	if (fc_dma_mapping_error(tgtport, lsreq->rqstdma))
>   		return -EFAULT;
>   
>   	lsreq->rspdma = lsreq->rqstdma + lsreq->rqstlen;
> @@ -420,7 +431,7 @@ __nvmet_fc_send_ls_req(struct nvmet_fc_tgtport *tgtport,
>   	lsop->req_queued = false;
>   	list_del(&lsop->lsreq_list);
>   	spin_unlock_irqrestore(&tgtport->lock, flags);
> -	fc_dma_unmap_single(tgtport->dev, lsreq->rqstdma,
> +	fc_dma_unmap_single(tgtport, lsreq->rqstdma,
>   				  (lsreq->rqstlen + lsreq->rsplen),
>   				  DMA_BIDIRECTIONAL);
>   	return ret;
> @@ -555,10 +566,10 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
>   
>   		iod->rspbuf = (union nvmefc_ls_responses *)&iod->rqstbuf[1];
>   
> -		iod->rspdma = fc_dma_map_single(tgtport->dev, iod->rspbuf,
> +		iod->rspdma = fc_dma_map_single(tgtport, iod->rspbuf,
>   						sizeof(*iod->rspbuf),
>   						DMA_TO_DEVICE);
> -		if (fc_dma_mapping_error(tgtport->dev, iod->rspdma))
> +		if (fc_dma_mapping_error(tgtport, iod->rspdma))
>   			goto out_fail;
>   	}
>   
> @@ -568,7 +579,7 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
>   	kfree(iod->rqstbuf);
>   	list_del(&iod->ls_rcv_list);
>   	for (iod--, i--; i >= 0; iod--, i--) {
> -		fc_dma_unmap_single(tgtport->dev, iod->rspdma,
> +		fc_dma_unmap_single(tgtport, iod->rspdma,
>   				sizeof(*iod->rspbuf), DMA_TO_DEVICE);
>   		kfree(iod->rqstbuf);
>   		list_del(&iod->ls_rcv_list);
> @@ -586,7 +597,7 @@ nvmet_fc_free_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
>   	int i;
>   
>   	for (i = 0; i < NVMET_LS_CTX_COUNT; iod++, i++) {
> -		fc_dma_unmap_single(tgtport->dev,
> +		fc_dma_unmap_single(tgtport,
>   				iod->rspdma, sizeof(*iod->rspbuf),
>   				DMA_TO_DEVICE);
>   		kfree(iod->rqstbuf);
> @@ -640,12 +651,12 @@ nvmet_fc_prep_fcp_iodlist(struct nvmet_fc_tgtport *tgtport,
>   		list_add_tail(&fod->fcp_list, &queue->fod_list);
>   		spin_lock_init(&fod->flock);
>   
> -		fod->rspdma = fc_dma_map_single(tgtport->dev, &fod->rspiubuf,
> +		fod->rspdma = fc_dma_map_single(tgtport, &fod->rspiubuf,
>   					sizeof(fod->rspiubuf), DMA_TO_DEVICE);
> -		if (fc_dma_mapping_error(tgtport->dev, fod->rspdma)) {
> +		if (fc_dma_mapping_error(tgtport, fod->rspdma)) {
>   			list_del(&fod->fcp_list);
>   			for (fod--, i--; i >= 0; fod--, i--) {
> -				fc_dma_unmap_single(tgtport->dev, fod->rspdma,
> +				fc_dma_unmap_single(tgtport, fod->rspdma,
>   						sizeof(fod->rspiubuf),
>   						DMA_TO_DEVICE);
>   				fod->rspdma = 0L;
> @@ -666,7 +677,7 @@ nvmet_fc_destroy_fcp_iodlist(struct nvmet_fc_tgtport *tgtport,
>   
>   	for (i = 0; i < queue->sqsize; fod++, i++) {
>   		if (fod->rspdma)
> -			fc_dma_unmap_single(tgtport->dev, fod->rspdma,
> +			fc_dma_unmap_single(tgtport, fod->rspdma,
>   				sizeof(fod->rspiubuf), DMA_TO_DEVICE);
>   	}
>   }
> @@ -730,7 +741,7 @@ nvmet_fc_free_fcp_iod(struct nvmet_fc_tgt_queue *queue,
>   	struct nvmet_fc_defer_fcp_req *deferfcp;
>   	unsigned long flags;
>   
> -	fc_dma_sync_single_for_cpu(tgtport->dev, fod->rspdma,
> +	fc_dma_sync_single_for_cpu(tgtport, fod->rspdma,
>   				sizeof(fod->rspiubuf), DMA_TO_DEVICE);
>   
>   	fcpreq->nvmet_fc_private = NULL;
> @@ -1925,7 +1936,7 @@ nvmet_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
>   	struct nvmet_fc_ls_iod *iod = lsrsp->nvme_fc_private;
>   	struct nvmet_fc_tgtport *tgtport = iod->tgtport;
>   
> -	fc_dma_sync_single_for_cpu(tgtport->dev, iod->rspdma,
> +	fc_dma_sync_single_for_cpu(tgtport, iod->rspdma,
>   				sizeof(*iod->rspbuf), DMA_TO_DEVICE);
>   	nvmet_fc_free_ls_iod(tgtport, iod);
>   	nvmet_fc_tgtport_put(tgtport);
> @@ -1937,7 +1948,7 @@ nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
>   {
>   	int ret;
>   
> -	fc_dma_sync_single_for_device(tgtport->dev, iod->rspdma,
> +	fc_dma_sync_single_for_device(tgtport, iod->rspdma,
>   				  sizeof(*iod->rspbuf), DMA_TO_DEVICE);
>   
>   	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsrsp);
> @@ -2091,7 +2102,7 @@ nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
>   
>   	fod->data_sg = sg;
>   	fod->data_sg_cnt = nent;
> -	fod->data_sg_cnt = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
> +	fod->data_sg_cnt = fc_dma_map_sg(fod->tgtport, sg, nent,
>   				((fod->io_dir == NVMET_FCP_WRITE) ?
>   					DMA_FROM_DEVICE : DMA_TO_DEVICE));
>   				/* note: write from initiator perspective */
> @@ -2109,7 +2120,7 @@ nvmet_fc_free_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
>   	if (!fod->data_sg || !fod->data_sg_cnt)
>   		return;
>   
> -	fc_dma_unmap_sg(fod->tgtport->dev, fod->data_sg, fod->data_sg_cnt,
> +	fc_dma_unmap_sg(fod->tgtport, fod->data_sg, fod->data_sg_cnt,
>   				((fod->io_dir == NVMET_FCP_WRITE) ?
>   					DMA_FROM_DEVICE : DMA_TO_DEVICE));
>   	sgl_free(fod->data_sg);
> @@ -2193,7 +2204,7 @@ nvmet_fc_prep_fcp_rsp(struct nvmet_fc_tgtport *tgtport,
>   		fod->fcpreq->rsplen = sizeof(*ersp);
>   	}
>   
> -	fc_dma_sync_single_for_device(tgtport->dev, fod->rspdma,
> +	fc_dma_sync_single_for_device(tgtport, fod->rspdma,
>   				  sizeof(fod->rspiubuf), DMA_TO_DEVICE);
>   }
>   
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index e56f323fa7d4..2ccb941efb21 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -1043,7 +1043,7 @@ static struct nvmet_fc_target_template tgttemplate = {
>   	.max_dif_sgl_segments	= FCLOOP_SGL_SEGS,
>   	.dma_boundary		= FCLOOP_DMABOUND_4G,
>   	/* optional features */
> -	.target_features	= 0,
> +	.target_features	= NVMET_FCTGTFEAT_VIRTUAL_DMA,
>   	/* sizes of additional private data for data structures */
>   	.target_priv_sz		= sizeof(struct fcloop_tport),
>   	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
> diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
> index 2a38f2b477a5..675c7ef6df17 100644
> --- a/include/linux/nvme-fc-driver.h
> +++ b/include/linux/nvme-fc-driver.h
> @@ -707,6 +707,8 @@ enum {
>   		 * sequence in one LLDD operation. Errors during Data
>   		 * sequence transmit must not allow RSP sequence to be sent.
>   		 */
> +	NVMET_FCTGTFEAT_VIRTUAL_DMA = (1 << 1),
> +		/* Bit 1: Virtual LLDD with no DMA support */
>   };
>   
>   

Ok.
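
One thing the hunks reference but never show is the pseudo scatterlist
mapping a virtual LLDD gets instead of dma_map_sg(): no addresses are
translated, the entries are merely made self-describing so the core
code can still iterate them.  Roughly (reconstructed here for
illustration, so treat the details as approximate):

	static int
	fc_map_sg(struct scatterlist *sg, int nents)
	{
		struct scatterlist *s;
		int i;

		WARN_ON(nents == 0 || sg[0].length == 0);

		for_each_sg(sg, s, nents, i) {
			s->dma_address = 0L;
	#ifdef CONFIG_NEED_SG_DMA_LENGTH
			s->dma_length = s->length;
	#endif
		}
		return nents;
	}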

Reviewed-by: James Smart <james.smart@broadcom.com>


-- james


* Re: [PATCH 5/7] nvme-fc: use feature flag for virtual LLDD
  2020-09-22 12:14 ` [PATCH 5/7] nvme-fc: " Hannes Reinecke
@ 2020-10-05 17:38   ` James Smart
  0 siblings, 0 replies; 16+ messages in thread
From: James Smart @ 2020-10-05 17:38 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch


On 9/22/2020 5:14 AM, Hannes Reinecke wrote:
> Virtual LLDDs like fcloop don't need to do DMA, but still
> might want to expose a device. So add a new feature flag
> to mark these LLDDs instead of relying on a non-existing
> struct device.
>
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/host/fc.c         | 93 +++++++++++++++++++++++-------------------
>   drivers/nvme/target/fcloop.c   |  2 +
>   include/linux/nvme-fc-driver.h | 10 +++++
>   3 files changed, 64 insertions(+), 41 deletions(-)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index e8ef42b9d50c..d30ea7a39183 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -237,6 +237,11 @@ static void __nvme_fc_delete_hw_queue(struct nvme_fc_ctrl *,
>   static void nvme_fc_handle_ls_rqst_work(struct work_struct *work);
>   
>   
> +static bool nvme_fc_lport_is_physical(struct nvme_fc_lport *lport)
> +{
> +	return !(lport->ops->port_features & NVME_FCPORTFEAT_VIRTUAL_DMA);
> +}
> +
>   static void
>   nvme_fc_free_lport(struct kref *ref)
>   {
> @@ -427,7 +432,7 @@ nvme_fc_register_localport(struct nvme_fc_port_info *pinfo,
>   	list_add_tail(&newrec->port_list, &nvme_fc_lport_list);
>   	spin_unlock_irqrestore(&nvme_fc_lock, flags);
>   
> -	if (dev)
> +	if (nvme_fc_lport_is_physical(newrec))
>   		dma_set_seg_boundary(dev, template->dma_boundary);
>   
>   	*portptr = &newrec->localport;
> @@ -953,40 +958,44 @@ EXPORT_SYMBOL_GPL(nvme_fc_set_remoteport_devloss);
>    */
>   
>   static inline dma_addr_t
> -fc_dma_map_single(struct device *dev, void *ptr, size_t size,
> +fc_dma_map_single(struct nvme_fc_lport *lport, void *ptr, size_t size,
>   		enum dma_data_direction dir)
>   {
> -	return dev ? dma_map_single(dev, ptr, size, dir) : (dma_addr_t)0L;
> +	if (nvme_fc_lport_is_physical(lport))
> +		return dma_map_single(lport->dev, ptr, size, dir);
> +	return (dma_addr_t)0L;
>   }
>   
>   static inline int
> -fc_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
> +fc_dma_mapping_error(struct nvme_fc_lport *lport, dma_addr_t dma_addr)
>   {
> -	return dev ? dma_mapping_error(dev, dma_addr) : 0;
> +	if (nvme_fc_lport_is_physical(lport))
> +		return dma_mapping_error(lport->dev, dma_addr);
> +	return 0;
>   }
>   
>   static inline void
> -fc_dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
> +fc_dma_unmap_single(struct nvme_fc_lport *lport, dma_addr_t addr, size_t size,
>   	enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_unmap_single(dev, addr, size, dir);
> +	if (nvme_fc_lport_is_physical(lport))
> +		dma_unmap_single(lport->dev, addr, size, dir);
>   }
>   
>   static inline void
> -fc_dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size,
> -		enum dma_data_direction dir)
> +fc_dma_sync_single_for_cpu(struct nvme_fc_lport *lport, dma_addr_t addr,
> +	size_t size, enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_sync_single_for_cpu(dev, addr, size, dir);
> +	if (nvme_fc_lport_is_physical(lport))
> +		dma_sync_single_for_cpu(lport->dev, addr, size, dir);
>   }
>   
>   static inline void
> -fc_dma_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size,
> -		enum dma_data_direction dir)
> +fc_dma_sync_single_for_device(struct nvme_fc_lport *lport, dma_addr_t addr,
> +	size_t size, enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_sync_single_for_device(dev, addr, size, dir);
> +	if (nvme_fc_lport_is_physical(lport))
> +		dma_sync_single_for_device(lport->dev, addr, size, dir);
>   }
>   
>   /* pseudo dma_map_sg call */
> @@ -1008,18 +1017,20 @@ fc_map_sg(struct scatterlist *sg, int nents)
>   }
>   
>   static inline int
> -fc_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> +fc_dma_map_sg(struct nvme_fc_lport *lport, struct scatterlist *sg, int nents,
>   		enum dma_data_direction dir)
>   {
> -	return dev ? dma_map_sg(dev, sg, nents, dir) : fc_map_sg(sg, nents);
> +	if (nvme_fc_lport_is_physical(lport))
> +		return dma_map_sg(lport->dev, sg, nents, dir);
> +	return fc_map_sg(sg, nents);
>   }
>   
>   static inline void
> -fc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
> +fc_dma_unmap_sg(struct nvme_fc_lport *lport, struct scatterlist *sg, int nents,
>   		enum dma_data_direction dir)
>   {
> -	if (dev)
> -		dma_unmap_sg(dev, sg, nents, dir);
> +	if (nvme_fc_lport_is_physical(lport))
> +		dma_unmap_sg(lport->dev, sg, nents, dir);
>   }
>   
>   /* *********************** FC-NVME LS Handling **************************** */
> @@ -1049,7 +1060,7 @@ __nvme_fc_finish_ls_req(struct nvmefc_ls_req_op *lsop)
>   
>   	spin_unlock_irqrestore(&rport->lock, flags);
>   
> -	fc_dma_unmap_single(rport->dev, lsreq->rqstdma,
> +	fc_dma_unmap_single(rport->lport, lsreq->rqstdma,
>   				  (lsreq->rqstlen + lsreq->rsplen),
>   				  DMA_BIDIRECTIONAL);
>   
> @@ -1077,10 +1088,10 @@ __nvme_fc_send_ls_req(struct nvme_fc_rport *rport,
>   	INIT_LIST_HEAD(&lsop->lsreq_list);
>   	init_completion(&lsop->ls_done);
>   
> -	lsreq->rqstdma = fc_dma_map_single(rport->dev, lsreq->rqstaddr,
> +	lsreq->rqstdma = fc_dma_map_single(rport->lport, lsreq->rqstaddr,
>   				  lsreq->rqstlen + lsreq->rsplen,
>   				  DMA_BIDIRECTIONAL);
> -	if (fc_dma_mapping_error(rport->dev, lsreq->rqstdma)) {
> +	if (fc_dma_mapping_error(rport->lport, lsreq->rqstdma)) {
>   		ret = -EFAULT;
>   		goto out_putrport;
>   	}
> @@ -1107,7 +1118,7 @@ __nvme_fc_send_ls_req(struct nvme_fc_rport *rport,
>   	lsop->req_queued = false;
>   	list_del(&lsop->lsreq_list);
>   	spin_unlock_irqrestore(&rport->lock, flags);
> -	fc_dma_unmap_single(rport->dev, lsreq->rqstdma,
> +	fc_dma_unmap_single(rport->lport, lsreq->rqstdma,
>   				  (lsreq->rqstlen + lsreq->rsplen),
>   				  DMA_BIDIRECTIONAL);
>   out_putrport:
> @@ -1465,9 +1476,9 @@ nvme_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
>   	list_del(&lsop->lsrcv_list);
>   	spin_unlock_irqrestore(&rport->lock, flags);
>   
> -	fc_dma_sync_single_for_cpu(lport->dev, lsop->rspdma,
> +	fc_dma_sync_single_for_cpu(lport, lsop->rspdma,
>   				sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
> -	fc_dma_unmap_single(lport->dev, lsop->rspdma,
> +	fc_dma_unmap_single(lport, lsop->rspdma,
>   			sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
>   
>   	kfree(lsop);
> @@ -1483,7 +1494,7 @@ nvme_fc_xmt_ls_rsp(struct nvmefc_ls_rcv_op *lsop)
>   	struct fcnvme_ls_rqst_w0 *w0 = &lsop->rqstbuf->w0;
>   	int ret;
>   
> -	fc_dma_sync_single_for_device(lport->dev, lsop->rspdma,
> +	fc_dma_sync_single_for_device(lport, lsop->rspdma,
>   				  sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
>   
>   	ret = lport->ops->xmt_ls_rsp(&lport->localport, &rport->remoteport,
> @@ -1761,10 +1772,10 @@ nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
>   	lsop->rqstbuf = (union nvmefc_ls_requests *)&lsop[1];
>   	lsop->rspbuf = (union nvmefc_ls_responses *)&lsop->rqstbuf[1];
>   
> -	lsop->rspdma = fc_dma_map_single(lport->dev, lsop->rspbuf,
> +	lsop->rspdma = fc_dma_map_single(lport, lsop->rspbuf,
>   					sizeof(*lsop->rspbuf),
>   					DMA_TO_DEVICE);
> -	if (fc_dma_mapping_error(lport->dev, lsop->rspdma)) {
> +	if (fc_dma_mapping_error(lport, lsop->rspdma)) {
>   		dev_info(lport->dev,
>   			"RCV %s LS failed: DMA mapping failure\n",
>   			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
> @@ -1793,7 +1804,7 @@ nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
>   	return 0;
>   
>   out_unmap:
> -	fc_dma_unmap_single(lport->dev, lsop->rspdma,
> +	fc_dma_unmap_single(lport, lsop->rspdma,
>   			sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
>   out_free:
>   	kfree(lsop);
> @@ -1810,9 +1821,9 @@ static void
>   __nvme_fc_exit_request(struct nvme_fc_ctrl *ctrl,
>   		struct nvme_fc_fcp_op *op)
>   {
> -	fc_dma_unmap_single(ctrl->lport->dev, op->fcp_req.rspdma,
> +	fc_dma_unmap_single(ctrl->lport, op->fcp_req.rspdma,
>   				sizeof(op->rsp_iu), DMA_FROM_DEVICE);
> -	fc_dma_unmap_single(ctrl->lport->dev, op->fcp_req.cmddma,
> +	fc_dma_unmap_single(ctrl->lport, op->fcp_req.cmddma,
>   				sizeof(op->cmd_iu), DMA_TO_DEVICE);
>   
>   	atomic_set(&op->state, FCPOP_STATE_UNINIT);
> @@ -1936,7 +1947,7 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
>   
>   	opstate = atomic_xchg(&op->state, FCPOP_STATE_COMPLETE);
>   
> -	fc_dma_sync_single_for_cpu(ctrl->lport->dev, op->fcp_req.rspdma,
> +	fc_dma_sync_single_for_cpu(ctrl->lport, op->fcp_req.rspdma,
>   				sizeof(op->rsp_iu), DMA_FROM_DEVICE);
>   
>   	if (opstate == FCPOP_STATE_ABORTED)
> @@ -2073,19 +2084,19 @@ __nvme_fc_init_request(struct nvme_fc_ctrl *ctrl,
>   	else
>   		cmdiu->rsv_cat = fccmnd_set_cat_admin(0);
>   
> -	op->fcp_req.cmddma = fc_dma_map_single(ctrl->lport->dev,
> +	op->fcp_req.cmddma = fc_dma_map_single(ctrl->lport,
>   				&op->cmd_iu, sizeof(op->cmd_iu), DMA_TO_DEVICE);
> -	if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.cmddma)) {
> +	if (fc_dma_mapping_error(ctrl->lport, op->fcp_req.cmddma)) {
>   		dev_err(ctrl->dev,
>   			"FCP Op failed - cmdiu dma mapping failed.\n");
>   		ret = -EFAULT;
>   		goto out_on_error;
>   	}
>   
> -	op->fcp_req.rspdma = fc_dma_map_single(ctrl->lport->dev,
> +	op->fcp_req.rspdma = fc_dma_map_single(ctrl->lport,
>   				&op->rsp_iu, sizeof(op->rsp_iu),
>   				DMA_FROM_DEVICE);
> -	if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.rspdma)) {
> +	if (fc_dma_mapping_error(ctrl->lport, op->fcp_req.rspdma)) {
>   		dev_err(ctrl->dev,
>   			"FCP Op failed - rspiu dma mapping failed.\n");
>   		ret = -EFAULT;
> @@ -2485,7 +2496,7 @@ nvme_fc_map_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
>   
>   	op->nents = blk_rq_map_sg(rq->q, rq, freq->sg_table.sgl);
>   	WARN_ON(op->nents > blk_rq_nr_phys_segments(rq));
> -	freq->sg_cnt = fc_dma_map_sg(ctrl->lport->dev, freq->sg_table.sgl,
> +	freq->sg_cnt = fc_dma_map_sg(ctrl->lport, freq->sg_table.sgl,
>   				op->nents, rq_dma_dir(rq));
>   	if (unlikely(freq->sg_cnt <= 0)) {
>   		sg_free_table_chained(&freq->sg_table, NVME_INLINE_SG_CNT);
> @@ -2508,7 +2519,7 @@ nvme_fc_unmap_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
>   	if (!freq->sg_cnt)
>   		return;
>   
> -	fc_dma_unmap_sg(ctrl->lport->dev, freq->sg_table.sgl, op->nents,
> +	fc_dma_unmap_sg(ctrl->lport, freq->sg_table.sgl, op->nents,
>   			rq_dma_dir(rq));
>   
>   	sg_free_table_chained(&freq->sg_table, NVME_INLINE_SG_CNT);
> @@ -2609,7 +2620,7 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
>   		}
>   	}
>   
> -	fc_dma_sync_single_for_device(ctrl->lport->dev, op->fcp_req.cmddma,
> +	fc_dma_sync_single_for_device(ctrl->lport, op->fcp_req.cmddma,
>   				  sizeof(op->cmd_iu), DMA_TO_DEVICE);
>   
>   	atomic_set(&op->state, FCPOP_STATE_ACTIVE);
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 2ccb941efb21..7dab98545979 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -1026,6 +1026,8 @@ static struct nvme_fc_port_template fctemplate = {
>   	.remote_priv_sz		= sizeof(struct fcloop_rport),
>   	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
>   	.fcprqst_priv_sz	= sizeof(struct fcloop_ini_fcpreq),
> +	/* optional features */
> +	.port_features		= NVME_FCPORTFEAT_VIRTUAL_DMA,
>   };
>   
>   static struct nvmet_fc_target_template tgttemplate = {
> diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
> index 675c7ef6df17..17cf967484b2 100644
> --- a/include/linux/nvme-fc-driver.h
> +++ b/include/linux/nvme-fc-driver.h
> @@ -334,6 +334,11 @@ struct nvme_fc_remote_port {
>   	enum nvme_fc_obj_state port_state;
>   } __aligned(sizeof(u64));	/* alignment for other things alloc'd with */
>   
> +/* Port Features (Bit fields) LLDD supports */
> +enum {
> +	NVME_FCPORTFEAT_VIRTUAL_DMA = (1 << 0),
> +		/* Bit 0: Virtual LLDD with no DMA support */
> +};
>   
>   /**
>    * struct nvme_fc_port_template - structure containing static entrypoints and
> @@ -470,6 +475,8 @@ struct nvme_fc_remote_port {
>    *       memory area solely for the of the LLDD and its location is
>    *       specified by the fcp_request->private pointer.
>    *       Value is Mandatory. Allowed to be zero.
> + *
> + * @port_features: Optional feature for the LLDD
>    */
>   struct nvme_fc_port_template {
>   	/* initiator-based functions */
> @@ -508,6 +515,9 @@ struct nvme_fc_port_template {
>   	u32	remote_priv_sz;
>   	u32	lsrqst_priv_sz;
>   	u32	fcprqst_priv_sz;
> +
> +	/* Optional port features */
> +	u32	port_features;
>   };
>   
>   

OK.

Reviewed-by: James Smart <james.smart@broadcom.com>


-- james


* Re: [PATCH 6/7] nvme-fcloop: use a device for nport
  2020-09-22 12:15 ` [PATCH 6/7] nvme-fcloop: use a device for nport Hannes Reinecke
@ 2020-10-05 17:41   ` James Smart
  0 siblings, 0 replies; 16+ messages in thread
From: James Smart @ 2020-10-05 17:41 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch


On 9/22/2020 5:15 AM, Hannes Reinecke wrote:
> Add a 'struct device' to the nport such that the nport will show
> up in sysfs and we get a valid name in the logging output.
>
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/target/fcloop.c | 31 +++++++++++++++++++++----------
>   1 file changed, 21 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 7dab98545979..b84f8754b3a3 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -245,11 +245,11 @@ struct fcloop_tport {
>   };
>   
>   struct fcloop_nport {
> +	struct device dev;
>   	struct fcloop_rport *rport;
>   	struct fcloop_tport *tport;
>   	struct fcloop_lport *lport;
>   	struct list_head nport_list;
> -	struct kref ref;
>   	u64 node_name;
>   	u64 port_name;
>   	u32 port_role;
> @@ -949,10 +949,10 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
>   }
>   
>   static void
> -fcloop_nport_free(struct kref *ref)
> +fcloop_release_nport(struct device *dev)
>   {
>   	struct fcloop_nport *nport =
> -		container_of(ref, struct fcloop_nport, ref);
> +		container_of(dev, struct fcloop_nport, dev);
>   	unsigned long flags;
>   
>   	spin_lock_irqsave(&fcloop_lock, flags);
> @@ -967,13 +967,13 @@ fcloop_nport_free(struct kref *ref)
>   static void
>   fcloop_nport_put(struct fcloop_nport *nport)
>   {
> -	kref_put(&nport->ref, fcloop_nport_free);
> +	put_device(&nport->dev);
>   }
>   
> -static int
> +static void
>   fcloop_nport_get(struct fcloop_nport *nport)
>   {
> -	return kref_get_unless_zero(&nport->ref);
> +	get_device(&nport->dev);
>   }
>   
>   static void
> @@ -1007,6 +1007,8 @@ fcloop_targetport_delete(struct nvmet_fc_target_port *targetport)
>   #define	FCLOOP_SGL_SEGS			256
>   #define FCLOOP_DMABOUND_4G		0xFFFFFFFF
>   
> +static struct class *fcloop_class;
> +
>   static struct nvme_fc_port_template fctemplate = {
>   	.localport_delete	= fcloop_localport_delete,
>   	.remoteport_delete	= fcloop_remoteport_delete,
> @@ -1225,8 +1227,10 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
>   	newnport->port_id = port_id;
>   	if (opts->mask & NVMF_OPT_ROLES)
>   		newnport->port_role = opts->roles;
> -
> -	kref_init(&newnport->ref);
> +	newnport->dev.class = fcloop_class;
> +	newnport->dev.release = fcloop_release_nport;
> +	dev_set_name(&newnport->dev, "nport%d", newnport->port_id);
> +	device_initialize(&newnport->dev);
>   
>   	spin_lock_irqsave(&fcloop_lock, flags);
>   
> @@ -1267,6 +1271,10 @@ fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
>   		}
>   	}
>   
> +	ret = device_add(&newnport->dev);
> +	if (ret)
> +		goto out_invalid_opts;
> +
>   	list_add_tail(&newnport->nport_list, &fcloop_nports);
>   
>   	spin_unlock_irqrestore(&fcloop_lock, flags);
> @@ -1339,6 +1347,8 @@ __unlink_remote_port(struct fcloop_nport *nport)
>   	if (rport && nport->tport)
>   		nport->tport->remoteport = NULL;
>   	nport->rport = NULL;
> +	if (!nport->tport)
> +		device_del(&nport->dev);
>   
>   	return rport;
>   }
> @@ -1406,7 +1416,7 @@ fcloop_create_target_port(struct device *dev, struct device_attribute *attr,
>   	tinfo.port_name = nport->port_name;
>   	tinfo.port_id = nport->port_id;
>   
> -	ret = nvmet_fc_register_targetport(&tinfo, &tgttemplate, NULL,
> +	ret = nvmet_fc_register_targetport(&tinfo, &tgttemplate, &nport->dev,
>   						&targetport);
>   	if (ret) {
>   		fcloop_nport_put(nport);
> @@ -1438,6 +1448,8 @@ __unlink_target_port(struct fcloop_nport *nport)
>   	if (tport && nport->rport)
>   		nport->rport->targetport = NULL;
>   	nport->tport = NULL;
> +	if (!nport->rport)
> +		device_del(&nport->dev);
>   
>   	return tport;
>   }
> @@ -1513,7 +1525,6 @@ static const struct attribute_group *fcloop_dev_attr_groups[] = {
>   	NULL,
>   };
>   
> -static struct class *fcloop_class;
>   static struct device *fcloop_device;
>   
>   

ok.
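
For anyone less familiar with the pattern, the kref-to-device
conversion follows the usual embedded struct device lifecycle.  A
generic minimal sketch (not fcloop code; struct foo and its helpers
are made up):

	#include <linux/device.h>
	#include <linux/slab.h>

	struct foo {
		struct device dev;
	};

	static void foo_release(struct device *dev)
	{
		/* the final put_device() lands here */
		kfree(container_of(dev, struct foo, dev));
	}

	static struct foo *foo_create(struct class *class, int id)
	{
		struct foo *foo = kzalloc(sizeof(*foo), GFP_KERNEL);

		if (!foo)
			return NULL;
		foo->dev.class = class;
		foo->dev.release = foo_release;
		device_initialize(&foo->dev);	/* refcount = 1 */
		dev_set_name(&foo->dev, "foo%d", id);
		if (device_add(&foo->dev)) {	/* make visible in sysfs */
			put_device(&foo->dev);	/* triggers foo_release() */
			return NULL;
		}
		return foo;
	}

	static void foo_destroy(struct foo *foo)
	{
		device_del(&foo->dev);	/* remove from sysfs */
		put_device(&foo->dev);	/* drop the initial reference */
	}

The point is that the object is only ever freed through release(), so
a valid name and sysfs entry exist for its entire visible lifetime.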

Reviewed-by: James Smart <james.smart@broadcom.com>

-- james


* Re: [PATCH 7/7] nvme-fcloop: use a device for lport
  2020-09-22 12:15 ` [PATCH 7/7] nvme-fcloop: use a device for lport Hannes Reinecke
@ 2020-10-05 17:45   ` James Smart
  0 siblings, 0 replies; 16+ messages in thread
From: James Smart @ 2020-10-05 17:45 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch


On 9/22/2020 5:15 AM, Hannes Reinecke wrote:
> Add a 'struct device' to the lport such that the lport will show
> up in sysfs and we get a valid name in the logging output.
>
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/target/fcloop.c | 53 ++++++++++++++++++++++++++++++++------------
>   1 file changed, 39 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index b84f8754b3a3..1af1917ddf4c 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -214,6 +214,7 @@ static LIST_HEAD(fcloop_nports);
>   static DEFINE_IDA(fcloop_portid_ida);
>   
>   struct fcloop_lport {
> +	struct device dev;
>   	struct nvme_fc_local_port *localport;
>   	struct list_head lport_list;
>   	struct completion unreg_done;
> @@ -1053,6 +1054,16 @@ static struct nvmet_fc_target_template tgttemplate = {
>   	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
>   };
>   
> +static void fcloop_release_lport(struct device *dev)
> +{
> +	struct fcloop_lport *lport =
> +		container_of(dev, struct fcloop_lport, dev);
> +
> +	ida_free(&fcloop_portid_ida, lport->port_id);
> +
> +	kfree(lport);
> +}
> +
>   static ssize_t
>   fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
>   		const char *buf, size_t count)
> @@ -1063,7 +1074,7 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
>   	struct fcloop_lport *lport;
>   	struct fcloop_lport_priv *lport_priv;
>   	unsigned long flags;
> -	int ret = -ENOMEM, port_id;
> +	int ret = -ENOMEM, port_id, port_num;
>   
>   	lport = kzalloc(sizeof(*lport), GFP_KERNEL);
>   	if (!lport)
> @@ -1094,25 +1105,39 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
>   		goto out_free_ida;
>   	}
>   
> +	lport->port_id = port_id;
> +	lport->dev.class = fcloop_class;
> +	lport->dev.release = fcloop_release_lport;
> +	dev_set_name(&lport->dev, "lport%d", lport->port_id);
> +	device_initialize(&lport->dev);
> +
>   	memset(&pinfo, 0, sizeof(pinfo));
>   	pinfo.node_name = opts->wwnn;
>   	pinfo.port_name = opts->wwpn;
>   	pinfo.port_role = opts->roles;
>   	pinfo.port_id = port_id;
>   
> -	ret = nvme_fc_register_localport(&pinfo, &fctemplate, NULL, &localport);
> -	if (!ret) {
> -		/* success */
> -		lport_priv = localport->private;
> -		lport_priv->lport = lport;
> +	ret = nvme_fc_register_localport(&pinfo, &fctemplate,
> +					 &lport->dev, &localport);
> +	if (ret)
> +		goto out_free_ida;
>   
> -		lport->localport = localport;
> -		INIT_LIST_HEAD(&lport->lport_list);
> +	/* success */
> +	lport_priv = localport->private;
> +	lport_priv->lport = lport;
>   
> -		spin_lock_irqsave(&fcloop_lock, flags);
> -		list_add_tail(&lport->lport_list, &fcloop_lports);
> -		spin_unlock_irqrestore(&fcloop_lock, flags);
> +	lport->localport = localport;
> +	INIT_LIST_HEAD(&lport->lport_list);
> +
> +	ret = device_add(&lport->dev);
> +	if (ret) {
> +		nvme_fc_unregister_localport(lport->localport);
> +		goto out_free_ida;
>   	}
> +	spin_lock_irqsave(&fcloop_lock, flags);
> +	list_add_tail(&lport->lport_list, &fcloop_lports);
> +	spin_unlock_irqrestore(&fcloop_lock, flags);
> +
>   out_free_ida:
>   	if (ret)
>   		ida_free(&fcloop_portid_ida, port_id);
> @@ -1138,15 +1163,15 @@ __wait_localport_unreg(struct fcloop_lport *lport)
>   {
>   	int ret;
>   
> +	device_del(&lport->dev);
> +
>   	init_completion(&lport->unreg_done);
>   
>   	ret = nvme_fc_unregister_localport(lport->localport);
>   
>   	wait_for_completion(&lport->unreg_done);
>   
> -	ida_free(&fcloop_portid_ida, lport->port_id);
> -
> -	kfree(lport);
> +	put_device(&lport->dev);
>   
>   	return ret;
>   }
ok.

Reviewed-by: James Smart <james.smart@broadcom.com>

-- james


* Re: [PATCH 3/7] nvme-fcloop: use IDA for port ids
  2020-10-05 17:33   ` James Smart
@ 2020-10-09 13:57     ` Hannes Reinecke
  0 siblings, 0 replies; 16+ messages in thread
From: Hannes Reinecke @ 2020-10-09 13:57 UTC (permalink / raw)
  To: linux-nvme

On 10/5/20 7:33 PM, James Smart wrote:
> On 9/22/2020 5:14 AM, Hannes Reinecke wrote:
>> Use an IDA to register port IDs to avoid duplicate port IDs. It
>> also will generate a unique port ID if none is specified.
> 
> I'd prefer a little more description, as I didn't pick up what this
> actually did from the comment.  Meaning: add auto-address assignment if
> one isn't specified; use an IDA for the allocation to guarantee
> uniqueness; honor user-supplied addresses as long as they are not
> already in use and are within the range supported by the IDA.
> 

Ok.

>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>> ---
>>   drivers/nvme/target/fcloop.c | 51 +++++++++++++++++++++++++++++++++++++-------
>>   1 file changed, 43 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
>> index 082aa4dee406..e56f323fa7d4 100644
>> --- a/drivers/nvme/target/fcloop.c
>> +++ b/drivers/nvme/target/fcloop.c
>> @@ -12,6 +12,7 @@
>>   #include <linux/nvme-fc-driver.h>
>>   #include <linux/nvme-fc.h>
>> +#define FCLOOP_MAX_AL_PA 0xEF
> 
> This is odd - no reason to bring in ALPAs and arbitrated loop.  Please
> rename it to state that it's the maximum address value allowed.  I'd
> prefer a 24-bit value, but that won't matter much, and 255 is fine.
> 
Hehe; yeah, pulling in ALPAs etc. is probably not necessary.
But I wanted to restrict things to less than 255 to keep alignment with 
real PtP Port IDs.
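
For context on the 255 limit: FC addresses are 24 bits, conventionally
split into domain, area, and port/AL_PA bytes, so a single-byte ID
loosely mimics loop or point-to-point addressing.  Illustrative macro,
not from the series:

	/* 24-bit FC address: domain (23:16) | area (15:8) | AL_PA (7:0) */
	#define FC_PORT_ID(domain, area, alpa) \
		(((u32)(domain) << 16) | ((u32)(area) << 8) | (u32)(alpa))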

I'll be resending the patchset.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

