* [PATCH 0/2] nvmet_fc: work around overalloc of queue elements
From: James Smart @ 2017-08-01 22:12 UTC
  To: linux-nvme; +Cc: linux-scsi, James Smart

At queue creation, the transport allocates a local job struct
(struct nvmet_fc_fcp_iod) for each possible element of the queue.
When a new CMD is received from the wire, a job struct is allocated
from the queue and then used for the duration of the command.
The job struct contains buffer space for the wire CMD IU. Thus,
upon allocation of the job struct, the CMD IU buffer is copied into
the job struct and the LLDD may immediately free/reuse the CMD IU
buffer passed in the call.

However, because FC is packetized and the FC LLDD api is asynchronous,
the LLDD may issue a hw command to send the wire response yet not
receive the hw completion for that command (and upcall the nvmet_fc
layer) before a new command is asynchronously received on the wire.
In other words, it's possible for the initiator to get the response
from the wire, believe a command slot is free, and send a new command
IU. The new command IU may be received by the LLDD and passed to the
transport before the LLDD has serviced the hw completion and made the
teardown calls for the original job struct. As such, no job struct is
available for the new io: it appears as if the host sent more queue
elements than the queue size allows, even though, from the host's
point of view, it did not.

Rather than treat this temporary overflow of queue resources as a
hard connection failure, change the api slightly to queue the new
command until the job resource frees up. The impact on the LLDD is
that, in cases where release can't be immediate, it must wait until a
new callback notifies it that the CMD IU buffer has been released.
Support for this api change is indicated by a non-NULL callback
function, and the new buffer-held semantic is indicated by an
-EOVERFLOW status code returned from nvmet_fc_rcv_fcp_req().
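
As a rough illustration, an LLDD receive path under the new api might
look like the following sketch (the my_lldd_* names and buffer
bookkeeping are hypothetical, not part of the api; only
nvmet_fc_rcv_fcp_req() and the return-code semantics are as described
above):

	static void my_lldd_recv_fcp_cmd(struct my_lldd_port *port,
					 struct my_lldd_buf *buf)
	{
		int rc;

		/* hand the received CMD IU to the nvmet_fc transport */
		rc = nvmet_fc_rcv_fcp_req(port->targetport, &buf->fcpreq,
					  buf->cmdiu, buf->cmdiu_len);
		if (rc == 0) {
			/* transport copied the IU; repost the buffer now */
			my_lldd_repost_buf(port, buf);
		} else if (rc == -EOVERFLOW) {
			/* transport still references the IU buffer; repost
			 * only after the defer_rcv() callback fires
			 */
			buf->held_by_transport = true;
		} else {
			/* transport did not accept; abort the exchange */
			my_lldd_abort_exchange(port, buf);
		}
	}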


James Smart (2):
  nvmet_fc: add defer_req callback for deferment of cmd buffer return
  lpfc: support nvmet_fc defer_rcv callback

 drivers/nvme/target/fc.c         | 212 +++++++++++++++++++++++++++++++++------
 drivers/scsi/lpfc/lpfc_attr.c    |   4 +-
 drivers/scsi/lpfc/lpfc_debugfs.c |   5 +-
 drivers/scsi/lpfc/lpfc_nvmet.c   |  30 ++++++
 drivers/scsi/lpfc/lpfc_nvmet.h   |   1 +
 include/linux/nvme-fc-driver.h   |   7 ++
 6 files changed, 229 insertions(+), 30 deletions(-)

-- 
2.13.1

* [PATCH 1/2] nvmet_fc: add defer_req callback for deferment of cmd buffer return
From: James Smart @ 2017-08-01 22:12 UTC
  To: linux-nvme; +Cc: linux-scsi, James Smart, James Smart

At queue creation, the transport allocates a local job struct
(struct nvmet_fc_fcp_iod) for each possible element of the queue.
When a new CMD is received from the wire, a job struct is allocated
from the queue and then used for the duration of the command.
The job struct contains buffer space for the wire CMD IU. Thus,
upon allocation of the job struct, the CMD IU buffer is copied into
the job struct and the LLDD may immediately free/reuse the CMD IU
buffer passed in the call.

However, because FC is packetized and the FC LLDD api is asynchronous,
the LLDD may issue a hw command to send the wire response yet not
receive the hw completion for that command (and upcall the nvmet_fc
layer) before a new command is asynchronously received on the wire.
In other words, it's possible for the initiator to get the response
from the wire, believe a command slot is free, and send a new command
IU. The new command IU may be received by the LLDD and passed to the
transport before the LLDD has serviced the hw completion and made the
teardown calls for the original job struct. As such, no job struct is
available for the new io: it appears as if the host sent more queue
elements than the queue size allows, even though, from the host's
point of view, it did not.

Rather than treat this as a hard connection failure, queue the new
request until a job struct frees up. Since there is no job struct to
copy the buffer into, a special return value must be returned to the
LLDD to signify that it should hold off on recycling the CMD IU
buffer. Later, when a job struct is allocated and the buffer copied,
a new LLDD callback notifies the LLDD that it may recycle its CMD IU
buffer.

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/target/fc.c       | 212 +++++++++++++++++++++++++++++++++++------
 include/linux/nvme-fc-driver.h |   7 ++
 2 files changed, 191 insertions(+), 28 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 49948a412dcc..3d16870367d0 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -114,6 +114,11 @@ struct nvmet_fc_tgtport {
 	u32				max_sg_cnt;
 };
 
+struct nvmet_fc_defer_fcp_req {
+	struct list_head		req_list;
+	struct nvmefc_tgt_fcp_req	*fcp_req;
+};
+
 struct nvmet_fc_tgt_queue {
 	bool				ninetypercent;
 	u16				qid;
@@ -132,6 +137,8 @@ struct nvmet_fc_tgt_queue {
 	struct nvmet_fc_tgt_assoc	*assoc;
 	struct nvmet_fc_fcp_iod		*fod;		/* array of fcp_iods */
 	struct list_head		fod_list;
+	struct list_head		pending_cmd_list;
+	struct list_head		avail_defer_list;
 	struct workqueue_struct		*work_q;
 	struct kref			ref;
 } __aligned(sizeof(unsigned long long));
@@ -223,6 +230,8 @@ static void nvmet_fc_tgt_q_put(struct nvmet_fc_tgt_queue *queue);
 static int nvmet_fc_tgt_q_get(struct nvmet_fc_tgt_queue *queue);
 static void nvmet_fc_tgtport_put(struct nvmet_fc_tgtport *tgtport);
 static int nvmet_fc_tgtport_get(struct nvmet_fc_tgtport *tgtport);
+static void nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+					struct nvmet_fc_fcp_iod *fod);
 
 
 /* *********************** FC-NVME DMA Handling **************************** */
@@ -463,9 +472,9 @@ static struct nvmet_fc_fcp_iod *
 nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue)
 {
 	static struct nvmet_fc_fcp_iod *fod;
-	unsigned long flags;
 
-	spin_lock_irqsave(&queue->qlock, flags);
+	/* Caller must hold queue->qlock */
+
 	fod = list_first_entry_or_null(&queue->fod_list,
 					struct nvmet_fc_fcp_iod, fcp_list);
 	if (fod) {
@@ -477,17 +486,37 @@ nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue)
 		 * will "inherit" that reference.
 		 */
 	}
-	spin_unlock_irqrestore(&queue->qlock, flags);
 	return fod;
 }
 
 
 static void
+nvmet_fc_queue_fcp_req(struct nvmet_fc_tgtport *tgtport,
+		       struct nvmet_fc_tgt_queue *queue,
+		       struct nvmefc_tgt_fcp_req *fcpreq)
+{
+	struct nvmet_fc_fcp_iod *fod = fcpreq->nvmet_fc_private;
+
+	/*
+	 * put all admin cmds on hw queue id 0. All io commands go to
+	 * the respective hw queue based on a modulo basis
+	 */
+	fcpreq->hwqid = queue->qid ?
+			((queue->qid - 1) % tgtport->ops->max_hw_queues) : 0;
+
+	if (tgtport->ops->target_features & NVMET_FCTGTFEAT_CMD_IN_ISR)
+		queue_work_on(queue->cpu, queue->work_q, &fod->work);
+	else
+		nvmet_fc_handle_fcp_rqst(tgtport, fod);
+}
+
+static void
 nvmet_fc_free_fcp_iod(struct nvmet_fc_tgt_queue *queue,
 			struct nvmet_fc_fcp_iod *fod)
 {
 	struct nvmefc_tgt_fcp_req *fcpreq = fod->fcpreq;
 	struct nvmet_fc_tgtport *tgtport = fod->tgtport;
+	struct nvmet_fc_defer_fcp_req *deferfcp;
 	unsigned long flags;
 
 	fc_dma_sync_single_for_cpu(tgtport->dev, fod->rspdma,
@@ -495,21 +524,56 @@ nvmet_fc_free_fcp_iod(struct nvmet_fc_tgt_queue *queue,
 
 	fcpreq->nvmet_fc_private = NULL;
 
-	spin_lock_irqsave(&queue->qlock, flags);
-	list_add_tail(&fod->fcp_list, &fod->queue->fod_list);
 	fod->active = false;
 	fod->abort = false;
 	fod->aborted = false;
 	fod->writedataactive = false;
 	fod->fcpreq = NULL;
+
+	tgtport->ops->fcp_req_release(&tgtport->fc_target_port, fcpreq);
+
+	spin_lock_irqsave(&queue->qlock, flags);
+	deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
+				struct nvmet_fc_defer_fcp_req, req_list);
+	if (!deferfcp) {
+		list_add_tail(&fod->fcp_list, &fod->queue->fod_list);
+		spin_unlock_irqrestore(&queue->qlock, flags);
+
+		/* Release reference taken at queue lookup and fod allocation */
+		nvmet_fc_tgt_q_put(queue);
+		return;
+	}
+
+	/* Re-use the fod for the next pending cmd that was deferred */
+	list_del(&deferfcp->req_list);
+
+	fcpreq = deferfcp->fcp_req;
+
+	/* deferfcp can be reused for another IO at a later date */
+	list_add_tail(&deferfcp->req_list, &queue->avail_defer_list);
+
 	spin_unlock_irqrestore(&queue->qlock, flags);
 
+	/* Save NVME CMD IO in fod */
+	memcpy(&fod->cmdiubuf, fcpreq->rspaddr, fcpreq->rsplen);
+
+	/* Setup new fcpreq to be processed */
+	fcpreq->rspaddr = NULL;
+	fcpreq->rsplen  = 0;
+	fcpreq->nvmet_fc_private = fod;
+	fod->fcpreq = fcpreq;
+	fod->active = true;
+
+	/* inform LLDD IO is now being processed */
+	tgtport->ops->defer_rcv(&tgtport->fc_target_port, fcpreq);
+
+	/* Submit deferred IO for processing */
+	nvmet_fc_queue_fcp_req(tgtport, queue, fcpreq);
+
 	/*
-	 * release the reference taken at queue lookup and fod allocation
+	 * Leave the queue lookup get reference taken when
+	 * fod was originally allocated.
 	 */
-	nvmet_fc_tgt_q_put(queue);
-
-	tgtport->ops->fcp_req_release(&tgtport->fc_target_port, fcpreq);
 }
 
 static int
@@ -569,6 +633,8 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc,
 	queue->port = assoc->tgtport->port;
 	queue->cpu = nvmet_fc_queue_to_cpu(assoc->tgtport, qid);
 	INIT_LIST_HEAD(&queue->fod_list);
+	INIT_LIST_HEAD(&queue->avail_defer_list);
+	INIT_LIST_HEAD(&queue->pending_cmd_list);
 	atomic_set(&queue->connected, 0);
 	atomic_set(&queue->sqtail, 0);
 	atomic_set(&queue->rsn, 1);
@@ -638,6 +704,7 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
 {
 	struct nvmet_fc_tgtport *tgtport = queue->assoc->tgtport;
 	struct nvmet_fc_fcp_iod *fod = queue->fod;
+	struct nvmet_fc_defer_fcp_req *deferfcp;
 	unsigned long flags;
 	int i, writedataactive;
 	bool disconnect;
@@ -666,6 +733,35 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
 			}
 		}
 	}
+
+	/* Cleanup defer'ed IOs in queue */
+	list_for_each_entry(deferfcp, &queue->avail_defer_list, req_list) {
+		list_del(&deferfcp->req_list);
+		kfree(deferfcp);
+	}
+
+	for (;;) {
+		deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
+				struct nvmet_fc_defer_fcp_req, req_list);
+		if (!deferfcp)
+			break;
+
+		list_del(&deferfcp->req_list);
+		spin_unlock_irqrestore(&queue->qlock, flags);
+
+		tgtport->ops->defer_rcv(&tgtport->fc_target_port,
+				deferfcp->fcp_req);
+
+		tgtport->ops->fcp_abort(&tgtport->fc_target_port,
+				deferfcp->fcp_req);
+
+		tgtport->ops->fcp_req_release(&tgtport->fc_target_port,
+				deferfcp->fcp_req);
+
+		kfree(deferfcp);
+
+		spin_lock_irqsave(&queue->qlock, flags);
+	}
 	spin_unlock_irqrestore(&queue->qlock, flags);
 
 	flush_workqueue(queue->work_q);
@@ -2144,11 +2240,38 @@ nvmet_fc_handle_fcp_rqst_work(struct work_struct *work)
  * Pass a FC-NVME FCP CMD IU received from the FC link to the nvmet-fc
  * layer for processing.
  *
- * The nvmet-fc layer will copy cmd payload to an internal structure for
- * processing.  As such, upon completion of the routine, the LLDD may
- * immediately free/reuse the CMD IU buffer passed in the call.
+ * The nvmet_fc layer allocates a local job structure (struct
+ * nvmet_fc_fcp_iod) from the queue for the io and copies the
+ * CMD IU buffer to the job structure. As such, on a successful
+ * completion (returns 0), the LLDD may immediately free/reuse
+ * the CMD IU buffer passed in the call.
+ *
+ * However, because FC is packetized and the FC LLDD api is
+ * asynchronous - the LLDD may issue a hw command to send the response
+ * but not receive the hw completion (and upcall the nvmet_fc layer)
+ * before a new command is asynchronously received - it's possible for
+ * a command to be received before the LLDD and nvmet_fc have recycled
+ * the job structure. This gives the appearance of more commands
+ * received than fit in the sq.
+ * To alleviate this scenario, a temporary queue is maintained in the
+ * transport for pending LLDD requests waiting for a queue job structure.
+ * In these "overrun" cases, a temporary queue element is allocated,
+ * the LLDD request and CMD IU buffer information are remembered, and
+ * the routine returns an -EOVERFLOW status. Subsequently, when a queue
+ * job structure is freed, it is immediately reallocated for anything on
+ * the pending request list. The LLDD's defer_rcv() callback is called,
+ * informing the LLDD that it may reuse the CMD IU buffer, and the io
+ * is then started normally with the transport.
  *
- * If this routine returns error, the lldd should abort the exchange.
+ * The LLDD, when receiving an -EOVERFLOW completion status, is to treat
+ * the completion as successful but must not reuse the CMD IU buffer
+ * until the LLDD's defer_rcv() callback has been called for the
+ * corresponding struct nvmefc_tgt_fcp_req pointer.
+ *
+ * If there is any other condition in which an error occurs, the
+ * transport will return a non-zero status indicating the error.
+ * In all cases other than -EOVERFLOW, the transport has not accepted the
+ * request and the LLDD should abort the exchange.
  *
  * @target_port: pointer to the (registered) target port the FCP CMD IU
  *              was received on.
@@ -2166,6 +2289,8 @@ nvmet_fc_rcv_fcp_req(struct nvmet_fc_target_port *target_port,
 	struct nvme_fc_cmd_iu *cmdiu = cmdiubuf;
 	struct nvmet_fc_tgt_queue *queue;
 	struct nvmet_fc_fcp_iod *fod;
+	struct nvmet_fc_defer_fcp_req *deferfcp;
+	unsigned long flags;
 
 	/* validate iu, so the connection id can be used to find the queue */
 	if ((cmdiubuf_len != sizeof(*cmdiu)) ||
@@ -2186,29 +2311,60 @@ nvmet_fc_rcv_fcp_req(struct nvmet_fc_target_port *target_port,
 	 * when the fod is freed.
 	 */
 
+	spin_lock_irqsave(&queue->qlock, flags);
+
 	fod = nvmet_fc_alloc_fcp_iod(queue);
-	if (!fod) {
+	if (fod) {
+		spin_unlock_irqrestore(&queue->qlock, flags);
+
+		fcpreq->nvmet_fc_private = fod;
+		fod->fcpreq = fcpreq;
+
+		memcpy(&fod->cmdiubuf, cmdiubuf, cmdiubuf_len);
+
+		nvmet_fc_queue_fcp_req(tgtport, queue, fcpreq);
+
+		return 0;
+	}
+
+	if (!tgtport->ops->defer_rcv) {
+		spin_unlock_irqrestore(&queue->qlock, flags);
 		/* release the queue lookup reference */
 		nvmet_fc_tgt_q_put(queue);
 		return -ENOENT;
 	}
 
-	fcpreq->nvmet_fc_private = fod;
-	fod->fcpreq = fcpreq;
-	/*
-	 * put all admin cmds on hw queue id 0. All io commands go to
-	 * the respective hw queue based on a modulo basis
-	 */
-	fcpreq->hwqid = queue->qid ?
-			((queue->qid - 1) % tgtport->ops->max_hw_queues) : 0;
-	memcpy(&fod->cmdiubuf, cmdiubuf, cmdiubuf_len);
+	deferfcp = list_first_entry_or_null(&queue->avail_defer_list,
+			struct nvmet_fc_defer_fcp_req, req_list);
+	if (deferfcp) {
+		/* Just re-use one that was previously allocated */
+		list_del(&deferfcp->req_list);
+	} else {
+		spin_unlock_irqrestore(&queue->qlock, flags);
 
-	if (tgtport->ops->target_features & NVMET_FCTGTFEAT_CMD_IN_ISR)
-		queue_work_on(queue->cpu, queue->work_q, &fod->work);
-	else
-		nvmet_fc_handle_fcp_rqst(tgtport, fod);
+		/* Now we need to dynamically allocate one */
+		deferfcp = kmalloc(sizeof(*deferfcp), GFP_KERNEL);
+		if (!deferfcp) {
+			/* release the queue lookup reference */
+			nvmet_fc_tgt_q_put(queue);
+			return -ENOMEM;
+		}
+		spin_lock_irqsave(&queue->qlock, flags);
+	}
 
-	return 0;
+	/* For now, use rspaddr / rsplen to save payload information */
+	fcpreq->rspaddr = cmdiubuf;
+	fcpreq->rsplen  = cmdiubuf_len;
+	deferfcp->fcp_req = fcpreq;
+
+	/* defer processing till a fod becomes available */
+	list_add_tail(&deferfcp->req_list, &queue->pending_cmd_list);
+
+	/* NOTE: the queue lookup reference is still valid */
+
+	spin_unlock_irqrestore(&queue->qlock, flags);
+
+	return -EOVERFLOW;
 }
 EXPORT_SYMBOL_GPL(nvmet_fc_rcv_fcp_req);
 
diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index c659953236d3..9c5cb4480806 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -346,6 +346,11 @@ struct nvme_fc_remote_port {
  *       indicating an FC transport Aborted status.
  *       Entrypoint is Mandatory.
  *
+ * @defer_rcv:  Called by the transport to signal the LLDD that it has
+ *       begun processing of a previously received NVME CMD IU. The LLDD
+ *       is now free to re-use the rcv buffer associated with the
+ *       nvmefc_tgt_fcp_req.
+ *
  * @max_hw_queues:  indicates the maximum number of hw queues the LLDD
  *       supports for cpu affinitization.
  *       Value is Mandatory. Must be at least 1.
@@ -846,6 +851,8 @@ struct nvmet_fc_target_template {
 				struct nvmefc_tgt_fcp_req *fcpreq);
 	void (*fcp_req_release)(struct nvmet_fc_target_port *tgtport,
 				struct nvmefc_tgt_fcp_req *fcpreq);
+	void (*defer_rcv)(struct nvmet_fc_target_port *tgtport,
+				struct nvmefc_tgt_fcp_req *fcpreq);
 
 	u32	max_hw_queues;
 	u16	max_sgl_segments;
-- 
2.13.1

* [PATCH 2/2] lpfc: support nvmet_fc defer_rcv callback
From: James Smart @ 2017-08-01 22:12 UTC
  To: linux-nvme; +Cc: linux-scsi, James Smart, Dick Kennedy, James Smart

Currently, nvmet_fc_rcv_fcp_req() always copies the FC-NVME CMD IU
to a temporary buffer before returning, allowing the driver to
immediately repost the buffer to the hardware.

To address timing conditions between queue element structures and
asynchronous command reception, the nvmet_fc transport may
occasionally need to hold on to the CMD IU buffer for a short period.
In these cases, nvmet_fc_rcv_fcp_req() returns a special code
(-EOVERFLOW), and the LLDD must wait until the new defer_rcv LLDD
callback is invoked before recycling the buffer back to the hw.

This patch adds support for the new nvmet_fc transport defer_rcv
callback and recognition of the new error code when passing commands
to the transport.

This patch is intended to enter the kernel through the nvme block
tree which pulls in the nvmet_fc api change at the same time. It is
not to be merged via the scsi trees without the latest nvme support
in it.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_attr.c    |  4 +++-
 drivers/scsi/lpfc/lpfc_debugfs.c |  5 ++++-
 drivers/scsi/lpfc/lpfc_nvmet.c   | 30 ++++++++++++++++++++++++++++++
 drivers/scsi/lpfc/lpfc_nvmet.h   |  1 +
 4 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 4ed48ed38e79..7ee1a94c0b33 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -205,8 +205,10 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 				atomic_read(&tgtp->xmt_ls_rsp_error));
 
 		len += snprintf(buf+len, PAGE_SIZE-len,
-				"FCP: Rcv %08x Release %08x Drop %08x\n",
+				"FCP: Rcv %08x Defer %08x Release %08x "
+				"Drop %08x\n",
 				atomic_read(&tgtp->rcv_fcp_cmd_in),
+				atomic_read(&tgtp->rcv_fcp_cmd_defer),
 				atomic_read(&tgtp->xmt_fcp_release),
 				atomic_read(&tgtp->rcv_fcp_cmd_drop));
 
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index 5cc8b0f7d885..744f3f395b64 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -782,8 +782,11 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
 				atomic_read(&tgtp->xmt_ls_rsp_error));
 
 		len += snprintf(buf + len, size - len,
-				"FCP: Rcv %08x Drop %08x\n",
+				"FCP: Rcv %08x Defer %08x Release %08x "
+				"Drop %08x\n",
 				atomic_read(&tgtp->rcv_fcp_cmd_in),
+				atomic_read(&tgtp->rcv_fcp_cmd_defer),
+				atomic_read(&tgtp->xmt_fcp_release),
 				atomic_read(&tgtp->rcv_fcp_cmd_drop));
 
 		if (atomic_read(&tgtp->rcv_fcp_cmd_in) !=
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index fbeec344c6cc..a5dc1b83064a 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -841,12 +841,31 @@ lpfc_nvmet_xmt_fcp_release(struct nvmet_fc_target_port *tgtport,
 	lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
 }
 
+static void
+lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
+		     struct nvmefc_tgt_fcp_req *rsp)
+{
+	struct lpfc_nvmet_tgtport *tgtp;
+	struct lpfc_nvmet_rcv_ctx *ctxp =
+		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.fcp_req);
+	struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer;
+	struct lpfc_hba *phba = ctxp->phba;
+
+	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
+			 ctxp->oxid, ctxp->size, smp_processor_id());
+
+	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+	atomic_inc(&tgtp->rcv_fcp_cmd_defer);
+	lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
+}
+
 static struct nvmet_fc_target_template lpfc_tgttemplate = {
 	.targetport_delete = lpfc_nvmet_targetport_delete,
 	.xmt_ls_rsp     = lpfc_nvmet_xmt_ls_rsp,
 	.fcp_op         = lpfc_nvmet_xmt_fcp_op,
 	.fcp_abort      = lpfc_nvmet_xmt_fcp_abort,
 	.fcp_req_release = lpfc_nvmet_xmt_fcp_release,
+	.defer_rcv	= lpfc_nvmet_defer_rcv,
 
 	.max_hw_queues  = 1,
 	.max_sgl_segments = LPFC_NVMET_DEFAULT_SEGS,
@@ -1504,6 +1523,17 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 		return;
 	}
 
+	/* Processing of FCP command is deferred */
+	if (rc == -EOVERFLOW) {
+		lpfc_nvmeio_data(phba,
+				 "NVMET RCV BUSY: xri x%x sz %d from %06x\n",
+				 oxid, size, sid);
+		/* defer reposting rcv buffer till .defer_rcv callback */
+		ctxp->rqb_buffer = (void *)nvmebuf;
+		atomic_inc(&tgtp->rcv_fcp_cmd_out);
+		return;
+	}
+
 	atomic_inc(&tgtp->rcv_fcp_cmd_drop);
 	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 			"6159 FCP Drop IO x%x: err x%x: x%x x%x x%x\n",
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
index e675ef17be08..48a76788b003 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ b/drivers/scsi/lpfc/lpfc_nvmet.h
@@ -49,6 +49,7 @@ struct lpfc_nvmet_tgtport {
 	atomic_t rcv_fcp_cmd_in;
 	atomic_t rcv_fcp_cmd_out;
 	atomic_t rcv_fcp_cmd_drop;
+	atomic_t rcv_fcp_cmd_defer;
 	atomic_t xmt_fcp_release;
 
 	/* Stats counters - lpfc_nvmet_xmt_fcp_op */
-- 
2.13.1

* Re: [PATCH 1/2] nvmet_fc: add defer_req callback for deferment of cmd buffer return
From: Johannes Thumshirn @ 2017-08-02  8:16 UTC
  To: James Smart; +Cc: linux-nvme, James Smart, linux-scsi

On Tue, Aug 01, 2017 at 03:12:39PM -0700, James Smart wrote:
> @@ -463,9 +472,9 @@ static struct nvmet_fc_fcp_iod *
>  nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue)
>  {
>  	static struct nvmet_fc_fcp_iod *fod;
> -	unsigned long flags;
>  
> -	spin_lock_irqsave(&queue->qlock, flags);
> +	/* Caller must hold queue->qlock */
+       lockdep_assert_held(&queue->qlock);
So we can check if the caller really holds the queue lock.

> +
>  	fod = list_first_entry_or_null(&queue->fod_list,
>  					struct nvmet_fc_fcp_iod, fcp_list);
>  	if (fod) {

[...]

> +	for (;;) {
> +		deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
> +				struct nvmet_fc_defer_fcp_req, req_list);
> +		if (!deferfcp)
> +			break;

        while ((deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
				struct nvmet_fc_defer_fcp_req,
				req_list)) != NULL) {
?

Other than that,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

* Re: [PATCH 2/2] lpfc: support nvmet_fc defer_rcv callback
From: Johannes Thumshirn @ 2017-08-02  8:32 UTC
  To: James Smart; +Cc: linux-nvme, Dick Kennedy, James Smart, linux-scsi

On Tue, Aug 01, 2017 at 03:12:40PM -0700, James Smart wrote:
> This patch is intended to enter the kernel through the nvme block
> tree which pulls in the nvmet_fc api change at the same time. It is
> not to be merged via the scsi trees without the latest nvme support
> in it.

This should be placed below the '---' separator, so git won't pick it up and
preserve it in the history.

[...]

>  
>  		len += snprintf(buf+len, PAGE_SIZE-len,
> -				"FCP: Rcv %08x Release %08x Drop %08x\n",
> +				"FCP: Rcv %08x Defer %08x Release %08x "
> +				"Drop %08x\n",

Please don't split the string across lines, it makes grepping hard.
Checkpatch actually warns you about that.

>  				atomic_read(&tgtp->rcv_fcp_cmd_in),
> +				atomic_read(&tgtp->rcv_fcp_cmd_defer),
>  				atomic_read(&tgtp->xmt_fcp_release),
>  				atomic_read(&tgtp->rcv_fcp_cmd_drop));
>  
> diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
> index 5cc8b0f7d885..744f3f395b64 100644
> --- a/drivers/scsi/lpfc/lpfc_debugfs.c
> +++ b/drivers/scsi/lpfc/lpfc_debugfs.c
> @@ -782,8 +782,11 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
>  				atomic_read(&tgtp->xmt_ls_rsp_error));
>  
>  		len += snprintf(buf + len, size - len,
> -				"FCP: Rcv %08x Drop %08x\n",
> +				"FCP: Rcv %08x Defer %08x Release %08x "
> +				"Drop %08x\n",
>  				atomic_read(&tgtp->rcv_fcp_cmd_in),
> +				atomic_read(&tgtp->rcv_fcp_cmd_defer),
> +				atomic_read(&tgtp->xmt_fcp_release),
>  				atomic_read(&tgtp->rcv_fcp_cmd_drop));

Ditto.


[...]

> +	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;

No need to cast from void *

[...]

> +	/* Processing of FCP command is deferred */
> +	if (rc == -EOVERFLOW) {
> +		lpfc_nvmeio_data(phba,
> +				 "NVMET RCV BUSY: xri x%x sz %d from %06x\n",
> +				 oxid, size, sid);
> +		/* defer reposting rcv buffer till .defer_rcv callback */
> +		ctxp->rqb_buffer = (void *)nvmebuf;

nvmebuf is a 'struct rqb_dmabuf *' and ctxp->rqb_buffer expects 'struct
rqb_dmabuf *', why do you need a void * cast here?


-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

* Re: [PATCH 2/2] lpfc: support nvmet_fc defer_rcv callback
From: Johannes Thumshirn @ 2017-08-02  9:17 UTC
  To: James Smart; +Cc: Dick Kennedy, James Smart, linux-nvme, linux-scsi

On Wed, Aug 02, 2017 at 10:32:20AM +0200, Johannes Thumshirn wrote:
> On Tue, Aug 01, 2017 at 03:12:40PM -0700, James Smart wrote:
> > This patch is intended to enter the kernel through the nvme block
> > tree which pulls in the nvmet_fc api change at the same time. It is
> > not to be merged via the scsi trees without the latest nvme support
> > in it.
> 
> This should be placed below the '---' separator, so git won't pick it up and
> preserve it in the history.

Btw, I just found out and thought it's worth sharing: you can use 'git notes'
for these notes, and if you use 'git format-patch --notes', git automatically
places your notes underneath the separator.
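
For example (the note text and commit count here are illustrative):

	$ git notes add -m "Merge via the nvme tree; not for the scsi trees." HEAD
	$ git format-patch --notes -2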

Byte,
	Johannes

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

* Re: [PATCH 1/2] nvmet_fc: add defer_req callback for deferment of cmd buffer return
  2017-08-02  8:16     ` Johannes Thumshirn
@ 2017-08-02 14:29       ` James Smart
  -1 siblings, 0 replies; 24+ messages in thread
From: James Smart @ 2017-08-02 14:29 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: linux-nvme, James Smart, linux-scsi

On 8/2/2017 1:16 AM, Johannes Thumshirn wrote:
>> +	for (;;) {
>> +		deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
>> +				struct nvmet_fc_defer_fcp_req, req_list);
>> +		if (!deferfcp)
>> +			break;
> 
>          while ((deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
> 				struct nvmet_fc_defer_fcp_req,
> 				req_list)) != NULL) {
> ?

To me, this is harder to read. But it's not a big deal.

-- james

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/2] lpfc: support nvmet_fc defer_rcv callback
  2017-08-02  8:32     ` Johannes Thumshirn
@ 2017-08-02 14:32       ` James Smart
  -1 siblings, 0 replies; 24+ messages in thread
From: James Smart @ 2017-08-02 14:32 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: linux-nvme, Dick Kennedy, James Smart, linux-scsi

On 8/2/2017 1:32 AM, Johannes Thumshirn wrote:
>>   		len += snprintf(buf+len, PAGE_SIZE-len,
>> -				"FCP: Rcv %08x Release %08x Drop %08x\n",
>> +				"FCP: Rcv %08x Defer %08x Release %08x "
>> +				"Drop %08x\n",
> 
> Please don't split the string across lines, it makes grepping hard.
> Checkpatch actually warns you about that.

The indentation plus the length of the string exceed 80 characters, so it 
has to be split.  lpfc is littered with this in the debug path; it'll be 
one of the things we address in the efct refactoring. For now, this 
style has been grandfathered in.
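
For reference, the unsplit form would have to look roughly like this (a
sketch; at this indentation the string line runs well past 80 columns,
which is where the checkpatch long-line warning and the don't-split-strings
advice collide):

		len += snprintf(buf+len, PAGE_SIZE-len,
				"FCP: Rcv %08x Defer %08x Release %08x Drop %08x\n",
				...);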

-- james

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] nvmet_fc: add defer_req callback for deferment of cmd buffer return
  2017-08-02  8:16     ` Johannes Thumshirn
@ 2017-08-10  9:04       ` Christoph Hellwig
  -1 siblings, 0 replies; 24+ messages in thread
From: Christoph Hellwig @ 2017-08-10  9:04 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: James Smart, James Smart, linux-nvme, linux-scsi

On Wed, Aug 02, 2017 at 10:16:17AM +0200, Johannes Thumshirn wrote:
> On Tue, Aug 01, 2017 at 03:12:39PM -0700, James Smart wrote:
> > @@ -463,9 +472,9 @@ static struct nvmet_fc_fcp_iod *
> >  nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue)
> >  {
> >  	static struct nvmet_fc_fcp_iod *fod;
> > -	unsigned long flags;
> >  
> > -	spin_lock_irqsave(&queue->qlock, flags);
> > +	/* Caller must hold queue->qlock */
> +       lockdep_assert_held(&queue->qlock);
> So we can check if the caller really holds the queue lock.

I've added the assert after applying the patch to the nvme-4.13 tree.
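
With the assert in place, the head of the function presumably reads like
this (a sketch of the applied form; lockdep_assert_held() is a no-op unless
lockdep is enabled, and the 'static' on fod is kept here as in the quoted
code, see the separate question about it further down the thread):

	static struct nvmet_fc_fcp_iod *
	nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue)
	{
		static struct nvmet_fc_fcp_iod *fod;

		lockdep_assert_held(&queue->qlock);

		fod = list_first_entry_or_null(&queue->fod_list,
						struct nvmet_fc_fcp_iod, fcp_list);
		...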

> > +
> >  	fod = list_first_entry_or_null(&queue->fod_list,
> >  					struct nvmet_fc_fcp_iod, fcp_list);
> >  	if (fod) {
> 
> [...]
> 
> > +	for (;;) {
> > +		deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
> > +				struct nvmet_fc_defer_fcp_req, req_list);
> > +		if (!deferfcp)
> > +			break;
> 
>         while ((deferfcp = list_first_entry_or_null(&queue->pending_cmd_list,
> 				struct nvmet_fc_defer_fcp_req,
> 				req_list)) != NULL) {
> ?
> 
> Other than that,

I have to admit that I'd just prefer an open-coded list_empty +
container_of in both cases.  But given that all that is just personal
preference, I've left the code as submitted by James.
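
For comparison, the open-coded variant would look something like this (a
sketch of the alternative, not what was merged; list_first_entry() is
itself a container_of() wrapper):

	while (!list_empty(&queue->pending_cmd_list)) {
		deferfcp = list_first_entry(&queue->pending_cmd_list,
				struct nvmet_fc_defer_fcp_req, req_list);
		...
	}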

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] nvmet_fc: add defer_req callback for deferment of cmd buffer return
  2017-08-01 22:12   ` James Smart
@ 2017-08-10  9:06     ` Christoph Hellwig
  -1 siblings, 0 replies; 24+ messages in thread
From: Christoph Hellwig @ 2017-08-10  9:06 UTC (permalink / raw)
  To: James Smart; +Cc: linux-nvme, James Smart, linux-scsi

> @@ -463,9 +472,9 @@ static struct nvmet_fc_fcp_iod *
>  nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue)
>  {
>  	static struct nvmet_fc_fcp_iod *fod;

This isn't new, but is this really supposed to be a static variable,
that is all instances of this code sharing it use the same?

After a short code inspection this looks like a nasty bug to me.
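
If it is, the fix would presumably be as simple as dropping the
storage-class specifier so that every call gets its own automatic
pointer (a sketch, not a merged patch):

	struct nvmet_fc_fcp_iod *fod;	/* per-call, not shared across queues */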

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/2] lpfc: support nvmet_fc defer_rcv callback
  2017-08-01 22:12   ` James Smart
@ 2017-08-10  9:08     ` Christoph Hellwig
  -1 siblings, 0 replies; 24+ messages in thread
From: Christoph Hellwig @ 2017-08-10  9:08 UTC (permalink / raw)
  To: James Smart; +Cc: linux-nvme, Dick Kennedy, James Smart, linux-scsi

Applied to nvme-4.13 with the void casts removed, but the strings left
as in the surrounding code.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] nvmet_fc: add defer_req callback for deferment of cmd buffer return
  2017-08-10  9:06     ` Christoph Hellwig
@ 2017-08-16  7:53       ` Christoph Hellwig
  -1 siblings, 0 replies; 24+ messages in thread
From: Christoph Hellwig @ 2017-08-16  7:53 UTC (permalink / raw)
  To: James Smart; +Cc: James Smart, linux-nvme, linux-scsi

On Thu, Aug 10, 2017 at 11:06:04AM +0200, Christoph Hellwig wrote:
> > @@ -463,9 +472,9 @@ static struct nvmet_fc_fcp_iod *
> >  nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue)
> >  {
> >  	static struct nvmet_fc_fcp_iod *fod;
> 
> This isn't new, but is this really supposed to be a static variable,
> that is all instances of this code sharing it use the same?
> 
> After a short code inspection this looks like a nasty bug to me.

James, can you look into this asap?

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread

Thread overview: 12 messages
2017-08-01 22:12 [PATCH 0/2] nvmet_fc: work around overalloc of queue elements James Smart
2017-08-01 22:12 ` [PATCH 1/2] nvmet_fc: add defer_req callback for deferment of cmd buffer return James Smart
2017-08-02  8:16   ` Johannes Thumshirn
2017-08-02 14:29     ` James Smart
2017-08-10  9:04     ` Christoph Hellwig
2017-08-10  9:06   ` Christoph Hellwig
2017-08-16  7:53     ` Christoph Hellwig
2017-08-01 22:12 ` [PATCH 2/2] lpfc: support nvmet_fc defer_rcv callback James Smart
2017-08-02  8:32   ` Johannes Thumshirn
2017-08-02  9:17     ` Johannes Thumshirn
2017-08-02 14:32     ` James Smart
2017-08-10  9:08   ` Christoph Hellwig