* nvme-tcp: SUCCESS Flag in C2HData PDU
From: Steve Blightman @ 2019-03-08  0:16 UTC


Do the Linux nvme-tcp host & target implementations support the use of 
the SUCCESS Flag in the C2HData PDU? I could not find any reference to 
it in the code, even though TP 8000 doesn't seem to suggest it's 
optional. Am I missing something?


* nvme-tcp: SUCCESS Flag in C2HData PDU
From: Sagi Grimberg @ 2019-03-08  1:06 UTC


Hi Steve,

> Do the Linux nvme-tcp host & target implementations support the use of 
> the SUCCESS Flag in the C2HData PDU?

Currently this optimization is not used by the target and not expected
by the host (although it should be very easy to support).

> I could not find any reference to 
> it in the code, even though TP 8000 doesn't seem to suggest it's 
> optional. Am I missing something?

It is optional: the target may set the SUCCESS flag. The host needs to
be able to support it, though, so it should be fairly simple to add.
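
For reference, the flag bits involved are defined in
include/linux/nvme-tcp.h; the helper below is only an illustrative
sketch of the decision the host has to make (the function is invented
for illustration, not kernel code):

/* PDU header flag bits, as defined in include/linux/nvme-tcp.h */
enum {
	NVME_TCP_F_HDGST	= (1 << 0),	/* header digest present */
	NVME_TCP_F_DDGST	= (1 << 1),	/* data digest present */
	NVME_TCP_F_DATA_LAST	= (1 << 2),	/* last C2HData PDU of the command */
	NVME_TCP_F_DATA_SUCCESS	= (1 << 3),	/* no Response Capsule will follow */
};

/*
 * Illustrative only: on the last C2HData PDU, SUCCESS means the command
 * is complete and the host must synthesize the completion itself;
 * without it, the host keeps waiting for the Response Capsule.
 */
static int c2hdata_completes_command(unsigned char flags)
{
	if (!(flags & NVME_TCP_F_DATA_LAST))
		return 0;	/* more data PDUs to come */
	return !!(flags & NVME_TCP_F_DATA_SUCCESS);
}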


* nvme-tcp: SUCCESS Flag in C2HData PDU
From: Steve Blightman @ 2019-03-08 16:46 UTC


Thanks Sagi,
I understand that use of the SUCCESS flag is optional for a target, but 
I think the spec should either state that host support is mandatory or, 
better, define a flag in the Initialize Connection Request PDU (ICReq) 
that indicates whether the host supports it. Otherwise, implementing it 
in a target may be problematic.
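
To make that concrete, here is a purely hypothetical sketch: TP 8000
defines no such bit today, and the flag name below is invented. The
layout is patterned on struct nvme_tcp_icreq_pdu, with one capability
byte carved out of the reserved area:

/* Hypothetical: no such flag exists in TP 8000 or the kernel headers. */
#define NVME_TCP_ICREQ_F_SUCCESS_SUPPORTED	(1 << 0)

/* Body of an ICReq-like PDU, patterned on struct nvme_tcp_icreq_pdu. */
struct icreq_body_example {
	__le16	pfv;		/* PDU format version */
	__u8	hpda;		/* host PDU data alignment */
	__u8	digest;		/* existing HDGST/DDGST enable bits */
	__le32	maxr2t;		/* max outstanding R2Ts */
	__u8	caps;		/* hypothetical host capability bits */
	__u8	rsvd[111];	/* remainder of the reserved area */
};

A controller would then set NVME_TCP_F_DATA_SUCCESS only on connections
whose ICReq advertised the capability.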


On 3/7/2019 5:06 PM, Sagi Grimberg wrote:
> Hi Steve,
>
>> Do the Linux nvme-tcp host & target implementations support the use of 
>> the SUCCESS Flag in the C2HData PDU?
>
> Currently this optimization is not used by the target and not expected
> by the host (although it should be very easy to support).
>
>> I could not find any reference to it in the code, even though TP 
>> 8000 doesn't seem to suggest it's optional. Am I missing something?
>
> It is optional: the target may set the SUCCESS flag. The host needs to
> be able to support it, though, so it should be fairly simple to add.


* nvme-tcp: SUCCESS Flag in C2HData PDU
From: Oliver Smith-Denny @ 2019-03-08 20:59 UTC


On 3/7/2019 5:06 PM, Sagi Grimberg wrote:
>> It is optional: the target may set the SUCCESS flag. The host needs to
>> be able to support it, though, so it should be fairly simple to add.

Sagi - is the following patch in the right direction? If it is,
I will write some controller code to test it and send a proper
patch email. I just wanted to get your feedback before continuing
down this path. Do we need to submit an actual CQE to the NVMe
layer, or does nvme_end_request alone suffice? My understanding
is that the patch below handles both the data-digest and
non-data-digest cases, but please let me know if I am missing
something. So far this patch is untested, but it does compile on
the nvme-5.1 branch.


diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 208ee51..50de81c 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -618,6 +618,41 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb,
         return ret;
  }

+/*
+ * if this is the last PDU and the SUCCESS flag is set,
+ * the controller won't send a Response Capsule. We need
+ * to synthesize a cqe with the status field set and
+ * submit it to the NVMe layer.
+ *
+ * if this is not the last PDU and we receive the SUCCESS
+ * flag, we treat this as a fatal transport error.
+ */
+static int nvme_tcp_process_success_flag(struct nvme_tcp_queue *queue)
+{
+       struct nvme_tcp_data_pdu *pdu = (void *)queue->pdu;
+       struct request *rq;
+       union nvme_result res = {};
+
+       rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
+       if (!rq) {
+               dev_err(queue->ctrl->ctrl.device,
+                       "queue %d tag %#x not found\n",
+                       nvme_tcp_queue_id(queue), pdu->command_id);
+               return -ENOENT;
+       }
+
+       if (!(pdu->hdr.flags & NVME_TCP_F_DATA_LAST)) {
+               dev_err(queue->ctrl->ctrl.device,
+                       "queue %d tag 0x%x SUCCESS flag set but not last PDU\n",
+                       nvme_tcp_queue_id(queue), rq->tag);
+               return -EINVAL;
+       }
+
+       nvme_end_request(rq, cpu_to_le16(NVME_SC_SUCCESS), res);
+
+       return 0;
+}
+
  static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
                               unsigned int *offset, size_t *len)
  {
@@ -685,7 +720,10 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
                         nvme_tcp_ddgst_final(queue->rcv_hash, &queue->exp_ddgst);
                         queue->ddgst_remaining = NVME_TCP_DIGEST_LENGTH;
                 } else {
-                       nvme_tcp_init_recv_ctx(queue);
+                       if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS)
+                               return nvme_tcp_process_success_flag(queue);
+
+                       nvme_tcp_init_recv_ctx(queue);
                 }
         }

@@ -695,6 +733,7 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
  static int nvme_tcp_recv_ddgst(struct nvme_tcp_queue *queue,
                 struct sk_buff *skb, unsigned int *offset, size_t *len)
  {
+       struct nvme_tcp_data_pdu *pdu = (void *)queue->pdu;
         char *ddgst = (char *)&queue->recv_ddgst;
         size_t recv_len = min_t(size_t, *len, queue->ddgst_remaining);
         off_t off = NVME_TCP_DIGEST_LENGTH - queue->ddgst_remaining;
@@ -718,6 +757,10 @@ static int nvme_tcp_recv_ddgst(struct nvme_tcp_queue *queue,
                 return -EIO;
         }

+       if (!queue->data_remaining &&
+           pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS)
+               return nvme_tcp_process_success_flag(queue);
+
         nvme_tcp_init_recv_ctx(queue);
         return 0;
  }
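
The controller-side counterpart to the patch above is conceptually
small. As a rough, untested sketch (the helper and its parameters are
invented; a real change would live in the C2HData setup path of
drivers/nvme/target/tcp.c): when queueing the final C2HData PDU of a
command that completed successfully, also set SUCCESS and suppress the
Response Capsule.

/*
 * Illustrative only: the host then synthesizes the completion from the
 * SUCCESS flag instead of waiting for a Response Capsule.
 */
static void c2h_data_set_flags(struct nvme_tcp_hdr *hdr, bool last_pdu,
		bool cmd_succeeded, bool *send_resp_capsule)
{
	*send_resp_capsule = true;
	if (last_pdu) {
		hdr->flags |= NVME_TCP_F_DATA_LAST;
		if (cmd_succeeded) {
			hdr->flags |= NVME_TCP_F_DATA_SUCCESS;
			*send_resp_capsule = false;
		}
	}
}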


* nvme-tcp: SUCCESS Flag in C2HData PDU
From: Sagi Grimberg @ 2019-03-08 23:04 UTC


Hi Oliver,

> Sagi - is the following patch in the right direction? If it is,
> I will write some controller code to test it and send a proper
> patch email. I just wanted to get your feedback before continuing
> down this path. Do we need to submit an actual CQE to the NVMe
> layer, or does nvme_end_request alone suffice? My understanding
> is that the patch below handles both the data-digest and
> non-data-digest cases, but please let me know if I am missing
> something. So far this patch is untested, but it does compile on
> the nvme-5.1 branch.

The patch below is indeed in the right direction; it's actually pretty
similar to what I did, but your version seems slightly more elegant.

Thanks for looking at it. I have it implemented on both the host and the
controller; it would be great to have a tester for it.
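
In case it helps whoever picks this up, something along these lines
should exercise the path once both sides are patched (address, NQN,
and namespace device below are placeholders):

# connect to the TCP controller
nvme connect -t tcp -a 192.168.0.10 -s 4420 \
	-n nqn.2019-03.io.example:success-test

# random reads drive C2HData PDUs; 4k keeps each command to a single
# (last) data PDU, larger block sizes span several PDUs per command
fio --name=c2h --filename=/dev/nvme1n1 --rw=randread --bs=4k \
	--iodepth=16 --direct=1 --runtime=60 --time_based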


* nvme-tcp: SUCCESS Flag in C2HData PDU
From: Oliver Smith-Denny @ 2019-03-08 23:09 UTC


On 03/08/2019 03:04 PM, Sagi Grimberg wrote:
> Hi Oliver,
> 
>> Sagi - is the following patch in the right direction? If it is,
>> I will write some controller code to test it and send a proper
>> patch email. I just wanted to get your feedback before continuing
>> down this path. Do we need to submit an actual CQE to the NVMe
>> layer, or does nvme_end_request alone suffice? My understanding
>> is that the patch below handles both the data-digest and
>> non-data-digest cases, but please let me know if I am missing
>> something. So far this patch is untested, but it does compile on
>> the nvme-5.1 branch.
> 
> The patch below is indeed in the right direction; it's actually pretty
> similar to what I did, but your version seems slightly more elegant.

Whew, thanks. Glad it looked okay (:

> Thanks for looking at it. I have it implemented on both the host and the
> controller; it would be great to have a tester for it.


Sure, I'm happy to test out your implementation! You haven't sent a 
patch for it yet, have you? If so, I missed it...

^ permalink raw reply	[flat|nested] 6+ messages in thread
