* [PATCH 0/2] nvmet-tcp: fix connect error when setting inline data size to 0
@ 2021-05-20 11:30 Hou Pu
  2021-05-20 11:30 ` [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response Hou Pu
  2021-05-20 11:30 ` [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero Hou Pu
  0 siblings, 2 replies; 10+ messages in thread
From: Hou Pu @ 2021-05-20 11:30 UTC (permalink / raw)
  To: sagi, hch, chaitanya.kulkarni
  Cc: linux-nvme, houpu.main, larrystevenwise, maxg, elad.grupi

Hi,

For NVMe over TCP, a connect error occurs when the inline data size is set to 0.
Patch 2 fixes this.

Patch 1 is a small fix; it could be merged with patch 2 if you want.


Hou Pu (2):
  nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response
  nvmet-tcp: fix connect error when setting param_inline_data_size to
    zero.

 drivers/nvme/target/tcp.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

-- 
2.28.0



* [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response
  2021-05-20 11:30 [PATCH 0/2] nvmet-tcp: fix connect error when setting inline data size to 0 Hou Pu
@ 2021-05-20 11:30 ` Hou Pu
  2021-05-20 22:39   ` Sagi Grimberg
  2021-05-25  7:24   ` Christoph Hellwig
  2021-05-20 11:30 ` [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero Hou Pu
  1 sibling, 2 replies; 10+ messages in thread
From: Hou Pu @ 2021-05-20 11:30 UTC (permalink / raw)
  To: sagi, hch, chaitanya.kulkarni
  Cc: linux-nvme, houpu.main, larrystevenwise, maxg, elad.grupi

Using "<=" instead "<" to compare inline data size.

Fixes: bdaf13279192 ("nvmet-tcp: fix a segmentation fault during io parsing error")
Signed-off-by: Hou Pu <houpu.main@gmail.com>
---
 drivers/nvme/target/tcp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index f9f34f6caf5e..d8aceef83284 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -550,7 +550,7 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
 		 * nvmet_req_init is completed.
 		 */
 		if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
-		    len && len < cmd->req.port->inline_data_size &&
+		    len && len <= cmd->req.port->inline_data_size &&
 		    nvme_is_write(cmd->req.cmd))
 			return;
 	}
-- 
2.28.0



* [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero.
  2021-05-20 11:30 [PATCH 0/2] nvmet-tcp: fix connect error when setting inline data size to 0 Hou Pu
  2021-05-20 11:30 ` [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response Hou Pu
@ 2021-05-20 11:30 ` Hou Pu
  2021-05-20 22:44   ` Sagi Grimberg
  1 sibling, 1 reply; 10+ messages in thread
From: Hou Pu @ 2021-05-20 11:30 UTC (permalink / raw)
  To: sagi, hch, chaitanya.kulkarni
  Cc: linux-nvme, houpu.main, larrystevenwise, maxg, elad.grupi

When inline_data_size is set to zero, connect fails. This can be
reproduced with the following steps.

Controller side:
mkdir /sys/kernel/config/nvmet/ports/1
cd /sys/kernel/config/nvmet/ports/1
echo 0.0.0.0 > addr_traddr
echo 4421 > addr_trsvcid
echo ipv4 > addr_adrfam
echo tcp > addr_trtype
echo 0 > param_inline_data_size
ln -s /sys/kernel/config/nvmet/subsystems/mysub /sys/kernel/config/nvmet/ports/1/subsystems/mysub

Host side:
[  325.145323][  T203] nvme nvme1: Connect command failed, error wo/DNR bit: 22
[  325.159481][  T203] nvme nvme1: failed to connect queue: 0 ret=16406
Failed to write to /dev/nvme-fabrics: Input/output error

Kernel log from controller side is:
[  114.567411][   T56] nvmet_tcp: queue 0: failed to map data
[  114.568093][   T56] nvmet_tcp: unexpected pdu type 201

When the admin-connect command arrives with 1024 bytes of inline data,
nvmet_tcp_map_data() compares that size with cmd->req.port->inline_data_size
(which is 0), so the command is responded to with an error code. However, the
admin-connect command is always allowed to carry up to 8192 bytes of inline
data according to the NVMe over Fabrics specification.

The host side decides the inline data size when allocating a queue based on
the queue number: queue 0 uses 8k and the other queues use ioccsz * 16. The
target side should do the same.

Fixes: 0d5ee2b2ab4f ("nvmet-rdma: support max(16KB, PAGE_SIZE) inline data")
Signed-off-by: Hou Pu <houpu.main@gmail.com>
---
 drivers/nvme/target/tcp.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d8aceef83284..83985ab8c3aa 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -167,6 +167,24 @@ static const struct nvmet_fabrics_ops nvmet_tcp_ops;
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
 static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
 
+static inline int nvmet_tcp_inline_data_size(struct nvmet_tcp_cmd *cmd)
+{
+	struct nvmet_tcp_queue *queue = cmd->queue;
+	struct nvme_command *nvme_cmd = cmd->req.cmd;
+	int inline_data_size = NVME_TCP_ADMIN_CCSZ;
+	u16 qid = 0;
+
+	if (likely(queue->nvme_sq.ctrl)) {
+		/* The connect admin/io queue has been executed. */
+		qid = queue->nvme_sq.qid;
+		if (qid)
+			inline_data_size = cmd->req.port->inline_data_size;
+	} else if (nvme_cmd->connect.qid)
+		inline_data_size = cmd->req.port->inline_data_size;
+
+	return inline_data_size;
+}
+
 static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
 		struct nvmet_tcp_cmd *cmd)
 {
@@ -367,7 +385,7 @@ static int nvmet_tcp_map_data(struct nvmet_tcp_cmd *cmd)
 		if (!nvme_is_write(cmd->req.cmd))
 			return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
 
-		if (len > cmd->req.port->inline_data_size)
+		if (len > nvmet_tcp_inline_data_size(cmd))
 			return NVME_SC_SGL_INVALID_OFFSET | NVME_SC_DNR;
 		cmd->pdu_len = len;
 	}
@@ -550,7 +568,7 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
 		 * nvmet_req_init is completed.
 		 */
 		if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
-		    len && len <= cmd->req.port->inline_data_size &&
+		    len && len <= nvmet_tcp_inline_data_size(cmd) &&
 		    nvme_is_write(cmd->req.cmd))
 			return;
 	}
@@ -907,7 +925,7 @@ static void nvmet_tcp_handle_req_failure(struct nvmet_tcp_queue *queue,
 	int ret;
 
 	if (!nvme_is_write(cmd->req.cmd) ||
-	    data_len > cmd->req.port->inline_data_size) {
+	    data_len > nvmet_tcp_inline_data_size(cmd)) {
 		nvmet_prepare_receive_pdu(queue);
 		return;
 	}
-- 
2.28.0



* Re: [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response
  2021-05-20 11:30 ` [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response Hou Pu
@ 2021-05-20 22:39   ` Sagi Grimberg
  2021-05-21  2:36     ` Hou Pu
  2021-05-25  7:24   ` Christoph Hellwig
  1 sibling, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2021-05-20 22:39 UTC (permalink / raw)
  To: Hou Pu, hch, chaitanya.kulkarni
  Cc: linux-nvme, larrystevenwise, maxg, elad.grupi

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero.
  2021-05-20 11:30 ` [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero Hou Pu
@ 2021-05-20 22:44   ` Sagi Grimberg
  2021-05-21  2:57     ` Hou Pu
  0 siblings, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2021-05-20 22:44 UTC (permalink / raw)
  To: Hou Pu, hch, chaitanya.kulkarni
  Cc: linux-nvme, larrystevenwise, maxg, elad.grupi


> +static inline int nvmet_tcp_inline_data_size(struct nvmet_tcp_cmd *cmd)
> +{
> +	struct nvmet_tcp_queue *queue = cmd->queue;
> +	struct nvme_command *nvme_cmd = cmd->req.cmd;
> +	int inline_data_size = NVME_TCP_ADMIN_CCSZ;
> +	u16 qid = 0;
> +
> +	if (likely(queue->nvme_sq.ctrl)) {
> +		/* The connect admin/io queue has been executed. */
> +		qid = queue->nvme_sq.qid;
> +		if (qid)
> +			inline_data_size = cmd->req.port->inline_data_size;
> +	} else if (nvme_cmd->connect.qid)
> +		inline_data_size = cmd->req.port->inline_data_size;

How can a connection to an I/O queue arrive without having the ctrl
reference installed? Is this for the failure case?


* Re: [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response
  2021-05-20 22:39   ` Sagi Grimberg
@ 2021-05-21  2:36     ` Hou Pu
  0 siblings, 0 replies; 10+ messages in thread
From: Hou Pu @ 2021-05-21  2:36 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: hch, Chaitanya Kulkarni, linux-nvme, larrystevenwise, maxg, Grupi, Elad

On Fri, May 21, 2021 at 6:39 AM Sagi Grimberg <sagi@grimberg.me> wrote:
>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

Thanks,
Hou


* Re: [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero.
  2021-05-20 22:44   ` Sagi Grimberg
@ 2021-05-21  2:57     ` Hou Pu
  2021-05-21 18:04       ` Sagi Grimberg
  0 siblings, 1 reply; 10+ messages in thread
From: Hou Pu @ 2021-05-21  2:57 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: hch, Chaitanya Kulkarni, linux-nvme, larrystevenwise, maxg, Grupi, Elad

On Fri, May 21, 2021 at 6:44 AM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> > +static inline int nvmet_tcp_inline_data_size(struct nvmet_tcp_cmd *cmd)
> > +{
> > +     struct nvmet_tcp_queue *queue = cmd->queue;
> > +     struct nvme_command *nvme_cmd = cmd->req.cmd;
> > +     int inline_data_size = NVME_TCP_ADMIN_CCSZ;
> > +     u16 qid = 0;
> > +
> > +     if (likely(queue->nvme_sq.ctrl)) {
> > +             /* The connect admin/io queue has been executed. */
> > +             qid = queue->nvme_sq.qid;
> > +             if (qid)
> > +                     inline_data_size = cmd->req.port->inline_data_size;
> > +     } else if (nvme_cmd->connect.qid)
> > +             inline_data_size = cmd->req.port->inline_data_size;
>
> How can a connection to an I/O queue arrive without having the ctrl
> reference installed? Is this for the failure case?

Hi Sagi,
AFAIK, after the host finishes setting up the admin queue, it connects to
the I/O queue and sends the io-connect command. At that point the
nvmet_tcp_queue has just been allocated and does not yet have a valid
queue->nvme_sq.ctrl; it is assigned after io-connect in nvmet_install_queue().
So this function tries to find the correct queue number both before and
after a fabrics connect command.

Thanks,
Hou


* Re: [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero.
  2021-05-21  2:57     ` Hou Pu
@ 2021-05-21 18:04       ` Sagi Grimberg
  2021-05-22 14:27         ` Hou Pu
  0 siblings, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2021-05-21 18:04 UTC (permalink / raw)
  To: Hou Pu
  Cc: hch, Chaitanya Kulkarni, linux-nvme, larrystevenwise, maxg, Grupi, Elad


>>> +static inline int nvmet_tcp_inline_data_size(struct nvmet_tcp_cmd *cmd)
>>> +{
>>> +     struct nvmet_tcp_queue *queue = cmd->queue;
>>> +     struct nvme_command *nvme_cmd = cmd->req.cmd;
>>> +     int inline_data_size = NVME_TCP_ADMIN_CCSZ;
>>> +     u16 qid = 0;
>>> +
>>> +     if (likely(queue->nvme_sq.ctrl)) {
>>> +             /* The connect admin/io queue has been executed. */
>>> +             qid = queue->nvme_sq.qid;
>>> +             if (qid)
>>> +                     inline_data_size = cmd->req.port->inline_data_size;
>>> +     } else if (nvme_cmd->connect.qid)
>>> +             inline_data_size = cmd->req.port->inline_data_size;
>>
>> How can a connection to an I/O queue arrive without having the ctrl
>> reference installed? Is this for the failure case?
> 
> Hi Sagi,
> AFAIK, after the host finishes setting up the admin queue, it connects to
> the I/O queue and sends the io-connect command. At that point the
> nvmet_tcp_queue has just been allocated and does not yet have a valid
> queue->nvme_sq.ctrl; it is assigned after io-connect in nvmet_install_queue().
> So this function tries to find the correct queue number both before and
> after a fabrics connect command.

Why do you need the inline_data_size before the connect? It's only
relevant for nvme I/O...


* Re: [PATCH 2/2] nvmet-tcp: fix connect error when setting param_inline_data_size to zero.
  2021-05-21 18:04       ` Sagi Grimberg
@ 2021-05-22 14:27         ` Hou Pu
  0 siblings, 0 replies; 10+ messages in thread
From: Hou Pu @ 2021-05-22 14:27 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: hch, Chaitanya Kulkarni, linux-nvme, larrystevenwise, maxg, Grupi, Elad

On Sat, May 22, 2021 at 2:04 AM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> >>> +static inline int nvmet_tcp_inline_data_size(struct nvmet_tcp_cmd *cmd)
> >>> +{
> >>> +     struct nvmet_tcp_queue *queue = cmd->queue;
> >>> +     struct nvme_command *nvme_cmd = cmd->req.cmd;
> >>> +     int inline_data_size = NVME_TCP_ADMIN_CCSZ;
> >>> +     u16 qid = 0;
> >>> +
> >>> +     if (likely(queue->nvme_sq.ctrl)) {
> >>> +             /* The connect admin/io queue has been executed. */
> >>> +             qid = queue->nvme_sq.qid;
> >>> +             if (qid)
> >>> +                     inline_data_size = cmd->req.port->inline_data_size;
> >>> +     } else if (nvme_cmd->connect.qid)
> >>> +             inline_data_size = cmd->req.port->inline_data_size;
> >>
> >> How can a connection to an I/O queue arrive without having the ctrl
> >> reference installed? Is this for the failure case?
> >
> > Hi Sagi,
> > AFAIK, after the host finishes setting up the admin queue, it connects to
> > the I/O queue and sends the io-connect command. At that point the
> > nvmet_tcp_queue has just been allocated and does not yet have a valid
> > queue->nvme_sq.ctrl; it is assigned after io-connect in nvmet_install_queue().
> > So this function tries to find the correct queue number both before and
> > after a fabrics connect command.
>
> Why do you need the inline_data_size before the connect? It's only
> relevant for nvme I/O...

From my observation, the host always sends the admin-connect command with
nvmf_connect_data as inline data (by setting the SG type to
NVME_SGL_FMT_DATA_DESC). This is hard-coded on the host side.

For the io-connect command, the host sends nvmf_connect_data according to
ioccsz: if the size of nvmf_connect_data is smaller than ioccsz * 16, it is
sent as inline data. ioccsz is obtained by identifying the controller and is
controlled by param_inline_data_size on the target side.

If param_inline_data_size is set to zero, the admin-connect command is not
received correctly by the target: the target assumes there is no inline data
(by looking at param_inline_data_size), while the admin-connect command does
in fact carry inline data.

This patch makes param_inline_data_size affect only the I/O queues, while the
admin queue always allows 8k of inline data. This is what the host side
currently does and, AFAIK, also what the specification says.
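
For reference, the host-side sizing roughly looks like the sketch below. It
is a simplified paraphrase of what drivers/nvme/host/tcp.c does, not the
actual code; the helper name is made up for illustration and details may
differ between kernel versions:

/*
 * Sketch only: the admin queue always gets NVME_TCP_ADMIN_CCSZ (8k) of
 * inline data, while I/O queues are sized from the ioccsz value reported
 * by the controller at identify time.
 */
static size_t host_inline_data_size(u16 qid, u32 ioccsz)
{
	size_t capsule_len;

	if (qid == 0)
		capsule_len = sizeof(struct nvme_command) + NVME_TCP_ADMIN_CCSZ;
	else
		capsule_len = ioccsz * 16;

	/* Inline data is whatever fits in the capsule after the command. */
	return capsule_len - sizeof(struct nvme_command);
}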

Thanks,
Hou


* Re: [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response
  2021-05-20 11:30 ` [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response Hou Pu
  2021-05-20 22:39   ` Sagi Grimberg
@ 2021-05-25  7:24   ` Christoph Hellwig
  1 sibling, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2021-05-25  7:24 UTC (permalink / raw)
  To: Hou Pu
  Cc: sagi, hch, chaitanya.kulkarni, linux-nvme, larrystevenwise, maxg,
	elad.grupi

Thanks,

applied to nvme-5.13.

