* [PATCH RFC 0/4] nvmet: add support for port based io priority
@ 2019-05-16  3:21 Chaitanya Kulkarni
  2019-05-16  3:21 ` [RFC PATCH 1/4] nvmet: add iopriority definitions for port Chaitanya Kulkarni
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-16  3:21 UTC


Hi all,

This is a small patch series that adds support for a per-port ioprio
field so that the user on the NVMeOF target side can set the desired
I/O priority from configfs.

For each request, we derive the I/O priority value from the port
associated with the request, for both the bio (bdev) and kiocb (file)
backends.

In this way, the target side can prioritize a port, and in turn its
I/O requests, when a scheduler (e.g. BFQ) is in the picture.

This is useful in the following scenario:

Port 1-------> NVMeOF Ctrl1 ------>  NSID1______
                                                |------> ----------------------
                                                         | Actual Namespace A |
                                          ______|------> ----------------------
Port 2-------> NVMeOF Ctrl2 ------>  NSID2
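
For illustration, here is a minimal userspace sketch (not part of this
series; it assumes the nvmet configfs layout with the ioprio_class and
ioprio_value attributes added by patch 2) that gives the two ports
above different best-effort levels:

#include <stdio.h>
#include <stdlib.h>

/* Write one integer into a port's ioprio attribute file. */
static void set_attr(const char *port, const char *attr, int val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/config/nvmet/ports/%s/%s", port, attr);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%d\n", val);
	fclose(f);
}

int main(void)
{
	/* Port 1: best-effort class (2), highest level (0). */
	set_attr("1", "ioprio_class", 2);
	set_attr("1", "ioprio_value", 0);
	/* Port 2: best-effort class (2), lowest level (7). */
	set_attr("2", "ioprio_class", 2);
	set_attr("2", "ioprio_value", 7);
	return 0;
}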

Any feedback is welcome. I'd like to understand whether this approach is
desirable; if not, what is the right way to use I/O priorities on the
target side so that the NVMeOF target can take advantage of the
scheduler infrastructure?

Regards,
Chaitanya

Chaitanya Kulkarni (4):
  nvmet: add iopriority definitions for port
  nvmet: allow user to specify iopriority from configfs
  nvmet: set iopriority for each bio
  nvmet: set iopriority for each kiocb for file

 drivers/nvme/target/configfs.c    | 76 +++++++++++++++++++++++++++++++
 drivers/nvme/target/io-cmd-bdev.c |  1 +
 drivers/nvme/target/io-cmd-file.c |  1 +
 drivers/nvme/target/nvmet.h       |  3 ++
 4 files changed, 81 insertions(+)

-- 
2.17.0


* [RFC PATCH 1/4] nvmet: add iopriority definitions for port
  2019-05-16  3:21 [PATCH RFC 0/4] nvmet: add support for port based io priority Chaitanya Kulkarni
@ 2019-05-16  3:21 ` Chaitanya Kulkarni
  2019-05-16  3:21 ` [PATCH] nvmet: get rid of extra line in the tcp code Chaitanya Kulkarni
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-16  3:21 UTC


This patch adds a per-port iopriority class and value, which get
initialized by the user through configfs.
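
For reference, the composed ioprio field follows the standard kernel
encoding from include/linux/ioprio.h (quoted roughly here for context;
the class lives in the top bits of the u16, the per-class level in the
low bits):

#define IOPRIO_CLASS_SHIFT	(13)
#define IOPRIO_PRIO_VALUE(class, data)	\
	(((class) << IOPRIO_CLASS_SHIFT) | data)

/* e.g. best-effort class (2) at level 4: */
u16 ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4);	/* (2 << 13) | 4 */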

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/target/nvmet.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index c25d88fc9dec..b81b1501cf55 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -140,6 +140,9 @@ struct nvmet_port {
 	void				*priv;
 	bool				enabled;
 	int				inline_data_size;
+	u8				ioprio_class;
+	u8				ioprio_value;
+	u16				ioprio;
 };
 
 static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
-- 
2.17.0


* [PATCH] nvmet: get rid of extra line in the tcp code
  2019-05-16  3:21 [PATCH RFC 0/4] nvmet: add support for port based io priority Chaitanya Kulkarni
  2019-05-16  3:21 ` [RFC PATCH 1/4] nvmet: add iopriority definitions for port Chaitanya Kulkarni
@ 2019-05-16  3:21 ` Chaitanya Kulkarni
  2019-05-16  3:22   ` Chaitanya Kulkarni
  2019-05-16  3:21 ` [RFC PATCH 2/4] nvmet: allow user to specify iopriority from configfs Chaitanya Kulkarni
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-16  3:21 UTC


This is a code cleanup patch; it doesn't change any functionality.
It removes the extra blank line at the end of the function.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/target/tcp.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 69b83fa0c76c..f019af264081 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -557,7 +557,6 @@ static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd)
 	}
 
 	return 1;
-
 }
 
 static int nvmet_try_send_response(struct nvmet_tcp_cmd *cmd,
-- 
2.17.0


* [RFC PATCH 2/4] nvmet: allow user to specify iopriority from configfs
  2019-05-16  3:21 [PATCH RFC 0/4] nvmet: add support for port based io priority Chaitanya Kulkarni
  2019-05-16  3:21 ` [RFC PATCH 1/4] nvmet: add iopriority definitions for port Chaitanya Kulkarni
  2019-05-16  3:21 ` [PATCH] nvmet: get rid of extra line in the tcp code Chaitanya Kulkarni
@ 2019-05-16  3:21 ` Chaitanya Kulkarni
  2019-05-16  3:21 ` [RFC PATCH 3/4] nvmet: set iopriority for each bio Chaitanya Kulkarni
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-16  3:21 UTC


This patch adds a configfs interface to let the user initialize the
iopriority class and value for each port. By default we use the NONE
iopriority class.
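
Note: the store handlers below recompute the composed ioprio without
any locking (see the XXX comments in the diff). As a hedged sketch
only, one way to serialize the updates would be a per-port mutex (a
hypothetical 'ioprio_lock' field that this series does not add):

/*
 * Sketch only, not part of this series: 'ioprio_lock' is a
 * hypothetical mutex that would need to be added to struct
 * nvmet_port and initialized in nvmet_ports_make().
 */
static ssize_t nvmet_ioprio_class_store(struct config_item *item,
		const char *page, size_t count)
{
	struct nvmet_port *port = to_nvmet_port(item);
	u8 class;

	if (kstrtou8(page, 0, &class))
		return -EINVAL;
	if (class > IOPRIO_CLASS_IDLE)	/* valid classes are 0-3 */
		return -EINVAL;

	mutex_lock(&port->ioprio_lock);
	port->ioprio_class = class;
	port->ioprio = IOPRIO_PRIO_VALUE(port->ioprio_class,
			port->ioprio_value);
	mutex_unlock(&port->ioprio_lock);
	return count;
}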

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/target/configfs.c | 76 ++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 08dd5af357f7..51548b0f605e 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -11,6 +11,7 @@
 #include <linux/ctype.h>
 #include <linux/pci.h>
 #include <linux/pci-p2pdma.h>
+#include <linux/ioprio.h>
 
 #include "nvmet.h"
 
@@ -143,6 +144,74 @@ static ssize_t nvmet_addr_traddr_store(struct config_item *item,
 
 CONFIGFS_ATTR(nvmet_, addr_traddr);
 
+static ssize_t nvmet_ioprio_class_show(struct config_item *item, char *page)
+{
+	return sprintf(page, "%d\n", to_nvmet_port(item)->ioprio_class);
+}
+
+static ssize_t nvmet_ioprio_class_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	u8 class;
+
+	if (kstrtou8(page, 0, &class))
+		return -EINVAL;
+
+	if (class > 3) {
+		pr_err("ioprio class %u is out of range [0-3]\n", class);
+		port->ioprio_class = 0;
+		port->ioprio_value = 0;
+		return -EINVAL;
+	}
+	port->ioprio_class = class;
+	/*
+	 * Right now recompute the composed ioprio every time the class
+	 * or the value is updated.
+	 *
+	 * XXX: add lock here.
+	 */
+	port->ioprio = IOPRIO_PRIO_VALUE(port->ioprio_class,
+			port->ioprio_value);
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, ioprio_class);
+
+static ssize_t nvmet_ioprio_value_show(struct config_item *item, char *page)
+{
+	return sprintf(page, "%d\n", to_nvmet_port(item)->ioprio_value);
+}
+
+static ssize_t nvmet_ioprio_value_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	u8 value;
+
+	if (kstrtou8(page, 0, &value))
+		return -EINVAL;
+
+	if (value > 7) {
+		pr_err("ioprio value %u is out of range [0-7]\n", value);
+		port->ioprio_class = 0;
+		port->ioprio_value = 0;
+		return -EINVAL;
+	}
+	port->ioprio_value = value;
+	/*
+	 * Right now recompute the composed ioprio every time the class
+	 * or the value is updated.
+	 *
+	 * XXX: add lock here.
+	 */
+	port->ioprio = IOPRIO_PRIO_VALUE(port->ioprio_class,
+			port->ioprio_value);
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, ioprio_value);
+
 static ssize_t nvmet_addr_treq_show(struct config_item *item,
 		char *page)
 {
@@ -1158,6 +1227,8 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
 	&nvmet_attr_addr_trsvcid,
 	&nvmet_attr_addr_trtype,
 	&nvmet_attr_param_inline_data_size,
+	&nvmet_attr_ioprio_class,
+	&nvmet_attr_ioprio_value,
 	NULL,
 };
 
@@ -1206,6 +1277,11 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
 	INIT_LIST_HEAD(&port->referrals);
 	port->inline_data_size = -1;	/* < 0 == let the transport choose */
 
+	port->ioprio_class = IOPRIO_CLASS_NONE;
+	port->ioprio_value = 0;
+	port->ioprio = IOPRIO_PRIO_VALUE(port->ioprio_class,
+			port->ioprio_value);
+
 	port->disc_addr.portid = cpu_to_le16(portid);
 	port->disc_addr.treq = NVMF_TREQ_DISABLE_SQFLOW;
 	config_group_init_type_name(&port->group, name, &nvmet_port_type);
-- 
2.17.0


* [RFC PATCH 3/4] nvmet: set iopriority for each bio
  2019-05-16  3:21 [PATCH RFC 0/4] nvmet: add support for port based io priority Chaitanya Kulkarni
                   ` (2 preceding siblings ...)
  2019-05-16  3:21 ` [RFC PATCH 2/4] nvmet: allow user to specify iopriority from configfs Chaitanya Kulkarni
@ 2019-05-16  3:21 ` Chaitanya Kulkarni
  2019-05-16  3:21 ` [RFC PATCH 4/4] nvmet: set iopriority for each kiocb for file Chaitanya Kulkarni
  2019-05-17  9:45 ` [PATCH RFC 0/4] nvmet: add support for port based io priority Christoph Hellwig
  5 siblings, 0 replies; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-16  3:21 UTC


This patch sets the bio's iopriority value before we submit the bio.
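
For context, bio_set_prio() used here is just a small helper that
stores the priority in the bio; roughly, from include/linux/bio.h of
this era:

static inline void bio_set_prio(struct bio *bio, unsigned short prio)
{
	bio->bi_ioprio = prio;
}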

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/target/io-cmd-bdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 3efc52f9c309..099dba477df8 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -152,6 +152,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 			bio_set_dev(bio, req->ns->bdev);
 			bio->bi_iter.bi_sector = sector;
 			bio_set_op_attrs(bio, op, op_flags);
+			bio_set_prio(bio, req->port->ioprio);
 
 			bio_chain(bio, prev);
 			submit_bio(prev);
-- 
2.17.0


* [RFC PATCH 4/4] nvmet: set iopriority for each kiocb for file
  2019-05-16  3:21 [PATCH RFC 0/4] nvmet: add support for port based io priority Chaitanya Kulkarni
                   ` (3 preceding siblings ...)
  2019-05-16  3:21 ` [RFC PATCH 3/4] nvmet: set iopriority for each bio Chaitanya Kulkarni
@ 2019-05-16  3:21 ` Chaitanya Kulkarni
  2019-05-16  7:32   ` Johannes Thumshirn
  2019-05-17  9:45 ` [PATCH RFC 0/4] nvmet: add support for port based io priority Christoph Hellwig
  5 siblings, 1 reply; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-16  3:21 UTC


This patch sets the kiocb's iopriority value before we call the
respective iter function.
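
For context, the priority stored in the kiocb is consumed by the lower
layers; e.g. the block direct-I/O path of this era copied it into each
bio it built (roughly, from __blkdev_direct_IO() in fs/block_dev.c;
shown here for reference, not part of this patch):

bio->bi_ioprio = iocb->ki_ioprio;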

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/target/io-cmd-file.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 05453f5d1448..e05fdc9ae025 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -110,6 +110,7 @@ static ssize_t nvmet_file_submit_bvec(struct nvmet_req *req, loff_t pos,
 	iocb->ki_pos = pos;
 	iocb->ki_filp = req->ns->file;
 	iocb->ki_flags = ki_flags | iocb_flags(req->ns->file);
+	iocb->ki_ioprio = req->port->ioprio;
 
 	return call_iter(iocb, &iter);
 }
-- 
2.17.0


* [PATCH] nvmet: get rid of extra line in the tcp code
  2019-05-16  3:21 ` [PATCH] nvmet: get rid of extra line in the tcp code Chaitanya Kulkarni
@ 2019-05-16  3:22   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-16  3:22 UTC


Please ignore this patch, sorry for the noise.

On 5/15/19 8:21 PM, Chaitanya Kulkarni wrote:
> This is a code cleanup patch; it doesn't change any functionality.
> It removes the extra blank line at the end of the function.
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
> ---
>   drivers/nvme/target/tcp.c | 1 -
>   1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 69b83fa0c76c..f019af264081 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -557,7 +557,6 @@ static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd)
>   	}
>   
>   	return 1;
> -
>   }
>   
>   static int nvmet_try_send_response(struct nvmet_tcp_cmd *cmd,
> 


* [RFC PATCH 4/4] nvmet: set iopriority for each kiocb for file
  2019-05-16  3:21 ` [RFC PATCH 4/4] nvmet: set iopriority for each kiocb for file Chaitanya Kulkarni
@ 2019-05-16  7:32   ` Johannes Thumshirn
  0 siblings, 0 replies; 16+ messages in thread
From: Johannes Thumshirn @ 2019-05-16  7:32 UTC


I'd merge this patch into the previous one.
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn at suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* [PATCH RFC 0/4] nvmet: add support for port based io priority
  2019-05-16  3:21 [PATCH RFC 0/4] nvmet: add support for port based io priority Chaitanya Kulkarni
                   ` (4 preceding siblings ...)
  2019-05-16  3:21 ` [RFC PATCH 4/4] nvmet: set iopriority for each kiocb for file Chaitanya Kulkarni
@ 2019-05-17  9:45 ` Christoph Hellwig
  2019-05-17 16:28   ` Chaitanya Kulkarni
  2019-05-24  8:08   ` Sagi Grimberg
  5 siblings, 2 replies; 16+ messages in thread
From: Christoph Hellwig @ 2019-05-17  9:45 UTC


On Wed, May 15, 2019 at 08:21:04PM -0700, Chaitanya Kulkarni wrote:
> Any feedback is welcome. I'd like to understand whether this approach is
> desirable; if not, what is the right way to use I/O priorities on the
> target side so that the NVMeOF target can take advantage of the
> scheduler infrastructure?

I don't really see the point.  Why would we treat I/O differently
depending on which port it came in on?  Until NVMe has an actually
working priority scheme, trying to hack support into our infrastructure
seems rather futile.


* [PATCH RFC 0/4] nvmet: add support for port based io priority
  2019-05-17  9:45 ` [PATCH RFC 0/4] nvmet: add support for port based io priority Christoph Hellwig
@ 2019-05-17 16:28   ` Chaitanya Kulkarni
  2019-05-21  6:56     ` Christoph Hellwig
  2019-05-24  8:08   ` Sagi Grimberg
  1 sibling, 1 reply; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-17 16:28 UTC


On 5/17/19 2:45 AM, Christoph Hellwig wrote:
> On Wed, May 15, 2019 at 08:21:04PM -0700, Chaitanya Kulkarni wrote:
>> Any feedback is welcome. I'd like to understand whether this approach is
>> desirable; if not, what is the right way to use I/O priorities on the
>> target side so that the NVMeOF target can take advantage of the
>> scheduler infrastructure?
> I don't really see the point.  Why would we treat I/O differently
> depending on which port it came in on?  Until NVMe has an actually
> working priority scheme, trying to hack support into our infrastructure
> seems rather futile.
>
Okay, thanks for the reply Christoph. Quick question: when you mention
"until NVMe has an actually working priority scheme", do you mean
host-side support for priority-based block layer requests, i.e. NVMe
request processing with weighted round robin priority SQs on the NVMe
host?


* [PATCH RFC 0/4] nvmet: add support for port based io priority
  2019-05-17 16:28   ` Chaitanya Kulkarni
@ 2019-05-21  6:56     ` Christoph Hellwig
  0 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2019-05-21  6:56 UTC


On Fri, May 17, 2019 at 04:28:43PM +0000, Chaitanya Kulkarni wrote:
> Okay, thanks for the reply Christoph. Quick question: when you mention
> "until NVMe has an actually working priority scheme", do you mean
> host-side support for priority-based block layer requests, i.e. NVMe
> request processing with weighted round robin priority SQs on the NVMe
> host?

Weighted round robin is _NOT_ an I/O priority - it just changes
arbitration.  There have been priority proposals in the working group
before, though without much progress at the moment.


* [PATCH RFC 0/4] nvmet: add support for port based io priority
  2019-05-17  9:45 ` [PATCH RFC 0/4] nvmet: add support for port based io priority Christoph Hellwig
  2019-05-17 16:28   ` Chaitanya Kulkarni
@ 2019-05-24  8:08   ` Sagi Grimberg
  1 sibling, 0 replies; 16+ messages in thread
From: Sagi Grimberg @ 2019-05-24  8:08 UTC



>> Any feedback is welcome. I'd like to understand whether this approach is
>> desirable; if not, what is the right way to use I/O priorities on the
>> target side so that the NVMeOF target can take advantage of the
>> scheduler infrastructure?
> 
> I don't really see the point.  Why would we treat I/O differently
> depending on which port it came in on?

Agreed, this seems somewhat sketchy...

I don't understand what the usage model is at all.


* [PATCH] nvmet: get rid of extra line in the tcp code
  2019-05-15 16:10 [PATCH] nvmet: get rid of extra line in the tcp code Chaitanya Kulkarni
  2019-05-21  6:56 ` Christoph Hellwig
  2019-05-21  8:49 ` Max Gurtovoy
@ 2019-05-23  3:17 ` Sagi Grimberg
  2 siblings, 0 replies; 16+ messages in thread
From: Sagi Grimberg @ 2019-05-23  3:17 UTC


Reviewed-by: Sagi Grimberg <sagi at grimberg.me>


* [PATCH] nvmet: get rid of extra line in the tcp code
  2019-05-15 16:10 [PATCH] nvmet: get rid of extra line in the tcp code Chaitanya Kulkarni
  2019-05-21  6:56 ` Christoph Hellwig
@ 2019-05-21  8:49 ` Max Gurtovoy
  2019-05-23  3:17 ` Sagi Grimberg
  2 siblings, 0 replies; 16+ messages in thread
From: Max Gurtovoy @ 2019-05-21  8:49 UTC



On 5/15/2019 7:10 PM, Chaitanya Kulkarni wrote:
> This is a code cleanup patch; it doesn't change any functionality.
> It removes the extra blank line at the end of the function.
>
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
> ---
>   drivers/nvme/target/tcp.c | 1 -
>   1 file changed, 1 deletion(-)
>
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 69b83fa0c76c..f019af264081 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -557,7 +557,6 @@ static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd)
>   	}
>   
>   	return 1;
> -
>   }
>   
>   static int nvmet_try_send_response(struct nvmet_tcp_cmd *cmd,


Looks good,

Reviewed-by: Max Gurtovoy <maxg at mellanox.com>


* [PATCH] nvmet: get rid of extra line in the tcp code
  2019-05-15 16:10 [PATCH] nvmet: get rid of extra line in the tcp code Chaitanya Kulkarni
@ 2019-05-21  6:56 ` Christoph Hellwig
  2019-05-21  8:49 ` Max Gurtovoy
  2019-05-23  3:17 ` Sagi Grimberg
  2 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2019-05-21  6:56 UTC


Looks fine,

Reviewed-by: Christoph Hellwig <hch at lst.de>


* [PATCH] nvmet: get rid of extra line in the tcp code
@ 2019-05-15 16:10 Chaitanya Kulkarni
  2019-05-21  6:56 ` Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Chaitanya Kulkarni @ 2019-05-15 16:10 UTC


This is a code cleanup patch; it doesn't change any functionality.
It removes the extra blank line at the end of the function.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/target/tcp.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 69b83fa0c76c..f019af264081 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -557,7 +557,6 @@ static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd)
 	}
 
 	return 1;
-
 }
 
 static int nvmet_try_send_response(struct nvmet_tcp_cmd *cmd,
-- 
2.17.0

