Linux-NVME Archive on lore.kernel.org
* [PATCH 2/2] nvmet-tcp: set SO_PRIORITY for accepted sockets
@ 2019-10-30 22:27 Wunderlich, Mark
  2019-11-06 16:45 ` Sagi Grimberg
  0 siblings, 1 reply; 4+ messages in thread
From: Wunderlich, Mark @ 2019-10-30 22:27 UTC (permalink / raw)
  To: linux-nvme

Enable the ability to associate all sockets related to NVMf TCP
traffic with a priority group that will perform optimized
network processing for this traffic class. Maintain the initial
default behavior of using a priority of zero.

Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Mark Wunderlich <mark.wunderlich@intel.com>
---
 drivers/nvme/target/tcp.c |   24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 4f79b77..4879194 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -20,6 +20,16 @@
 
 #define NVMET_TCP_DEF_INLINE_DATA_SIZE	(4 * PAGE_SIZE)
 
+/* Define the socket priority to use for connections where it is desirable
+ * that the NIC consider performing optimized packet processing or filtering.
+ * A non-zero value is sufficient to indicate general consideration of any
+ * possible optimization.  Making it a module param allows for alternative
+ * values that may be unique to some NIC implementations.
+ */
+static int so_priority;
+module_param(so_priority, int, 0644);
+MODULE_PARM_DESC(so_priority, "nvmet tcp socket optimization priority");
+
 #define NVMET_TCP_RECV_BUDGET		8
 #define NVMET_TCP_SEND_BUDGET		8
 
@@ -1451,6 +1461,13 @@ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
 	if (ret)
 		return ret;
 
+	ret = kernel_setsockopt(sock, SOL_SOCKET, SO_PRIORITY,
+			(char *)&so_priority, sizeof(so_priority));
+	if (ret) {
+		pr_err("failed to set SO_PRIORITY sock opt %d\n", ret);
+		return ret;
+	}
+
 	/* Set socket type of service */
 	if (inet->rcv_tos > 0) {
 		int tos = inet->rcv_tos;
@@ -1640,6 +1657,13 @@ static int nvmet_tcp_add_port(struct nvmet_port *nport)
 		goto err_sock;
 	}
 
+	ret = kernel_setsockopt(port->sock, SOL_SOCKET, SO_PRIORITY,
+			(char *)&so_priority, sizeof(so_priority));
+	if (ret) {
+		pr_err("failed to set SO_PRIORITY sock opt %d\n", ret);
+		goto err_sock;
+	}
+
 	ret = kernel_bind(port->sock, (struct sockaddr *)&port->addr,
 			sizeof(port->addr));
 	if (ret) {


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* Re: [PATCH 2/2] nvmet-tcp: set SO_PRIORITY for accepted sockets
  2019-10-30 22:27 [PATCH 2/2] nvmet-tcp: set SO_PRIORITY for accepted sockets Wunderlich, Mark
@ 2019-11-06 16:45 ` Sagi Grimberg
  2019-11-07 16:39   ` Wunderlich, Mark
  0 siblings, 1 reply; 4+ messages in thread
From: Sagi Grimberg @ 2019-11-06 16:45 UTC (permalink / raw)
  To: Wunderlich, Mark, linux-nvme

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

We could do this with configfs, but given that this
is very much specific to the tcp transport I find it
hard to justify.


* RE: [PATCH 2/2] nvmet-tcp: set SO_PRIORITY for accepted sockets
  2019-11-06 16:45 ` Sagi Grimberg
@ 2019-11-07 16:39   ` Wunderlich, Mark
  0 siblings, 0 replies; 4+ messages in thread
From: Wunderlich, Mark @ 2019-11-07 16:39 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme


>Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

>We could do this with configfs, but given that this is very much specific to the tcp transport I find it hard to justify.

I assume we will discuss this more when re-posting in a new series.  Agreed, there are various options for where to define so_priority.  Options considered in the past:
- Make it a module #define with a specific value (say 1) that is only applied if busy polling is enabled.
- Make it a sysctl variable, like the busy-polling-related sysctl_net_busy_poll and sysctl_net_busy_read, and again only apply it if busy polling is enabled.
- Or, as in this patch, a module parameter, which was suggested as preferable to a sysctl.
- Or, as you indicate, configfs.  Given this is specific to adjusting tcp socket behavior, configfs did not seem right.

If there is a preference, I would be happy to adjust before re-posting the patches.
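For the module-parameter option taken in this patch, a load-time setting could look like the following (the file path is illustrative; since the parameter mode is 0644 it can also be changed at runtime via sysfs, affecting only sockets created afterwards):

```
# /etc/modprobe.d/nvmet-tcp.conf (illustrative path)
# Apply a non-zero socket priority to all nvmet-tcp sockets at module load.
options nvmet_tcp so_priority=1
```

At runtime the same value is visible and writable under /sys/module/nvmet_tcp/parameters/so_priority.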

Cheers --- Mark

* [PATCH 2/2] nvmet-tcp: set SO_PRIORITY for accepted sockets
@ 2020-01-16  0:46 " Wunderlich, Mark
  0 siblings, 0 replies; 4+ messages in thread
From: Wunderlich, Mark @ 2020-01-16  0:46 UTC (permalink / raw)
  To: linux-nvme; +Cc: Sagi Grimberg

Enable the ability to associate all sockets related to NVMf TCP
traffic with a priority group that will perform optimized
network processing for this traffic class. Maintain the initial
default behavior of using a priority of zero.

Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Mark Wunderlich <mark.wunderlich@intel.com>
---
 drivers/nvme/target/tcp.c |   26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index af674fc0bb1e..cbff1038bdb3 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -19,6 +19,16 @@
 
 #define NVMET_TCP_DEF_INLINE_DATA_SIZE	(4 * PAGE_SIZE)
 
+/* Define the socket priority to use for connections where it is desirable
+ * that the NIC consider performing optimized packet processing or filtering.
+ * A non-zero value is sufficient to indicate general consideration of any
+ * possible optimization.  Making it a module param allows for alternative
+ * values that may be unique to some NIC implementations.
+ */
+static int so_priority;
+module_param(so_priority, int, 0644);
+MODULE_PARM_DESC(so_priority, "nvmet tcp socket optimization priority");
+
 #define NVMET_TCP_RECV_BUDGET		8
 #define NVMET_TCP_SEND_BUDGET		8
 #define NVMET_TCP_IO_WORK_BUDGET	64
@@ -1433,6 +1443,13 @@ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
 	if (ret)
 		return ret;
 
+	if (so_priority > 0) {
+		ret = kernel_setsockopt(sock, SOL_SOCKET, SO_PRIORITY,
+				(char *)&so_priority, sizeof(so_priority));
+		if (ret)
+			return ret;
+	}
+
 	/* Set socket type of service */
 	if (inet->rcv_tos > 0) {
 		int tos = inet->rcv_tos;
@@ -1622,6 +1639,15 @@ static int nvmet_tcp_add_port(struct nvmet_port *nport)
 		goto err_sock;
 	}
 
+	if (so_priority > 0) {
+		ret = kernel_setsockopt(port->sock, SOL_SOCKET, SO_PRIORITY,
+				(char *)&so_priority, sizeof(so_priority));
+		if (ret) {
+			pr_err("failed to set SO_PRIORITY sock opt %d\n", ret);
+			goto err_sock;
+		}
+	}
+
 	ret = kernel_bind(port->sock, (struct sockaddr *)&port->addr,
 			sizeof(port->addr));
 	if (ret) {


