* [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port
@ 2017-07-02 15:01 ` Sagi Grimberg
  0 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-02 15:01 UTC (permalink / raw)
  To: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

I've heard feedback from folks on several occasions that we can
do better in a multi-socket target array configuration.

Today, the rdma transport simply spreads IO threads across all system cpus
without any specific configuration in mind (same for fc). It isn't really
possible to confine a nvmet port's IO thread affinity to a specific
numa socket, which can be useful to reduce inter-socket DMA traffic.

This can make sense if the user wants to expose, via a nvme target port
(HBA port), a set of backend devices which are connected to the same
numa socket, in order to optimize NUMA locality and minimize inter-socket
DMA traffic.

This RFC exposes a cpu mapping for a specific nvmet port. The user can choose
to provide an affinity hint to a nvme target port that will confine its IO
threads to specific cpu cores, and the transport will _try_ to enforce it (if
it knows how to). We default to the online cpumap.
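
For illustration, a minimal userspace sketch of setting such a hint through
the new configfs attribute (the configfs mount point, port number and cpulist
below are examples only; note the port must be disabled while its cpulist is
changed):

/* Example only: pin nvmet port 1 to cpus 0-7 (socket 0). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *attr = "/sys/kernel/config/nvmet/ports/1/addr_cpulist";
        const char *cpus = "0-7";
        int fd = open(attr, O_WRONLY);

        if (fd < 0) {
                perror("open addr_cpulist");
                return 1;
        }
        if (write(fd, cpus, strlen(cpus)) < 0)
                perror("write cpulist");
        close(fd);
        return 0;
}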

Note, this is based on the nvme and rdma msix affinity mapping patches pending
for 4.13.

Feedback is welcome!

Sagi Grimberg (3):
  nvmet: allow assignment of a cpulist for each nvmet port
  RDMA/core: expose cpu affinity based completion vector lookup
  nvmet-rdma: assign cq completion vector based on the port allowed cpus

 drivers/infiniband/core/verbs.c | 41 ++++++++++++++++++++++
 drivers/nvme/target/configfs.c  | 75 +++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h     |  4 +++
 drivers/nvme/target/rdma.c      | 40 +++++++++++++++-------
 include/rdma/ib_verbs.h         |  3 ++
 5 files changed, 151 insertions(+), 12 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 36+ messages in thread


* [PATCH rfc 1/3] nvmet: allow assignment of a cpulist for each nvmet port
  2017-07-02 15:01 ` Sagi Grimberg
@ 2017-07-02 15:01     ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-02 15:01 UTC (permalink / raw)
  To: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

Users might want to assign a specific affinity, in the form of
a cpumask, to a nvmet port. This can make sense in multi-socket
systems where each socket is connected to a HBA (e.g. RDMA device)
and a set of backend storage devices (e.g. NVMe or other PCI
storage devices), and the user wants to provision the backend
storage via the HBA belonging to the same numa socket.

So, allow the user to pass a cpulist. However, if the
underlying devices do not expose access to these mappings,
the transport driver is not obligated to enforce it, so it
is merely a hint.

Default to all online cpus.

Signed-off-by: Sagi Grimberg <sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
---
 drivers/nvme/target/configfs.c | 75 ++++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h    |  4 +++
 2 files changed, 79 insertions(+)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index a358ecd93e11..095c2e6b4116 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -17,12 +17,63 @@
 #include <linux/slab.h>
 #include <linux/stat.h>
 #include <linux/ctype.h>
+#include <linux/cpumask.h>
 
 #include "nvmet.h"
 
 static struct config_item_type nvmet_host_type;
 static struct config_item_type nvmet_subsys_type;
 
+static ssize_t nvmet_addr_cpulist_show(struct config_item *item,
+		char *page)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+
+	return sprintf(page, "%*pbl\n", cpumask_pr_args(port->cpumask));
+}
+
+static ssize_t nvmet_addr_cpulist_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	cpumask_var_t cpumask;
+	int i, err;
+
+	if (port->enabled) {
+		pr_err("Cannot specify cpulist while enabled\n");
+		pr_err("Disable the port before changing cores\n");
+		return -EACCES;
+	}
+
+	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+		return -ENOMEM;
+
+	err = cpulist_parse(page, cpumask);
+	if (err) {
+		pr_err("bad cpumask given (%d): %s\n", err, page);
+		return err;
+	}
+
+	if (!cpumask_intersects(cpumask, cpu_online_mask)) {
+		pr_err("cpulist consists of offline cpus: %s\n", page);
+		return err;
+	}
+
+	/* copy cpumask */
+	cpumask_copy(port->cpumask, cpumask);
+	free_cpumask_var(cpumask);
+
+	/* clear port cpulist */
+	port->nr_cpus = 0;
+	/* reset port cpulist */
+	for_each_cpu(i, cpumask)
+		port->cpus[port->nr_cpus++] = i;
+
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, addr_cpulist);
+
 /*
  * nvmet_port Generic ConfigFS definitions.
  * Used in any place in the ConfigFS tree that refers to an address.
@@ -821,6 +872,7 @@ static struct config_group *nvmet_referral_make(
 		return ERR_PTR(-ENOMEM);
 
 	INIT_LIST_HEAD(&port->entry);
+
 	config_group_init_type_name(&port->group, name, &nvmet_referral_type);
 
 	return &port->group;
@@ -842,6 +894,8 @@ static void nvmet_port_release(struct config_item *item)
 {
 	struct nvmet_port *port = to_nvmet_port(item);
 
+	kfree(port->cpus);
+	free_cpumask_var(port->cpumask);
 	kfree(port);
 }
 
@@ -851,6 +905,7 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
 	&nvmet_attr_addr_traddr,
 	&nvmet_attr_addr_trsvcid,
 	&nvmet_attr_addr_trtype,
+	&nvmet_attr_addr_cpulist,
 	NULL,
 };
 
@@ -869,6 +924,7 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
 {
 	struct nvmet_port *port;
 	u16 portid;
+	int i;
 
 	if (kstrtou16(name, 0, &portid))
 		return ERR_PTR(-EINVAL);
@@ -881,6 +937,20 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
 	INIT_LIST_HEAD(&port->subsystems);
 	INIT_LIST_HEAD(&port->referrals);
 
+	if (!alloc_cpumask_var(&port->cpumask, GFP_KERNEL))
+		goto err_free_port;
+
+	port->nr_cpus = num_possible_cpus();
+
+	port->cpus = kcalloc(sizeof(int), port->nr_cpus, GFP_KERNEL);
+	if (!port->cpus)
+		goto err_free_cpumask;
+
+	for_each_possible_cpu(i) {
+		cpumask_set_cpu(i, port->cpumask);
+		port->cpus[i] = i;
+	}
+
 	port->disc_addr.portid = cpu_to_le16(portid);
 	config_group_init_type_name(&port->group, name, &nvmet_port_type);
 
@@ -893,6 +963,11 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
 	configfs_add_default_group(&port->referrals_group, &port->group);
 
 	return &port->group;
+
+err_free_cpumask:
+	free_cpumask_var(port->cpumask);
+err_free_port:
+	return ERR_PTR(-ENOMEM);
 }
 
 static struct configfs_group_operations nvmet_ports_group_ops = {
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 747bbdb4f9c6..20ed676dc335 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -97,6 +97,10 @@ struct nvmet_port {
 	struct list_head		referrals;
 	void				*priv;
 	bool				enabled;
+
+	int				nr_cpus;
+	cpumask_var_t			cpumask;
+	int				*cpus;
 };
 
 static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread


* [PATCH rfc 2/3] RDMA/core: expose cpu affinity based completion vector lookup
  2017-07-02 15:01 ` Sagi Grimberg
@ 2017-07-02 15:01     ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-02 15:01 UTC (permalink / raw)
  To: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

A ULP might want to look up an optimal completion vector based on
a given cpu core affinity. Expose a lookup routine for it that iterates
over the device completion vectors, searching for a vector whose affinity
matches the given cpu.
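
As a caller-side illustration, here is a minimal sketch of how a ULP might
use the routine (the function name and fallback policy are made up for the
example; patch 3 shows the actual nvmet-rdma usage):

#include <rdma/ib_verbs.h>

/*
 * Sketch: pick a completion vector whose irq affinity covers @cpu,
 * falling back to spreading by queue index when the device does not
 * expose vector affinity or no vector matches.
 */
static int ulp_pick_comp_vector(struct ib_device *dev, unsigned int cpu,
                unsigned int queue_idx)
{
        unsigned int vec;

        if (ib_find_cpu_vector(dev, cpu, &vec))
                return vec;

        return queue_idx % dev->num_comp_vectors;
}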

Signed-off-by: Sagi Grimberg <sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
---
 drivers/infiniband/core/verbs.c | 41 +++++++++++++++++++++++++++++++++++++++++
 include/rdma/ib_verbs.h         |  3 +++
 2 files changed, 44 insertions(+)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 4792f5209ac2..f0dfb1ca952b 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -2099,3 +2099,44 @@ void ib_drain_qp(struct ib_qp *qp)
 		ib_drain_rq(qp);
 }
 EXPORT_SYMBOL(ib_drain_qp);
+
+/**
+ * ib_find_cpu_vector() - Find the first completion vector mapped to a given cpu core
+ * @device:            rdma device
+ * @cpu:               cpu for the corresponding completion vector affinity
+ * @vector:            output target completion vector
+ *
+ * If the device expose vector affinity we will search each of the vectors
+ * and if we find one that gives us the desired cpu core we return true
+ * and assign @vector to the corresponding completion vector. Otherwise
+ * we return false. We stop at the first appropriate completion vector
+ * we find as we don't have any preference for multiple vectors with the
+ * same affinity.
+ */
+bool ib_find_cpu_vector(struct ib_device *device, unsigned int cpu,
+		unsigned int *vector)
+{
+	bool found = false;
+	unsigned int c;
+	int vec;
+
+	for (vec = 0; vec < device->num_comp_vectors; vec++) {
+		const struct cpumask *mask;
+
+		mask = ib_get_vector_affinity(device, vec);
+		if (!mask)
+			goto out;
+
+		for_each_cpu(c, mask) {
+			if (c == cpu) {
+				*vector = vec;
+				found = true;
+				goto out;
+			}
+		}
+	}
+
+out:
+	return found;
+}
+EXPORT_SYMBOL(ib_find_cpu_vector);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 2349143297c9..8af48ef811f8 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -3662,4 +3662,7 @@ ib_get_vector_affinity(struct ib_device *device, int comp_vector)
 
 }
 
+bool ib_find_cpu_vector(struct ib_device *device, unsigned int cpu,
+		unsigned int *vector);
+
 #endif /* IB_VERBS_H */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread


* [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus
  2017-07-02 15:01 ` Sagi Grimberg
@ 2017-07-02 15:01     ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-02 15:01 UTC (permalink / raw)
  To: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

We first take a cpu assignment from the port's configured cpulist
(spread uniformly across it) and then look up a matching completion
vector with the rdma core API.

If the device does not expose a vector affinity mask, or we
couldn't find a match, we fall back to the old behavior as we
don't have sufficient information to do the "correct" vector
assignment.

Signed-off-by: Sagi Grimberg <sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
---
 drivers/nvme/target/rdma.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 56a4cba690b5..a1725d3e174a 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -889,27 +889,43 @@ nvmet_rdma_find_get_device(struct rdma_cm_id *cm_id)
 	return NULL;
 }
 
-static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
+static int nvmet_rdma_assign_vector(struct nvmet_rdma_queue *queue)
 {
-	struct ib_qp_init_attr qp_attr;
-	struct nvmet_rdma_device *ndev = queue->dev;
-	int comp_vector, nr_cqe, ret, i;
+	struct ib_device *dev = queue->dev->device;
+	struct nvmet_port *port = queue->port;
+	int vec, cpu;
 
 	/*
-	 * Spread the io queues across completion vectors,
-	 * but still keep all admin queues on vector 0.
+	 * Spread the io queues across port cpus,
+	 * but still keep all admin queues on cpu 0.
 	 */
-	comp_vector = !queue->host_qid ? 0 :
-		queue->idx % ndev->device->num_comp_vectors;
+	cpu = !queue->host_qid ? 0 : port->cpus[queue->idx % port->nr_cpus];
+
+	if (ib_find_cpu_vector(dev, cpu, &vec))
+		return vec;
+
+	pr_debug("device %s could not provide vector to match cpu %d\n",
+			dev->name, cpu);
+	/*
+	 * No corresponding vector affinity found, fallback to
+	 * the old behavior where we spread vectors all over...
+	 */
+	return !queue->host_qid ? 0 : queue->idx % dev->num_comp_vectors;
+}
+
+static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
+{
+	struct ib_qp_init_attr qp_attr;
+	struct nvmet_rdma_device *ndev = queue->dev;
+	int nr_cqe, ret, i;
 
 	/*
 	 * Reserve CQ slots for RECV + RDMA_READ/RDMA_WRITE + RDMA_SEND.
 	 */
 	nr_cqe = queue->recv_queue_size + 2 * queue->send_queue_size;
 
-	queue->cq = ib_alloc_cq(ndev->device, queue,
-			nr_cqe + 1, comp_vector,
-			IB_POLL_WORKQUEUE);
+	queue->cq = ib_alloc_cq(ndev->device, queue, nr_cqe + 1,
+			nvmet_rdma_assign_vector(queue), IB_POLL_WORKQUEUE);
 	if (IS_ERR(queue->cq)) {
 		ret = PTR_ERR(queue->cq);
 		pr_err("failed to create CQ cqe= %d ret= %d\n",
@@ -1080,6 +1096,7 @@ nvmet_rdma_alloc_queue(struct nvmet_rdma_device *ndev,
 	INIT_WORK(&queue->release_work, nvmet_rdma_release_queue_work);
 	queue->dev = ndev;
 	queue->cm_id = cm_id;
+	queue->port = cm_id->context;
 
 	spin_lock_init(&queue->state_lock);
 	queue->state = NVMET_RDMA_Q_CONNECTING;
@@ -1198,7 +1215,6 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 		ret = -ENOMEM;
 		goto put_device;
 	}
-	queue->port = cm_id->context;
 
 	if (queue->host_qid == 0) {
 		/* Let inflight controller teardown complete */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port
  2017-07-02 15:01 ` Sagi Grimberg
@ 2017-07-02 16:30     ` Max Gurtovoy
  -1 siblings, 0 replies; 36+ messages in thread
From: Max Gurtovoy @ 2017-07-02 16:30 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	Christoph Hellwig, James Smart, Keith Busch,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA



On 7/2/2017 6:01 PM, Sagi Grimberg wrote:
> I've heard feedback from folks on several occasions that we can
> do better in a multi-socket target array configuration.
>
> Today, the rdma transport simply spreads IO threads across all system cpus
> without any specific configuration in mind (same for fc). It isn't really
> possible to confine a nvmet port's IO thread affinity to a specific
> numa socket, which can be useful to reduce inter-socket DMA traffic.
>
> This can make sense if the user wants to expose, via a nvme target port
> (HBA port), a set of backend devices which are connected to the same
> numa socket, in order to optimize NUMA locality and minimize inter-socket
> DMA traffic.
>
> This RFC exposes a cpu mapping for a specific nvmet port. The user can choose
> to provide an affinity hint to a nvme target port that will confine its IO
> threads to specific cpu cores, and the transport will _try_ to enforce it (if
> it knows how to). We default to the online cpumap.
>
> Note, this is based on the nvme and rdma msix affinity mapping patches pending
> for 4.13.
>
> Feedback is welcome!

Hi Sagi,
Very interesting patchset. You give a lot of power to the user here; we
need to hope that he will use it right :).
Do you have some fio numbers to compare with/without this series? CPU
utilization measurements would be interesting too.

>
> Sagi Grimberg (3):
>   nvmet: allow assignment of a cpulist for each nvmet port
>   RDMA/core: expose cpu affinity based completion vector lookup
>   nvmet-rdma: assign cq completion vector based on the port allowed cpus
>
>  drivers/infiniband/core/verbs.c | 41 ++++++++++++++++++++++
>  drivers/nvme/target/configfs.c  | 75 +++++++++++++++++++++++++++++++++++++++++
>  drivers/nvme/target/nvmet.h     |  4 +++
>  drivers/nvme/target/rdma.c      | 40 +++++++++++++++-------
>  include/rdma/ib_verbs.h         |  3 ++
>  5 files changed, 151 insertions(+), 12 deletions(-)
>

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port
  2017-07-02 16:30     ` Max Gurtovoy
@ 2017-07-02 17:41         ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-02 17:41 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	Christoph Hellwig, James Smart, Keith Busch,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA


> Hi Sagi,
> Very interesting patchset. You give a lot of power to the user here, we 
> need to hope that he will use it right :).

I don't think so, it's equivalent to running an application with a given
taskset, nothing fancy here...

The straightforward configuration this is targeting is a dual-socket
system where on each node you have one (or more) HCA and some NVMe
devices (say 4). All this is doing is allowing the user to confine a
nvme target port's cpu cores to its own numa socket, so if that port
only exposes the local NVMe devices, DMA traffic won't cross QPI.
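
As a rough illustration, a small sketch of how a user could derive that
socket-local cpulist from the HCA's NUMA node via sysfs (the device name
and paths are examples only):

#include <stdio.h>

int main(void)
{
        char path[128], cpulist[256];
        int node = 0;
        FILE *f;

        /* NUMA node the HCA is attached to (example device name). */
        f = fopen("/sys/class/infiniband/mlx5_0/device/numa_node", "r");
        if (f) {
                if (fscanf(f, "%d", &node) != 1 || node < 0)
                        node = 0;
                fclose(f);
        }

        /* cpus local to that node; this is what addr_cpulist would get. */
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/cpulist", node);
        f = fopen(path, "r");
        if (f) {
                if (fgets(cpulist, sizeof(cpulist), f))
                        printf("socket-local cpus: %s", cpulist);
                fclose(f);
        }
        return 0;
}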

While a subsystem is the collection of devices, the port is where I/O
threads really live, as they feed off the device IRQ affinity. Especially
with SRQ, which I'll be touching soon. The user does indeed need to be
aware of all this, but if he isn't, then he shouldn't touch this setting.

> Do you have some fio numbers to compare with/without this series? CPU
> utilization measurements would be interesting too.

Not really, this is RFC-level code, lightly tested on my VM...

If this is interesting to you, I can use some testing if you volunteer ;)

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port
  2017-07-02 17:41         ` Sagi Grimberg
@ 2017-07-03  9:52             ` Max Gurtovoy
  -1 siblings, 0 replies; 36+ messages in thread
From: Max Gurtovoy @ 2017-07-03  9:52 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	Christoph Hellwig, James Smart, Keith Busch,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA



On 7/2/2017 8:41 PM, Sagi Grimberg wrote:
>
>> Hi Sagi,
>> Very interesting patchset. You give a lot of power to the user here,
>> we need to hope that he will use it right :).
>
> I don't think so, it's equivalent to running an application with a given
> taskset, nothing fancy here...
>
> The straightforward configuration this is targeting is a dual-socket
> system where on each node you have one (or more) HCA and some NVMe
> devices (say 4). All this is doing is allowing the user to confine a
> nvme target port's cpu cores to its own numa socket, so if that port
> only exposes the local NVMe devices, DMA traffic won't cross QPI.

Maybe I'm missing something, but how do you make sure that all your
buffer allocations for DMA (of the NVMe + HCA) are done on the same
socket?
From the code I understood that you make sure that the cq is assigned
to the appropriate completion vector according to the port CPUs (given by
the user) and all the interrupts will be routed to the relevant socket
(no QPI cross here since the MSI MMIO address is mapped to the "local"
node), but IMO more work is needed to make sure that _all_ the allocated
buffers/pages come from the memory assigned to that CPU node (or is
it something that is done already?)

>
> While a subsystem is the collection of devices, the port is where I/O
> threads really live, as they feed off the device IRQ affinity. Especially
> with SRQ, which I'll be touching soon. The user does indeed need to be
> aware of all this, but if he isn't, then he shouldn't touch this setting.
>
>> Do you have some fio numbers to compare with/without this series? CPU
>> utilization measurements would be interesting too.
>
> Not really, this is RFC-level code, lightly tested on my VM...
>
> If this is interesting to you, I can use some testing if you volunteer ;)

Yes it is. I'll need to find a time slot for this, though...

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port
  2017-07-03  9:52             ` Max Gurtovoy
@ 2017-07-03 10:14                 ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-03 10:14 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	Christoph Hellwig, James Smart, Keith Busch,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA


> Maybe I'm missing something, but how do you make sure that all your
> buffer allocations for DMA (of the NVMe + HCA) are done on the same
> socket?
> From the code I understood that you make sure that the cq is assigned
> to the appropriate completion vector according to the port CPUs (given by
> the user) and all the interrupts will be routed to the relevant socket
> (no QPI cross here since the MSI MMIO address is mapped to the "local"
> node), but IMO more work is needed to make sure that _all_ the allocated
> buffers/pages come from the memory assigned to that CPU node (or is
> it something that is done already?)

The allocator takes care of that for us (if it can...). By assigning
the completion vector we will run the IO thread on the corresponding
cpu; in turn, page allocation will attempt to grab a page which is
local to the running numa node, and only if none is available will it
fall back to the far numa node...

However, in-capsule page allocation is not numa aware. We could also
switch to alloc_pages_node in case we have a clear numa node indication,
but we can defer that to a later stage...
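
A minimal sketch of what such a node-aware allocation could look like (the
function name and parameters here are hypothetical, not part of the posted
series):

#include <linux/gfp.h>
#include <linux/topology.h>

/*
 * Sketch: allocate the in-capsule data buffer from the memory node of
 * the cpu the queue was assigned to, instead of a plain alloc_pages().
 */
static struct page *nvmet_rdma_alloc_inline_page(int assigned_cpu,
                unsigned int order)
{
        int node = cpu_to_node(assigned_cpu);

        /* alloc_pages_node() prefers @node and falls back if it is empty. */
        return alloc_pages_node(node, GFP_KERNEL, order);
}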

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port
  2017-07-02 15:01 ` Sagi Grimberg
@ 2017-07-10  6:08     ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-10  6:08 UTC (permalink / raw)
  To: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

Does silence mean acceptance? :)

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port
  2017-07-10  6:08     ` Sagi Grimberg
@ 2017-07-11  7:27         ` Leon Romanovsky
  -1 siblings, 0 replies; 36+ messages in thread
From: Leon Romanovsky @ 2017-07-11  7:27 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA


On Mon, Jul 10, 2017 at 09:08:27AM +0300, Sagi Grimberg wrote:
> Does silence mean acceptance? :)

I'm having the same uncertainty for most of our patches.

Thanks


^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 2/3] RDMA/core: expose cpu affinity based completion vector lookup
  2017-07-02 15:01     ` Sagi Grimberg
@ 2017-07-13 15:50         ` Christoph Hellwig
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2017-07-13 15:50 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, Jul 02, 2017 at 06:01:33PM +0300, Sagi Grimberg wrote:
> A ULP might want to look up an optimal completion vector based on
> a given cpu core affinity. Expose a lookup routine for it that iterates
> over the device completion vectors, searching for a vector whose affinity
> matches the given cpu.

Shouldn't this return the mask of cpus with the matching affinity
instead of always the first one?

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus
  2017-07-02 15:01     ` Sagi Grimberg
@ 2017-07-13 15:50         ` Christoph Hellwig
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2017-07-13 15:50 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Christoph Hellwig,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

We really shouldn't be doing any of this in NVMe I think.  We'll need
to go back to the cq pool API first.  The last version I had was here:

	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-cq

and then do the affinity in common code.

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 2/3] RDMA/core: expose cpu affinity based completion vector lookup
  2017-07-13 15:50         ` Christoph Hellwig
@ 2017-07-13 16:30             ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-13 16:30 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, James Smart,
	Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA


>> A ULP might want to look up an optimal completion vector based on
>> a given cpu core affinity. Expose a lookup routine for it that iterates
>> over the device completion vectors, searching for a vector whose affinity
>> matches the given cpu.
> 
> Shouldn't this return the mask of cpus with the matching affinity
> instead of always the first one?

It's the opposite: I'm looking for the vector matching a given cpu.

In case there are multiple vectors with the same cpu assignment,
it doesn't really matter which one it is.

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus
  2017-07-13 15:50         ` Christoph Hellwig
@ 2017-07-13 16:37             ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2017-07-13 16:37 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, James Smart,
	Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA


> We really shouldn't be doing any of this in NVMe I think.  We'll need
> to go back to the cq pool API first.  The last version I had was here:
> 
> 	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-cq

Not against the idea, though the ULP will need to pass a desired cpu
affinity.

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus
  2017-07-13 16:37             ` Sagi Grimberg
@ 2017-07-13 17:16                 ` Christoph Hellwig
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2017-07-13 17:16 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Christoph Hellwig, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, Jul 13, 2017 at 07:37:13PM +0300, Sagi Grimberg wrote:
>
>> We really shouldn't be doing any of this in NVMe I think.  We'll need
>> to go back to the cq pool API first.  The last version I had was here:
>>
>> 	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-cq
>
> Not against the idea, though the ULP will need to pass a desired cpu
> affinity.

Yes.

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus
  2017-07-13 15:50         ` Christoph Hellwig
@ 2017-07-13 17:19             ` Chuck Lever
  -1 siblings, 0 replies; 36+ messages in thread
From: Chuck Lever @ 2017-07-13 17:19 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA


> On Jul 13, 2017, at 11:50 AM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> 
> We really shouldn't be doing any of this in NVMe I think.  We'll need
> to go back to the cq pool API first.  The last version I had was here:
> 
> 	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-cq
> 
> and then do the affinity in common code.

This seems to address the problem I mentioned to you with properly
estimating send CQ size when using the rdma_rw API. If these are
going to be merged soon, I can drop the new API I proposed here:

http://git.linux-nfs.org/?p=cel/cel-2.6.git;a=commit;h=2156cb956101da854f64918066710ff4e7affc5b


--
Chuck Lever




^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus
  2017-07-13 17:19             ` Chuck Lever
@ 2017-07-13 17:24                 ` Christoph Hellwig
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2017-07-13 17:24 UTC (permalink / raw)
  To: Chuck Lever
  Cc: Christoph Hellwig, Sagi Grimberg,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, James Smart,
	Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, Jul 13, 2017 at 01:19:00PM -0400, Chuck Lever wrote:
> 
> > On Jul 13, 2017, at 11:50 AM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> > 
> > We really shouldn't be doing any of this in NVMe I think.  We'll need
> > to go back to the cq pool API first.  The last version I had was here:
> > 
> > 	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-cq
> > 
> > and then do the affinity in common code.
> 
> This seems to address the problem I mentioned to you with properly
> estimating send CQ size when using the rdma_rw API. If these are
> going to be merged soon, I can drop the new API I proposed here:
> 
> http://git.linux-nfs.org/?p=cel/cel-2.6.git;a=commit;h=2156cb956101da854f64918066710ff4e7affc5b

I'd need to find time to get back to it, and I have a few big
chunks on my todo list.  Any chance you (or someone else interested)
could take the series over?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus
  2017-07-13 17:24                 ` Christoph Hellwig
@ 2017-07-13 17:31                     ` Chuck Lever
  -1 siblings, 0 replies; 36+ messages in thread
From: Chuck Lever @ 2017-07-13 17:31 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	James Smart, Keith Busch, linux-rdma-u79uwXL29TY76Z2rM5mHXA


> On Jul 13, 2017, at 1:24 PM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> 
> On Thu, Jul 13, 2017 at 01:19:00PM -0400, Chuck Lever wrote:
>> 
>>> On Jul 13, 2017, at 11:50 AM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
>>> 
>>> We really shouldn't be doing any of this in NVMe I think.  We'll need
>>> to go back to the cq pool API first.  The last version I had was here:
>>> 
>>> 	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-cq
>>> 
>>> and then do the affinity in common code.
>> 
>> This seems to address the problem I mentioned to you with properly
>> estimating send CQ size when using the rdma_rw API. If these are
>> going to be merged soon, I can drop the new API I proposed here:
>> 
>> http://git.linux-nfs.org/?p=cel/cel-2.6.git;a=commit;h=2156cb956101da854f64918066710ff4e7affc5b
> 
> I'd need to find time to get back to it, and I have a few big
> chunks on my todo list.  Any chance you (or someone else interested)
> could take the series over?

Seems like it's right in Sagi's ballpark, and might be a pre-req
for his affinity work. If he's not interested, I can take it.


--
Chuck Lever

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2017-07-13 17:31 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-02 15:01 [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port Sagi Grimberg
2017-07-02 15:01 ` Sagi Grimberg
     [not found] ` <1499007694-7231-1-git-send-email-sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-07-02 15:01   ` [PATCH rfc 1/3] nvmet: allow assignment of a cpulist for each nvmet port Sagi Grimberg
2017-07-02 15:01     ` Sagi Grimberg
2017-07-02 15:01   ` [PATCH rfc 2/3] RDMA/core: expose cpu affinity based completion vector lookup Sagi Grimberg
2017-07-02 15:01     ` Sagi Grimberg
     [not found]     ` <1499007694-7231-3-git-send-email-sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-07-13 15:50       ` Christoph Hellwig
2017-07-13 15:50         ` Christoph Hellwig
     [not found]         ` <20170713155000.GA2577-jcswGhMUV9g@public.gmane.org>
2017-07-13 16:30           ` Sagi Grimberg
2017-07-13 16:30             ` Sagi Grimberg
2017-07-02 15:01   ` [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on the port allowed cpus Sagi Grimberg
2017-07-02 15:01     ` Sagi Grimberg
     [not found]     ` <1499007694-7231-4-git-send-email-sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-07-13 15:50       ` Christoph Hellwig
2017-07-13 15:50         ` Christoph Hellwig
     [not found]         ` <20170713155003.GB2577-jcswGhMUV9g@public.gmane.org>
2017-07-13 16:37           ` Sagi Grimberg
2017-07-13 16:37             ` Sagi Grimberg
     [not found]             ` <da6a9e33-163e-4964-8305-614b7b830c17-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-07-13 17:16               ` Christoph Hellwig
2017-07-13 17:16                 ` Christoph Hellwig
2017-07-13 17:19           ` Chuck Lever
2017-07-13 17:19             ` Chuck Lever
     [not found]             ` <C2439AB7-BC2E-4B71-87CA-3F8313282828-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
2017-07-13 17:24               ` Christoph Hellwig
2017-07-13 17:24                 ` Christoph Hellwig
     [not found]                 ` <20170713172437.GA5236-jcswGhMUV9g@public.gmane.org>
2017-07-13 17:31                   ` Chuck Lever
2017-07-13 17:31                     ` Chuck Lever
2017-07-02 16:30   ` [PATCH rfc 0/3] Expose cpu mapping hints to a nvme target port Max Gurtovoy
2017-07-02 16:30     ` Max Gurtovoy
     [not found]     ` <b77be367-6cbc-c12c-8135-eddff0aec90e-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-07-02 17:41       ` Sagi Grimberg
2017-07-02 17:41         ` Sagi Grimberg
     [not found]         ` <e33f16a7-4e1d-7fd6-6d2c-5a5bac450c73-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-07-03  9:52           ` Max Gurtovoy
2017-07-03  9:52             ` Max Gurtovoy
     [not found]             ` <8a5f82d8-e475-5f0e-9110-8b2c68580988-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-07-03 10:14               ` Sagi Grimberg
2017-07-03 10:14                 ` Sagi Grimberg
2017-07-10  6:08   ` Sagi Grimberg
2017-07-10  6:08     ` Sagi Grimberg
     [not found]     ` <70d04521-df66-d847-1f46-c35ba5de4053-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-07-11  7:27       ` Leon Romanovsky
2017-07-11  7:27         ` Leon Romanovsky
