* [PATCH 0/2] check the number of hw queues mapped to sw queues
@ 2016-06-08 19:48 ` Ming Lin
  0 siblings, 0 replies; 22+ messages in thread
From: Ming Lin @ 2016-06-08 19:48 UTC (permalink / raw)
  To: linux-nvme, linux-block
  Cc: Christoph Hellwig, Keith Busch, Jens Axboe, James Smart

From: Ming Lin <ming.l@samsung.com>

Please see patch 2 for a detailed bug description.

Say, on a machine with 8 CPUs, we create 6 I/O queues (blk-mq hw queues):
    
echo "transport=rdma,traddr=192.168.2.2,nqn=testiqn,nr_io_queues=6" \
            > /dev/nvme-fabrics
    
Then only 4 hw queues are actually mapped to CPU sw queues:
    
HW Queue 1 <-> CPU 0,4
HW Queue 2 <-> CPU 1,5
HW Queue 3 <-> None
HW Queue 4 <-> CPU 2,6
HW Queue 5 <-> CPU 3,7
HW Queue 6 <-> None

Back in Jan 2016, I sent a patch:
[PATCH] blk-mq: check if all HW queues are mapped to cpu
http://www.spinics.net/lists/linux-block/msg01038.html

It adds check code to blk_mq_update_queue_map().
But it seems too aggressive, because it's not an error that some hw queues
are not mapped to sw queues.

So this series just adds a new function, blk_mq_hctx_mapped(), that returns
how many hw queues are mapped. A driver that cares about it (for example,
nvme-rdma) can then do the check.
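
For illustration, a fabrics driver would use the new helper roughly like
this. It is just a sketch of the pattern that patch 2 wires into nvme-rdma;
the helper name check_connect_q_mapping() is made up here:

/*
 * Sketch only: fail the setup if not every hw queue of the connect_q got a
 * sw queue mapping (patch 2 does this inline in nvme_rdma_create_io_queues).
 */
static int check_connect_q_mapping(struct request_queue *q, struct device *dev)
{
	int mapped = blk_mq_hctx_mapped(q);	/* added in patch 1 */

	if (mapped < q->nr_hw_queues) {
		dev_err(dev, "%d hw queues created, but only %d were mapped to sw queues\n",
			q->nr_hw_queues, mapped);
		return -EINVAL;	/* fail early instead of crashing in blk_mq_get_tag() */
	}
	return 0;
}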

Ming Lin (2):
  blk-mq: add a function to return number of hw queues mapped
  nvme-rdma: check the number of hw queues mapped

 block/blk-mq.c           | 15 +++++++++++++++
 drivers/nvme/host/rdma.c | 11 +++++++++++
 include/linux/blk-mq.h   |  1 +
 3 files changed, 27 insertions(+)

-- 
1.9.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 1/2] blk-mq: add a function to return number of hw queues mapped
  2016-06-08 19:48 ` Ming Lin
@ 2016-06-08 19:48   ` Ming Lin
  -1 siblings, 0 replies; 22+ messages in thread
From: Ming Lin @ 2016-06-08 19:48 UTC (permalink / raw)
  To: linux-nvme, linux-block
  Cc: Christoph Hellwig, Keith Busch, Jens Axboe, James Smart

From: Ming Lin <ming.l@samsung.com>

Signed-off-by: Ming Lin <ming.l@samsung.com>
---
 block/blk-mq.c         | 15 +++++++++++++++
 include/linux/blk-mq.h |  1 +
 2 files changed, 16 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b59d2ef..4c80046 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1888,6 +1888,21 @@ static void blk_mq_map_swqueue(struct request_queue *q,
 	}
 }
 
+/* The number of hw queues that are mapped by sw queues */
+int blk_mq_hctx_mapped(struct request_queue *q)
+{
+	struct blk_mq_hw_ctx *hctx;
+	unsigned int i;
+	int mapped = 0;
+
+	queue_for_each_hw_ctx(q, hctx, i)
+		if (blk_mq_hw_queue_mapped(hctx))
+			mapped++;
+
+	return mapped;
+}
+EXPORT_SYMBOL_GPL(blk_mq_hctx_mapped);
+
 static void queue_set_hctx_shared(struct request_queue *q, bool shared)
 {
 	struct blk_mq_hw_ctx *hctx;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 9a5d581..7cc4d51 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -250,6 +250,7 @@ void blk_mq_freeze_queue_start(struct request_queue *q);
 int blk_mq_reinit_tagset(struct blk_mq_tag_set *set);
 
 void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
+int blk_mq_hctx_mapped(struct request_queue *q);
 
 /*
  * Driver command data is immediately after the request. So subtract request
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 2/2] nvme-rdma: check the number of hw queues mapped
  2016-06-08 19:48 ` Ming Lin
@ 2016-06-08 19:48   ` Ming Lin
  -1 siblings, 0 replies; 22+ messages in thread
From: Ming Lin @ 2016-06-08 19:48 UTC (permalink / raw)
  To: linux-nvme, linux-block
  Cc: Christoph Hellwig, Keith Busch, Jens Axboe, James Smart

From: Ming Lin <ming.l@samsung.com>

The connect_q requires all blk-mq hw queues to be mapped to CPU
sw queues. Otherwise, we get the crash below.

[42139.726531] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
[42139.734962] IP: [<ffffffff8130e3b5>] blk_mq_get_tag+0x65/0xb0

[42139.977715] Stack:
[42139.980382]  0000000081306e9b ffff880035dbc380 ffff88006f71bbf8 ffffffff8130a016
[42139.988436]  ffff880035dbc380 0000000000000000 0000000000000001 ffff88011887f000
[42139.996497]  ffff88006f71bc50 ffffffff8130bc2a ffff880035dbc380 ffff880000000002
[42140.004560] Call Trace:
[42140.007681]  [<ffffffff8130a016>] __blk_mq_alloc_request+0x16/0x200
[42140.014584]  [<ffffffff8130bc2a>] blk_mq_alloc_request_hctx+0x8a/0xd0
[42140.021662]  [<ffffffffc087f28e>] nvme_alloc_request+0x2e/0xa0 [nvme_core]
[42140.029171]  [<ffffffffc087f32c>] __nvme_submit_sync_cmd+0x2c/0xc0 [nvme_core]
[42140.037024]  [<ffffffffc08d514a>] nvmf_connect_io_queue+0x10a/0x160 [nvme_fabrics]
[42140.045228]  [<ffffffffc08de255>] nvme_rdma_connect_io_queues+0x35/0x50 [nvme_rdma]
[42140.053517]  [<ffffffffc08e0690>] nvme_rdma_create_ctrl+0x490/0x6f0 [nvme_rdma]
[42140.061464]  [<ffffffffc08d4e48>] nvmf_dev_write+0x728/0x920 [nvme_fabrics]
[42140.069072]  [<ffffffff81197da3>] __vfs_write+0x23/0x120
[42140.075049]  [<ffffffff812de193>] ? apparmor_file_permission+0x13/0x20
[42140.082225]  [<ffffffff812a3ab8>] ? security_file_permission+0x38/0xc0
[42140.089391]  [<ffffffff81198744>] ? rw_verify_area+0x44/0xb0
[42140.095706]  [<ffffffff8119898d>] vfs_write+0xad/0x1a0
[42140.101508]  [<ffffffff81199c71>] SyS_write+0x41/0xa0
[42140.107213]  [<ffffffff816f1af6>] entry_SYSCALL_64_fastpath+0x1e/0xa8

Say, on a machine with 8 CPUs, we create 6 I/O queues:

echo "transport=rdma,traddr=192.168.2.2,nqn=testiqn,nr_io_queues=6" \
		> /dev/nvme-fabrics

Then only 4 hw queues are actually mapped to CPU sw queues:

HW Queue 1 <-> CPU 0,4
HW Queue 2 <-> CPU 1,5
HW Queue 3 <-> None
HW Queue 4 <-> CPU 2,6
HW Queue 5 <-> CPU 3,7
HW Queue 6 <-> None

So when connecting to I/O queue 3, it crashes in blk_mq_get_tag()
because hctx->tags is NULL.

This patch doesn't really fix the hw/sw queue mapping, but it returns an
error if not all hw queues were mapped:

"nvme nvme4: 6 hw queues created, but only 4 were mapped to sw queues"

Reported-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Ming Lin <ming.l@samsung.com>
---
 drivers/nvme/host/rdma.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4edc912..2e8f556 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1771,6 +1771,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
 static int nvme_rdma_create_io_queues(struct nvme_rdma_ctrl *ctrl)
 {
 	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+	int hw_queue_mapped;
 	int ret;
 
 	ret = nvme_set_queue_count(&ctrl->ctrl, &opts->nr_io_queues);
@@ -1819,6 +1820,16 @@ static int nvme_rdma_create_io_queues(struct nvme_rdma_ctrl *ctrl)
 		goto out_free_tag_set;
 	}
 
+	hw_queue_mapped = blk_mq_hctx_mapped(ctrl->ctrl.connect_q);
+	if (hw_queue_mapped < ctrl->ctrl.connect_q->nr_hw_queues) {
+		dev_err(ctrl->ctrl.device,
+			"%d hw queues created, but only %d were mapped to sw queues\n",
+			ctrl->ctrl.connect_q->nr_hw_queues,
+			hw_queue_mapped);
+		ret = -EINVAL;
+		goto out_cleanup_connect_q;
+	}
+
 	ret = nvme_rdma_connect_io_queues(ctrl);
 	if (ret)
 		goto out_cleanup_connect_q;
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/2] check the number of hw queues mapped to sw queues
  2016-06-08 19:48 ` Ming Lin
@ 2016-06-08 22:25   ` Keith Busch
  -1 siblings, 0 replies; 22+ messages in thread
From: Keith Busch @ 2016-06-08 22:25 UTC (permalink / raw)
  To: Ming Lin
  Cc: linux-nvme, linux-block, Christoph Hellwig, Jens Axboe, James Smart

On Wed, Jun 08, 2016 at 03:48:10PM -0400, Ming Lin wrote:
> Back in Jan 2016, I sent a patch:
> [PATCH] blk-mq: check if all HW queues are mapped to cpu
> http://www.spinics.net/lists/linux-block/msg01038.html
> 
> It adds check code to blk_mq_update_queue_map().
> But it seems too aggressive, because it's not an error that some hw queues
> are not mapped to sw queues.
> 
> So this series just adds a new function, blk_mq_hctx_mapped(), that returns
> how many hw queues are mapped. A driver that cares about it (for example,
> nvme-rdma) can then do the check.

Wouldn't you prefer all 6 get assigned in this scenario instead of
utilizing fewer resources than your controller provides? I would like
blk-mq to use them all.

I've been trying to change blk_mq_update_queue_map to do this, but it's
not as easy as it sounds. The following is the simplest patch I came
up with that gets a better mapping *most* of the time.

I have 31 queues and 32 CPUs, and these are the results:

  # for i in $(ls -1v /sys/block/nvme0n1/mq/); do
      printf "hctx_idx %2d: " $i
      cat /sys/block/nvme0n1/mq/$i/cpu_list
    done

Before:

hctx_idx  0: 0, 16
hctx_idx  1: 1, 17
hctx_idx  3: 2, 18
hctx_idx  5: 3, 19
hctx_idx  7: 4, 20
hctx_idx  9: 5, 21
hctx_idx 11: 6, 22
hctx_idx 13: 7, 23
hctx_idx 15: 8, 24
hctx_idx 17: 9, 25
hctx_idx 19: 10, 26
hctx_idx 21: 11, 27
hctx_idx 23: 12, 28
hctx_idx 25: 13, 29
hctx_idx 27: 14, 30
hctx_idx 29: 15, 31

After:

hctx_id  0: 0, 16
hctx_id  1: 1
hctx_id  2: 2
hctx_id  3: 3
hctx_id  4: 4
hctx_id  5: 5
hctx_id  6: 6
hctx_id  7: 7
hctx_id  8: 8
hctx_id  9: 9
hctx_id 10: 10
hctx_id 11: 11
hctx_id 12: 12
hctx_id 13: 13
hctx_id 14: 14
hctx_id 15: 15
hctx_id 16: 17
hctx_id 17: 18
hctx_id 18: 19
hctx_id 19: 20
hctx_id 20: 21
hctx_id 21: 22
hctx_id 22: 23
hctx_id 23: 24
hctx_id 24: 25
hctx_id 25: 26
hctx_id 26: 27
hctx_id 27: 28
hctx_id 28: 29
hctx_id 29: 30
hctx_id 30: 31

---
diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index d0634bc..941c406 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -75,11 +75,12 @@ int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues,
 		*/
 		first_sibling = get_first_sibling(i);
 		if (first_sibling == i) {
-			map[i] = cpu_to_queue_index(nr_uniq_cpus, nr_queues,
-							queue);
+			map[i] = cpu_to_queue_index(max(nr_queues, (nr_cpus - queue)), nr_queues, queue);
 			queue++;
-		} else
+		} else {
 			map[i] = map[first_sibling];
+			--nr_cpus;
+		}
 	}

 	free_cpumask_var(cpus);
--

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/2] check the number of hw queues mapped to sw queues
  2016-06-08 22:25   ` Keith Busch
@ 2016-06-08 22:47     ` Ming Lin
  -1 siblings, 0 replies; 22+ messages in thread
From: Ming Lin @ 2016-06-08 22:47 UTC (permalink / raw)
  To: Keith Busch
  Cc: linux-nvme, linux-block, Christoph Hellwig, Jens Axboe, James Smart

On Wed, Jun 8, 2016 at 3:25 PM, Keith Busch <keith.busch@intel.com> wrote:
> On Wed, Jun 08, 2016 at 03:48:10PM -0400, Ming Lin wrote:
>> Back in Jan 2016, I sent a patch:
>> [PATCH] blk-mq: check if all HW queues are mapped to cpu
>> http://www.spinics.net/lists/linux-block/msg01038.html
>>
>> It adds check code to blk_mq_update_queue_map().
>> But it seems too aggressive, because it's not an error that some hw queues
>> are not mapped to sw queues.
>>
>> So this series just adds a new function, blk_mq_hctx_mapped(), that returns
>> how many hw queues are mapped. A driver that cares about it (for example,
>> nvme-rdma) can then do the check.
>
> Wouldn't you prefer all 6 get assigned in this scenario instead of
> utilizing fewer resources than your controller provides? I would like
> blk-mq to use them all.

That's ideal.

But we'll always see corner cases where some hctx(s) are not mapped.
So I want to at least prevent the crash and return an error in the driver.

Here is another example, where I create 64 queues on a server with 72 CPUs:

hctx index 0: 0, 36
hctx index 1: 1, 37
hctx index 3: 2, 38
hctx index 5: 3, 39
hctx index 7: 4, 40
hctx index 8: 5, 41
hctx index 10: 6, 42
hctx index 12: 7, 43
hctx index 14: 8, 44
hctx index 16: 9, 45
hctx index 17: 10, 46
hctx index 19: 11, 47
hctx index 21: 12, 48
hctx index 23: 13, 49
hctx index 24: 14, 50
hctx index 26: 15, 51
hctx index 28: 16, 52
hctx index 30: 17, 53
hctx index 32: 18, 54
hctx index 33: 19, 55
hctx index 35: 20, 56
hctx index 37: 21, 57
hctx index 39: 22, 58
hctx index 40: 23, 59
hctx index 42: 24, 60
hctx index 44: 25, 61
hctx index 46: 26, 62
hctx index 48: 27, 63
hctx index 49: 28, 64
hctx index 51: 29, 65
hctx index 53: 30, 66
hctx index 55: 31, 67
hctx index 56: 32, 68
hctx index 58: 33, 69
hctx index 60: 34, 70
hctx index 62: 35, 71

Other hctxs are not mapped.


>
> I've been trying to change blk_mq_update_queue_map to do this, but it's
> not as easy as it sounds. The following is the simplest patch I came
> up with that gets a better mapping *most* of the time.

Not working for my case with 6 hw queues (8 CPUs):

[  108.318247] nvme nvme0: 6 hw queues created, but only 5 were mapped
to sw queues

hctx_idx 0: 0 1 4 5
hctx_idx 1: None
hctx_idx 2: 2
hctx_idx 3: 3
hctx_idx 4: 6
hctx_idx 5: 7

>
> ---
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index d0634bc..941c406 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -75,11 +75,12 @@ int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues,
>                 */
>                 first_sibling = get_first_sibling(i);
>                 if (first_sibling == i) {
> -                       map[i] = cpu_to_queue_index(nr_uniq_cpus, nr_queues,
> -                                                       queue);
> +                       map[i] = cpu_to_queue_index(max(nr_queues, (nr_cpus - queue)), nr_queues, queue);
>                         queue++;
> -               } else
> +               } else {
>                         map[i] = map[first_sibling];
> +                       --nr_cpus;
> +               }
>         }
>
>         free_cpumask_var(cpus);
> --

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/2] check the number of hw queues mapped to sw queues
  2016-06-08 22:47     ` Ming Lin
@ 2016-06-08 23:05       ` Keith Busch
  -1 siblings, 0 replies; 22+ messages in thread
From: Keith Busch @ 2016-06-08 23:05 UTC (permalink / raw)
  To: Ming Lin
  Cc: linux-nvme, linux-block, Christoph Hellwig, Jens Axboe, James Smart

On Wed, Jun 08, 2016 at 03:47:10PM -0700, Ming Lin wrote:
> On Wed, Jun 8, 2016 at 3:25 PM, Keith Busch <keith.busch@intel.com> wrote:
> > I've been trying to change blk_mq_update_queue_map to do this, but it's
> > not as easy as it sounds. The following is the simplest patch I came
> > up with that gets a better mapping *most* of the time.
> 
> Not working for my case with 6 hw queues(8 cpus):
> 
> [  108.318247] nvme nvme0: 6 hw queues created, but only 5 were mapped
> to sw queues
> 
> hctx_idx 0: 0 1 4 5
> hctx_idx 1: None
> hctx_idx 2: 2
> hctx_idx 3: 3
> hctx_idx 4: 6
> hctx_idx 5: 7

Heh, not one of the good cases I see. I don't think there's a simple
change to use all contexts. Might need a larger rewrite.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/2] nvme-rdma: check the number of hw queues mapped
  2016-06-08 19:48   ` Ming Lin
@ 2016-06-09 11:19     ` Sagi Grimberg
  -1 siblings, 0 replies; 22+ messages in thread
From: Sagi Grimberg @ 2016-06-09 11:19 UTC (permalink / raw)
  To: Ming Lin, linux-nvme, linux-block
  Cc: Keith Busch, Jens Axboe, Christoph Hellwig, James Smart

This needs documentation in the form of:

/*
  * XXX: blk-mq might not map all our hw contexts but this is a must for
  * us for fabric connects. So until we can fix blk-mq we check that.
  */

> +	hw_queue_mapped = blk_mq_hctx_mapped(ctrl->ctrl.connect_q);
> +	if (hw_queue_mapped < ctrl->ctrl.connect_q->nr_hw_queues) {
> +		dev_err(ctrl->ctrl.device,
> +			"%d hw queues created, but only %d were mapped to sw queues\n",
> +			ctrl->ctrl.connect_q->nr_hw_queues,
> +			hw_queue_mapped);
> +		ret = -EINVAL;
> +		goto out_cleanup_connect_q;
> +	}
> +
>   	ret = nvme_rdma_connect_io_queues(ctrl);
>   	if (ret)
>   		goto out_cleanup_connect_q;
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/2] check the number of hw queues mapped to sw queues
  2016-06-08 19:48 ` Ming Lin
@ 2016-06-09 14:09   ` Christoph Hellwig
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2016-06-09 14:09 UTC (permalink / raw)
  To: Ming Lin
  Cc: linux-nvme, linux-block, Christoph Hellwig, Keith Busch,
	Jens Axboe, James Smart

On Wed, Jun 08, 2016 at 03:48:10PM -0400, Ming Lin wrote:
> It adds check code to blk_mq_update_queue_map().
> But it seems too aggressive, because it's not an error that some hw queues
> are not mapped to sw queues.
> 
> So this series just adds a new function, blk_mq_hctx_mapped(), that returns
> how many hw queues are mapped. A driver that cares about it (for example,
> nvme-rdma) can then do the check.

I think it would be better to have this number available as a structure
field.  Any reason not to update nr_hw_queues in the tag set
with the actual number of queues?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/2] nvme-rdma: check the number of hw queues mapped
  2016-06-09 11:19     ` Sagi Grimberg
@ 2016-06-09 14:10       ` Christoph Hellwig
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2016-06-09 14:10 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Ming Lin, linux-nvme, linux-block, Keith Busch, Jens Axboe,
	Christoph Hellwig, James Smart

On Thu, Jun 09, 2016 at 02:19:55PM +0300, Sagi Grimberg wrote:
> This needs documentation in the form of:
>
> /*
>  * XXX: blk-mq might not map all our hw contexts but this is a must for
>  * us for fabric connects. So until we can fix blk-mq we check that.
>  */

I think the right thing to do is to have a member of actually mapped
queues in the block layer, and I also don't think we need the XXX comment
as there are valid reasons for not mapping all queues.
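
Roughly something like the sketch below; the names are made up and nothing
here is meant as the actual interface, only the shape of the idea:

/* Stand-in types, not the real request_queue: keep the mapped count next to
 * nr_hw_queues so a driver reads a field instead of recounting contexts.
 */
struct queue_map_info {
	unsigned int nr_hw_queues;		/* hw queues allocated */
	unsigned int nr_hw_queues_mapped;	/* hw queues that got sw queues */
};

static inline bool queue_fully_mapped(const struct queue_map_info *q)
{
	return q->nr_hw_queues_mapped == q->nr_hw_queues;
}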

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/2] check the number of hw queues mapped to sw queues
  2016-06-09 14:09   ` Christoph Hellwig
@ 2016-06-09 19:43     ` Ming Lin
  -1 siblings, 0 replies; 22+ messages in thread
From: Ming Lin @ 2016-06-09 19:43 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-nvme, linux-block, Keith Busch, Jens Axboe, James Smart

On Thu, Jun 9, 2016 at 7:09 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Wed, Jun 08, 2016 at 03:48:10PM -0400, Ming Lin wrote:
>> It adds check code to blk_mq_update_queue_map().
>> But it seems too aggressive, because it's not an error that some hw queues
>> are not mapped to sw queues.
>>
>> So this series just adds a new function, blk_mq_hctx_mapped(), that returns
>> how many hw queues are mapped. A driver that cares about it (for example,
>> nvme-rdma) can then do the check.
>
> I think it would be better to have this number available as a structure
> field.  Any reason not to update nr_hw_queues in the tag set
> with the actual number of queues?

One reason is that we don't know which hctx(s) are not mapped.

HW Queue 1 <-> CPU 0,4
HW Queue 2 <-> CPU 1,5
HW Queue 3 <-> None
HW Queue 4 <-> CPU 2,6
HW Queue 5 <-> CPU 3,7
HW Queue 6 <-> None

If we updated nr_hw_queues from 6 to 4,
then queue_for_each_hw_ctx would no longer work:

#define queue_for_each_hw_ctx(q, hctx, i)                               \
        for ((i) = 0; (i) < (q)->nr_hw_queues &&                        \
             ({ hctx = (q)->queue_hw_ctx[i]; 1; }); (i)++)
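
To make that concrete, here is a standalone toy in plain C (not kernel code)
that mirrors the 6-queue example above, where indexes 2 and 5 are unmapped:

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	/* mirrors the example: HW queues 3 and 6 (indexes 2 and 5) are unmapped */
	bool mapped[6] = { true, true, false, true, true, false };
	int shrunk_nr_hw_queues = 4;	/* nr_hw_queues "updated" to the mapped count */
	int i, visited = 0;

	for (i = 0; i < shrunk_nr_hw_queues; i++)	/* what queue_for_each_hw_ctx iterates */
		if (mapped[i])
			visited++;

	/* prints 3, not 4: the mapped hctx at index 4 is skipped while the
	 * unmapped one at index 2 is still visited */
	printf("visited %d of 4 mapped hctxs\n", visited);
	return 0;
}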

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/2] nvme-rdma: check the number of hw queues mapped
  2016-06-09 14:10       ` Christoph Hellwig
@ 2016-06-09 19:47         ` Ming Lin
  -1 siblings, 0 replies; 22+ messages in thread
From: Ming Lin @ 2016-06-09 19:47 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, linux-nvme, linux-block, Keith Busch, Jens Axboe,
	James Smart

On Thu, Jun 9, 2016 at 7:10 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, Jun 09, 2016 at 02:19:55PM +0300, Sagi Grimberg wrote:
>> This needs documentation in the form of:
>>
>> /*
>>  * XXX: blk-mq might not map all our hw contexts but this is a must for
>>  * us for fabric connects. So until we can fix blk-mq we check that.
>>  */
>
> I think the right thing to do is to have a member of actually mapped
> queues in the block layer, and I also don't think we need the XXX comment
> as there are valid reasons for not mapping all queues.

I think it is a rare case that we need all hw contexts mapped.
It seems unnecessary to add a new field to "struct request_queue" for such a
rare case.

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread

Thread overview:
2016-06-08 19:48 [PATCH 0/2] check the number of hw queues mapped to sw queues Ming Lin
2016-06-08 19:48 ` [PATCH 1/2] blk-mq: add a function to return number of hw queues mapped Ming Lin
2016-06-08 19:48 ` [PATCH 2/2] nvme-rdma: check the " Ming Lin
2016-06-09 11:19   ` Sagi Grimberg
2016-06-09 14:10     ` Christoph Hellwig
2016-06-09 19:47       ` Ming Lin
2016-06-08 22:25 ` [PATCH 0/2] check the number of hw queues mapped to sw queues Keith Busch
2016-06-08 22:47   ` Ming Lin
2016-06-08 23:05     ` Keith Busch
2016-06-09 14:09 ` Christoph Hellwig
2016-06-09 19:43   ` Ming Lin
