* [PATCH 0/3] nvme-pci: adjust irq max_vector to avoid WARN()
@ 2019-06-08 18:02 Minwoo Im
  2019-06-08 18:02 ` [PATCH 1/3] nvme-pci: remove unnecessary zero for static var Minwoo Im
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Minwoo Im @ 2019-06-08 18:02 UTC (permalink / raw)


Hi All,

This patch series adjusts the max_vector value in nvme_setup_irqs(), which
provides max_vector before the irq set_size is recalculated in
nvme_calc_irq_sets().  The third patch is mainly focused on avoiding the
WARN() which might occur in irq_build_affinity_masks() [1].

This series has been tested with combinations of the following criteria:
	1) write_queues=0..128
	2) poll_queues=0..128

Please feel free to let me know if I have missed anything or if you think
this series is not useful.

Thanks in advance,

[1] WARN messages when modprobe nvme write_queues=32 poll_queues=0:
root@target:~/nvme# nproc
8
root@target:~/nvme# modprobe nvme write_queues=32 poll_queues=0
[   17.925326] nvme nvme0: pci function 0000:00:04.0
[   17.940601] WARNING: CPU: 3 PID: 1030 at kernel/irq/affinity.c:221 irq_create_affinity_masks+0x222/0x330
[   17.940602] Modules linked in: nvme nvme_core [last unloaded: nvme]
[   17.940605] CPU: 3 PID: 1030 Comm: kworker/u17:4 Tainted: G        W         5.1.0+ #156
[   17.940605] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
[   17.940608] Workqueue: nvme-reset-wq nvme_reset_work [nvme]
[   17.940609] RIP: 0010:irq_create_affinity_masks+0x222/0x330
[   17.940611] Code: 4c 8d 4c 24 28 4c 8d 44 24 30 e8 c9 fa ff ff 89 44 24 18 e8 c0 38 fa ff 8b 44 24 18 44 8b 54 24 1c 5a 44 01 d0 41 39 c4 76 02 <0f> 0b 48 89 df 44 01 e5 e8 f1 ce 10 00 48 8b 34 24 44 89 f0 44 01
[   17.940611] RSP: 0018:ffffc90002277c50 EFLAGS: 00010216
[   17.940612] RAX: 0000000000000008 RBX: ffff88807ca48860 RCX: 0000000000000000
[   17.940612] RDX: ffff88807bc03800 RSI: 0000000000000020 RDI: 0000000000000000
[   17.940613] RBP: 0000000000000001 R08: ffffc90002277c78 R09: ffffc90002277c70
[   17.940613] R10: 0000000000000008 R11: 0000000000000001 R12: 0000000000000020
[   17.940614] R13: 0000000000025d08 R14: 0000000000000001 R15: ffff88807bc03800
[   17.940614] FS:  0000000000000000(0000) GS:ffff88807db80000(0000) knlGS:0000000000000000
[   17.940616] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   17.940617] CR2: 00005635e583f790 CR3: 000000000240a000 CR4: 00000000000006e0
[   17.940617] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   17.940618] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   17.940618] Call Trace:
[   17.940622]  __pci_enable_msix_range+0x215/0x540
[   17.940623]  ? kernfs_put+0x117/0x160
[   17.940625]  pci_alloc_irq_vectors_affinity+0x74/0x110
[   17.940626]  nvme_reset_work+0xc30/0x1397 [nvme]
[   17.940628]  ? __switch_to_asm+0x34/0x70
[   17.940628]  ? __switch_to_asm+0x40/0x70
[   17.940629]  ? __switch_to_asm+0x34/0x70
[   17.940630]  ? __switch_to_asm+0x40/0x70
[   17.940630]  ? __switch_to_asm+0x34/0x70
[   17.940631]  ? __switch_to_asm+0x40/0x70
[   17.940632]  ? nvme_irq_check+0x30/0x30 [nvme]
[   17.940633]  process_one_work+0x20b/0x3e0
[   17.940634]  worker_thread+0x1f9/0x3d0
[   17.940635]  ? cancel_delayed_work+0xa0/0xa0
[   17.940636]  kthread+0x117/0x120
[   17.940637]  ? kthread_stop+0xf0/0xf0
[   17.940638]  ret_from_fork+0x3a/0x50
[   17.940639] ---[ end trace aca8a131361cd42a ]---
[   17.942124] nvme nvme0: 7/1/0 default/read/poll queues

Minwoo Im (3):
  nvme-pci: remove unnecessary zero for static var
  nvme-pci: remove queue_count_ops for write,poll_queues
  nvme-pci: adjust irq max_vector using num_possible_cpus()

 drivers/nvme/host/pci.c | 31 ++++++++-----------------------
 1 file changed, 8 insertions(+), 23 deletions(-)

-- 
2.21.0

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/3] nvme-pci: remove unnecessary zero for static var
  2019-06-08 18:02 [PATCH 0/3] nvme-pci: adjust irq max_vector to avoid WARN() Minwoo Im
@ 2019-06-08 18:02 ` Minwoo Im
  2019-06-08 20:27   ` Chaitanya Kulkarni
  2019-06-08 18:02 ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues Minwoo Im
  2019-06-08 18:02 ` [PATCH 3/3] nvme-pci: adjust irq max_vector using num_possible_cpus() Minwoo Im
  2 siblings, 1 reply; 13+ messages in thread
From: Minwoo Im @ 2019-06-08 18:02 UTC (permalink / raw)


poll_queues will be zero even without the explicit zero initialization
here, since static variables are zero-initialized by default.

Cc: Jens Axboe <axboe@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
---
 drivers/nvme/host/pci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 5c051a8470d4..047785023892 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -80,7 +80,7 @@ MODULE_PARM_DESC(write_queues,
 	"Number of queues to use for writes. If not set, reads and writes "
 	"will share a queue set.");
 
-static int poll_queues = 0;
+static int poll_queues;
 module_param_cb(poll_queues, &queue_count_ops, &poll_queues, 0644);
 MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO.");
 
-- 
2.21.0

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues
  2019-06-08 18:02 [PATCH 0/3] nvme-pci: adjust irq max_vector to avoid WARN() Minwoo Im
  2019-06-08 18:02 ` [PATCH 1/3] nvme-pci: remove unnecessary zero for static var Minwoo Im
@ 2019-06-08 18:02 ` Minwoo Im
  2019-06-10  1:51   ` Ming Lei
       [not found]   ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p2>
  2019-06-08 18:02 ` [PATCH 3/3] nvme-pci: adjust irq max_vector using num_possible_cpus() Minwoo Im
  2 siblings, 2 replies; 13+ messages in thread
From: Minwoo Im @ 2019-06-08 18:02 UTC (permalink / raw)


queue_count_set() appears to have been provided to limit the number of
queues for the write/poll queue sets.  But it has been doing nothing
beyond a basic parameter check: the value clamped to num_possible_cpus()
is computed and then thrown away, so the clamp is a nop.

This patch removes queue_count_ops entirely from write_queues and
poll_queues.

Cc: Jens Axboe <axboe@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
---
 drivers/nvme/host/pci.c | 23 ++---------------------
 1 file changed, 2 insertions(+), 21 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 047785023892..20193b11ab0d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -68,20 +68,14 @@ static int io_queue_depth = 1024;
 module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);
 MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2");
 
-static int queue_count_set(const char *val, const struct kernel_param *kp);
-static const struct kernel_param_ops queue_count_ops = {
-	.set = queue_count_set,
-	.get = param_get_int,
-};
-
 static int write_queues;
-module_param_cb(write_queues, &queue_count_ops, &write_queues, 0644);
+module_param(write_queues, int, 0644);
 MODULE_PARM_DESC(write_queues,
 	"Number of queues to use for writes. If not set, reads and writes "
 	"will share a queue set.");
 
 static int poll_queues;
-module_param_cb(poll_queues, &queue_count_ops, &poll_queues, 0644);
+module_param(poll_queues, int, 0644);
 MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO.");
 
 struct nvme_dev;
@@ -146,19 +140,6 @@ static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
 	return param_set_int(val, kp);
 }
 
-static int queue_count_set(const char *val, const struct kernel_param *kp)
-{
-	int n, ret;
-
-	ret = kstrtoint(val, 10, &n);
-	if (ret)
-		return ret;
-	if (n > num_possible_cpus())
-		n = num_possible_cpus();
-
-	return param_set_int(val, kp);
-}
-
 static inline unsigned int sq_idx(unsigned int qid, u32 stride)
 {
 	return qid * 2 * stride;
-- 
2.21.0

* [PATCH 3/3] nvme-pci: adjust irq max_vector using num_possible_cpus()
  2019-06-08 18:02 [PATCH 0/3] nvme-pci: adjust irq max_vector to avoid WARN() Minwoo Im
  2019-06-08 18:02 ` [PATCH 1/3] nvme-pci: remove unnecessary zero for static var Minwoo Im
  2019-06-08 18:02 ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues Minwoo Im
@ 2019-06-08 18:02 ` Minwoo Im
  2 siblings, 0 replies; 13+ messages in thread
From: Minwoo Im @ 2019-06-08 18:02 UTC (permalink / raw)


If "irq_queues" is greater than num_possible_cpus(), nvme_calc_irq_sets()
can end up with an irq set_size for HCTX_TYPE_DEFAULT greater than can
actually be afforded:
2039         affd->set_size[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;

This might trigger the WARN() in irq_build_affinity_masks() shown in [1]:
220         if (nr_present < numvecs)
221                 WARN_ON(nr_present + nr_others < numvecs);

This patch prevents the WARN() by adjusting the max_vector value in
nvme_setup_irqs().

[1] WARN messages when modprobe nvme write_queues=32 poll_queues=0:
root@target:~/nvme# nproc
8
root@target:~/nvme# modprobe nvme write_queues=32 poll_queues=0
[   17.925326] nvme nvme0: pci function 0000:00:04.0
[   17.940601] WARNING: CPU: 3 PID: 1030 at kernel/irq/affinity.c:221 irq_create_affinity_masks+0x222/0x330
[   17.940602] Modules linked in: nvme nvme_core [last unloaded: nvme]
[   17.940605] CPU: 3 PID: 1030 Comm: kworker/u17:4 Tainted: G        W         5.1.0+ #156
[   17.940605] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
[   17.940608] Workqueue: nvme-reset-wq nvme_reset_work [nvme]
[   17.940609] RIP: 0010:irq_create_affinity_masks+0x222/0x330
[   17.940611] Code: 4c 8d 4c 24 28 4c 8d 44 24 30 e8 c9 fa ff ff 89 44 24 18 e8 c0 38 fa ff 8b 44 24 18 44 8b 54 24 1c 5a 44 01 d0 41 39 c4 76 02 <0f> 0b 48 89 df 44 01 e5 e8 f1 ce 10 00 48 8b 34 24 44 89 f0 44 01
[   17.940611] RSP: 0018:ffffc90002277c50 EFLAGS: 00010216
[   17.940612] RAX: 0000000000000008 RBX: ffff88807ca48860 RCX: 0000000000000000
[   17.940612] RDX: ffff88807bc03800 RSI: 0000000000000020 RDI: 0000000000000000
[   17.940613] RBP: 0000000000000001 R08: ffffc90002277c78 R09: ffffc90002277c70
[   17.940613] R10: 0000000000000008 R11: 0000000000000001 R12: 0000000000000020
[   17.940614] R13: 0000000000025d08 R14: 0000000000000001 R15: ffff88807bc03800
[   17.940614] FS:  0000000000000000(0000) GS:ffff88807db80000(0000) knlGS:0000000000000000
[   17.940616] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   17.940617] CR2: 00005635e583f790 CR3: 000000000240a000 CR4: 00000000000006e0
[   17.940617] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   17.940618] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   17.940618] Call Trace:
[   17.940622]  __pci_enable_msix_range+0x215/0x540
[   17.940623]  ? kernfs_put+0x117/0x160
[   17.940625]  pci_alloc_irq_vectors_affinity+0x74/0x110
[   17.940626]  nvme_reset_work+0xc30/0x1397 [nvme]
[   17.940628]  ? __switch_to_asm+0x34/0x70
[   17.940628]  ? __switch_to_asm+0x40/0x70
[   17.940629]  ? __switch_to_asm+0x34/0x70
[   17.940630]  ? __switch_to_asm+0x40/0x70
[   17.940630]  ? __switch_to_asm+0x34/0x70
[   17.940631]  ? __switch_to_asm+0x40/0x70
[   17.940632]  ? nvme_irq_check+0x30/0x30 [nvme]
[   17.940633]  process_one_work+0x20b/0x3e0
[   17.940634]  worker_thread+0x1f9/0x3d0
[   17.940635]  ? cancel_delayed_work+0xa0/0xa0
[   17.940636]  kthread+0x117/0x120
[   17.940637]  ? kthread_stop+0xf0/0xf0
[   17.940638]  ret_from_fork+0x3a/0x50
[   17.940639] ---[ end trace aca8a131361cd42a ]---
[   17.942124] nvme nvme0: 7/1/0 default/read/poll queues

Cc: Jens Axboe <axboe@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
---
 drivers/nvme/host/pci.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 20193b11ab0d..52fe785295cb 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2051,6 +2051,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 		.priv		= dev,
 	};
 	unsigned int irq_queues, this_p_queues;
+	unsigned int nr_cpus = num_possible_cpus();
 
 	/*
 	 * Poll queues don't need interrupts, but we need at least one IO
@@ -2061,7 +2062,10 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 		this_p_queues = nr_io_queues - 1;
 		irq_queues = 1;
 	} else {
-		irq_queues = nr_io_queues - this_p_queues + 1;
+		if (nr_cpus < nr_io_queues - this_p_queues)
+			irq_queues = nr_cpus + 1;
+		else
+			irq_queues = nr_io_queues - this_p_queues + 1;
 	}
 	dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
 
-- 
2.21.0

* [PATCH 1/3] nvme-pci: remove unnecessary zero for static var
  2019-06-08 18:02 ` [PATCH 1/3] nvme-pci: remove unnecessary zero for static var Minwoo Im
@ 2019-06-08 20:27   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 13+ messages in thread
From: Chaitanya Kulkarni @ 2019-06-08 20:27 UTC (permalink / raw)


Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
On 06/08/2019 11:02 AM, Minwoo Im wrote:
> poll_queues will be zero even without zero initialization here.
>
> Cc: Jens Axboe <axboe at fb.com>
> Cc: Ming Lei <ming.lei at redhat.com>
> Cc: Keith Busch <kbusch at kernel.org>
> Cc: Christoph Hellwig <hch at lst.de>
> Cc: Sagi Grimberg <sagi at grimberg.me>
> Signed-off-by: Minwoo Im <minwoo.im.dev at gmail.com>
> ---
>   drivers/nvme/host/pci.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 5c051a8470d4..047785023892 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -80,7 +80,7 @@ MODULE_PARM_DESC(write_queues,
>   	"Number of queues to use for writes. If not set, reads and writes "
>   	"will share a queue set.");
>
> -static int poll_queues = 0;
> +static int poll_queues;
>   module_param_cb(poll_queues, &queue_count_ops, &poll_queues, 0644);
>   MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO.");
>
>

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues
  2019-06-08 18:02 ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues Minwoo Im
@ 2019-06-10  1:51   ` Ming Lei
       [not found]   ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p2>
  1 sibling, 0 replies; 13+ messages in thread
From: Ming Lei @ 2019-06-10  1:51 UTC (permalink / raw)


On Sun, Jun 09, 2019 at 03:02:18AM +0900, Minwoo Im wrote:
> queue_count_set() seems like that it has been provided to limit the
> number of queue entries for write/poll queues.  But, the
> queue_count_set() has been doing nothing but a parameter check even it
> has num_possible_cpus() which is nop.

However, the check itself is valid and shouldn't be a nop, so could you
fix the check instead of removing it?

Thanks,
Ming

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues
       [not found]   ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p2>
@ 2019-06-10  2:25     ` Minwoo Im
  2019-06-10  2:41       ` Ming Lei
       [not found]       ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p1>
  0 siblings, 2 replies; 13+ messages in thread
From: Minwoo Im @ 2019-06-10  2:25 UTC (permalink / raw)


> However, the check is valid, which shouldn't be nop, so could you fix
> the check instead of removing it?

Hi, Ming.

I don't get what you really mean here.  What do you mean by "the check is
valid"?  I don't see any effective check in queue_count_set() other than
the failure check on kstrtoint(); the current code just computes a value
clamped to the number of CPUs and then does nothing with it.

Instead of fixing the check inside that function, I posted the next patch
in this series to make sure the number of irqs requested does not exceed
num_possible_cpus().

Could you please have a look at the next patch in this series?

If I missed something here, please let me know.

Thanks,

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues
  2019-06-10  2:25     ` Minwoo Im
@ 2019-06-10  2:41       ` Ming Lei
       [not found]       ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p1>
  1 sibling, 0 replies; 13+ messages in thread
From: Ming Lei @ 2019-06-10  2:41 UTC (permalink / raw)


On Mon, Jun 10, 2019 at 11:25:22AM +0900, Minwoo Im wrote:
> > However, the check is valid, which shouldn't be nop, so could you fix
> > the check instead of removing it?
> 
> Hi, Ming.
> 
> I don't get what you really mean here.   What do you mean "the check is
> valid"? I don't see any valid checks in queue_count_set(), not just for
> check for the failure by kstrtoint().  I think current code is just checks
> the nr_cpus and do nothing after.
> 
> Instead fixing this check inside of this function, I have posted the next
> patch in this series to make sure the number of irqs requested not 
> exceed the num_possible_num().

I suggest capping 'write_queues' and 'poll_queues' to num_possible_cpus()
from the beginning, instead of starting with an invalid number.


thanks,
Ming

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues
       [not found]       ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p1>
@ 2019-06-10  3:41         ` Minwoo Im
  2019-06-10  3:49           ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write, poll_queues Ming Lei
  2019-06-10  3:52         ` Minwoo Im
  1 sibling, 1 reply; 13+ messages in thread
From: Minwoo Im @ 2019-06-10  3:41 UTC (permalink / raw)


> > > However, the check is valid, which shouldn't be nop, so could you fix
> > > the check instead of removing it?
> >
> > Hi, Ming.
> >
> > I don't get what you really mean here.   What do you mean "the check is
> > valid"? I don't see any valid checks in queue_count_set(), not just for
> > check for the failure by kstrtoint().  I think current code is just checks
> > the nr_cpus and do nothing after.
> >
> > Instead fixing this check inside of this function, I have posted the next
> > patch in this series to make sure the number of irqs requested not
> > exceed the num_possible_num().
> 
> I suggest to cap 'write_queues' or 'poll_writes' to num_possible_num()
> from the beginning, instead of starting with invalid number.

Ming,

Thanks for your review :)

I have already tried to limit the number of queues inside queue_count_set()
in [1], but Christoph suggested limiting the number in nvme_calc_irq_sets()
instead.  It might be that my code was not very good at that time, but could
you please have a look at the mail thread below and give me some advice?

[1]  http://lists.infradead.org/pipermail/linux-nvme/2019-May/023868.html

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write, poll_queues
  2019-06-10  3:41         ` Minwoo Im
@ 2019-06-10  3:49           ` Ming Lei
  0 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2019-06-10  3:49 UTC (permalink / raw)


On Mon, Jun 10, 2019 at 11:41 AM Minwoo Im <minwoo.im@samsung.com> wrote:
>
> > > > However, the check is valid, which shouldn't be nop, so could you fix
> > > > the check instead of removing it?
> > >
> > > Hi, Ming.
> > >
> > > I don't get what you really mean here.   What do you mean "the check is
> > > valid"? I don't see any valid checks in queue_count_set(), not just for
> > > check for the failure by kstrtoint().  I think current code is just checks
> > > the nr_cpus and do nothing after.
> > >
> > > Instead fixing this check inside of this function, I have posted the next
> > > patch in this series to make sure the number of irqs requested not
> > > exceed the num_possible_num().
> >
> > I suggest to cap 'write_queues' or 'poll_writes' to num_possible_num()
> > from the beginning, instead of starting with invalid number.
>
> Ming,
>
> Thanks for your review :)
>
> I have already tried to limit the number of queues inside of queue_count_set()
> in [1].  But Christoph had suggested to limit the number in nvme_calc_irq_sets()
> instead.  It might be my code was not really good at that time, but could you
> please have a look at the mail thread below and give some advices for me?
>
> [1]  http://lists.infradead.org/pipermail/linux-nvme/2019-May/023868.html

Sorry for missing the previous discussion.

IMO, I prefer the fix in the above link, because it is the job of the
kernel_param_ops to make 'write_queues'/'poll_queues' correct from the
beginning.  That is just like the OOP concept in which each method does
one thing.  Then we don't need to bother nvme_calc_irq_sets() with
verifying/fixing them.

Thanks,
Ming Lei

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write, poll_queues
       [not found]       ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p1>
  2019-06-10  3:41         ` Minwoo Im
@ 2019-06-10  3:52         ` Minwoo Im
  2019-06-16  4:51           ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues Minwoo Im
  1 sibling, 1 reply; 13+ messages in thread
From: Minwoo Im @ 2019-06-10  3:52 UTC (permalink / raw)



> > > > > However, the check is valid, which shouldn't be nop, so could you fix
> > > > > the check instead of removing it?
> > > >
> > > > Hi, Ming.
> > > >
> > > > I don't get what you really mean here.   What do you mean "the check is
> > > > valid"? I don't see any valid checks in queue_count_set(), not just for
> > > > check for the failure by kstrtoint().  I think current code is just checks
> > > > the nr_cpus and do nothing after.
> > > >
> > > > Instead fixing this check inside of this function, I have posted the next
> > > > patch in this series to make sure the number of irqs requested not
> > > > exceed the num_possible_num().
> > >
> > > I suggest to cap 'write_queues' or 'poll_writes' to num_possible_num()
> > > from the beginning, instead of starting with invalid number.
> >
> > Ming,
> >
> > Thanks for your review :)
> >
> > I have already tried to limit the number of queues inside of
> queue_count_set()
> > in [1].  But Christoph had suggested to limit the number in
> nvme_calc_irq_sets()
> > instead.  It might be my code was not really good at that time, but could
> you
> > please have a look at the mail thread below and give some advices for me?
> >
> > [1]  http://lists.infradead.org/pipermail/linux-nvme/2019-May/023868.html
> 
> Sorry for missing the previous discussion.
> 
> IMO, I prefer to the fix in above link. Cause it is the value of
> kernel_param_ops
> to make 'write_queue/poll_queue'  correct from the beginning. That is just
> like OOP's concept in which each method just does one thing. Then we don't
> need to bother nvme_calc_irq_sets() for verifying/fixing them.

Christoph,

What do you think about Ming's suggestion?  I am actually okay with the
concept that each function just does its own thing.

It would be great if you could give some advice here :)

Thanks,

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues
  2019-06-10  3:52         ` Minwoo Im
@ 2019-06-16  4:51           ` Minwoo Im
  2019-06-20  6:25             ` Christoph Hellwig
  0 siblings, 1 reply; 13+ messages in thread
From: Minwoo Im @ 2019-06-16  4:51 UTC (permalink / raw)


On 19-06-10 12:52:19, Minwoo Im wrote:
> 
> > > > > > However, the check is valid, which shouldn't be nop, so could you fix
> > > > > > the check instead of removing it?
> > > > >
> > > > > Hi, Ming.
> > > > >
> > > > > I don't get what you really mean here.   What do you mean "the check is
> > > > > valid"? I don't see any valid checks in queue_count_set(), not just for
> > > > > check for the failure by kstrtoint().  I think current code is just checks
> > > > > the nr_cpus and do nothing after.
> > > > >
> > > > > Instead fixing this check inside of this function, I have posted the next
> > > > > patch in this series to make sure the number of irqs requested not
> > > > > exceed the num_possible_num().
> > > >
> > > > I suggest to cap 'write_queues' or 'poll_writes' to num_possible_num()
> > > > from the beginning, instead of starting with invalid number.
> > >
> > > Ming,
> > >
> > > Thanks for your review :)
> > >
> > > I have already tried to limit the number of queues inside of
> > queue_count_set()
> > > in [1].  But Christoph had suggested to limit the number in
> > nvme_calc_irq_sets()
> > > instead.  It might be my code was not really good at that time, but could
> > you
> > > please have a look at the mail thread below and give some advices for me?
> > >
> > > [1]  http://lists.infradead.org/pipermail/linux-nvme/2019-May/023868.html
> > 
> > Sorry for missing the previous discussion.
> > 
> > IMO, I prefer to the fix in above link. Cause it is the value of
> > kernel_param_ops
> > to make 'write_queue/poll_queue'  correct from the beginning. That is just
> > like OOP's concept in which each method just does one thing. Then we don't
> > need to bother nvme_calc_irq_sets() for verifying/fixing them.
> 
> Christoph,
> 
> What do you think about what Ming's suggested?  I am actually okay with the
> concept that the function just does its own things.
> 
> It would be great if you can give some advices here :)
> 
> Thanks,

(Gentle ping)

Hi Christoph,

Could you please take a look at this patch ?

Thanks,

* [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues
  2019-06-16  4:51           ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues Minwoo Im
@ 2019-06-20  6:25             ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2019-06-20  6:25 UTC (permalink / raw)


On Sun, Jun 16, 2019 at 01:51:37PM +0900, Minwoo Im wrote:
> Hi Christoph,
> 
> Could you please take a look at this patch ?

I much prefer this simpler series.  I'll queue it up for 5.3 after
a little testing.

end of thread, other threads:[~2019-06-20  6:25 UTC | newest]

Thread overview: 13+ messages
2019-06-08 18:02 [PATCH 0/3] nvme-pci: adjust irq max_vector to avoid WARN() Minwoo Im
2019-06-08 18:02 ` [PATCH 1/3] nvme-pci: remove unnecessary zero for static var Minwoo Im
2019-06-08 20:27   ` Chaitanya Kulkarni
2019-06-08 18:02 ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues Minwoo Im
2019-06-10  1:51   ` Ming Lei
     [not found]   ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p2>
2019-06-10  2:25     ` Minwoo Im
2019-06-10  2:41       ` Ming Lei
     [not found]       ` <CGME20190610015241epcas3p2a021735345864364cb7f8c6aded4685d@epcms2p1>
2019-06-10  3:41         ` Minwoo Im
2019-06-10  3:49           ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write, poll_queues Ming Lei
2019-06-10  3:52         ` Minwoo Im
2019-06-16  4:51           ` [PATCH 2/3] nvme-pci: remove queue_count_ops for write,poll_queues Minwoo Im
2019-06-20  6:25             ` Christoph Hellwig
2019-06-08 18:02 ` [PATCH 3/3] nvme-pci: adjust irq max_vector using num_possible_cpus() Minwoo Im
