* NVME is not using CPU0
From: Yaroslav Isakov @ 2020-03-08 23:46 UTC
  To: linux-nvme

Hello! I found that my NVMe disk is not assigning any queue to CPU0.
I think this might be a bug related to the admin queue: the function
queue_irq_offset() returns 1, with a comment noting that it accounts
for the admin queue. But on my system, the admin queue is on the same
CPU as q2. Here is the relevant part of /proc/interrupts:
            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
 127:          0          0         28          0          0          0          0          0  IR-PCI-MSI 2097152-edge      nvme0q0
 129:          0     155413          0          0          0          0          0          0  IR-PCI-MSI 2097153-edge      nvme0q1
 130:          0          0      23274          0          0          0          0          0  IR-PCI-MSI 2097154-edge      nvme0q2
 131:          0          0          0        954          0          0          0          0  IR-PCI-MSI 2097155-edge      nvme0q3
 132:          0          0          0          0       1541          0          0          0  IR-PCI-MSI 2097156-edge      nvme0q4
 133:          0          0          0          0          0       1376          0          0  IR-PCI-MSI 2097157-edge      nvme0q5
 134:          0          0          0          0          0          0        851          0  IR-PCI-MSI 2097158-edge      nvme0q6
 135:          0          0          0          0          0          0          0       1419  IR-PCI-MSI 2097159-edge      nvme0q7
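
For what it's worth, the admin queue's interrupt can be inspected
directly (IRQ 127 is nvme0q0 in the table above; the affinity values
shown here are illustrative, not a capture from my machine):

  # cat /proc/irq/127/smp_affinity_list
  0-7
  # cat /proc/irq/127/effective_affinity_list
  2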


* Re: NVME is not using CPU0
From: Keith Busch @ 2020-03-09  1:55 UTC
  To: Yaroslav Isakov; +Cc: linux-nvme

On Mon, Mar 09, 2020 at 12:46:24AM +0100, Yaroslav Isakov wrote:
> Hello! I found that my NVMe disk is not assigning any queue to CPU0.
> I think this might be a bug related to the admin queue: the function
> queue_irq_offset() returns 1, with a comment noting that it accounts
> for the admin queue. But on my system, the admin queue is on the same
> CPU as q2. Here is the relevant part of /proc/interrupts:

All CPUs are assigned an nvme IO queue. Pin your IO process to CPU 0;
it will work just fine. Another way to confirm is to run:

  # cat /sys/block/nvme0n1/mq/*/cpu_list

Every CPU should be accounted for in the output.
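
For example, one plausible way to pin a fio job to CPU 0 (the fio
options here are just a sketch, adjust to your test):

  # taskset -c 0 fio --name=cpu0test --filename=/dev/nvme0n1 \
      --rw=randread --bs=4k --direct=1 --runtime=10 --time_based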

What you've observed is that your controller doesn't support enough IO
queues or MSI vectors (or both) to assign one to each CPU. It just
means that a command you submit on the queue assigned to CPU 0 will be
completed on a different CPU.


* Re: NVME is not using CPU0
From: Yaroslav Isakov @ 2020-03-09  9:49 UTC
  To: Keith Busch; +Cc: linux-nvme

Hello, Keith! I've tried pinning the fio process's threads, but with
no luck. On my system, your command gave this:
0, 1
2
3
4
5
6
7
So it looks like the first queue is supposed to serve two CPUs, but it
is only using CPU1. Also, if I run fio with 2 threads without pinning,
I can see the counts in /proc/interrupts increasing for all CPUs
except CPU0.
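
One more data point (IRQ 129 is nvme0q1 per my /proc/interrupts dump;
the outputs below are what I would expect, shown for illustration):
the affinity mask for queue 1's interrupt includes CPU0, yet the
interrupt is actually delivered only to CPU1:

  # cat /proc/irq/129/smp_affinity_list
  0-1
  # cat /proc/irq/129/effective_affinity_list
  1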

On Mon, Mar 9, 2020 at 02:55, Keith Busch <kbusch@kernel.org> wrote:
>
> On Mon, Mar 09, 2020 at 12:46:24AM +0100, Yaroslav Isakov wrote:
> > Hello! I found that my NVMe disk is not assigning any queue to CPU0.
> > I think this might be a bug related to the admin queue: the function
> > queue_irq_offset() returns 1, with a comment noting that it accounts
> > for the admin queue. But on my system, the admin queue is on the same
> > CPU as q2. Here is the relevant part of /proc/interrupts:
>
> All CPUs are assigned an nvme IO queue. Pin your IO process to CPU 0;
> it will work just fine. Another way to confirm is to run:
>
>   # cat /sys/block/nvme0n1/mq/*/cpu_list
>
> Every CPU should be accounted for in the output.
>
> What you've observed is that your controller doesn't support enough IO
> queues or MSI vectors (or both) to assign one to each CPU. It just
> means that a command you submit on the queue assigned to CPU 0 will be
> completed on a different CPU.


* Re: NVME is not using CPU0
From: Keith Busch @ 2020-03-09 14:16 UTC
  To: Yaroslav Isakov; +Cc: linux-nvme

On Mon, Mar 09, 2020 at 10:49:43AM +0100, Yaroslav Isakov wrote:
> Hello, Keith! I've tried pinning the fio process's threads, but with
> no luck. On my system, your command gave this:
> 0, 1
> 2
> 3
> 4
> 5
> 6
> 7
> So it looks like the first queue is supposed to serve two CPUs, but it
> is only using CPU1. Also, if I run fio with 2 threads without pinning,
> I can see the counts in /proc/interrupts increasing for all CPUs
> except CPU0.

/proc/interrupts shows which CPU handled a completion; it doesn't show
which CPU handled the submission. You don't have enough interrupt
vectors to assign one to each CPU, so some CPUs won't receive
interrupts.
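
To see this for yourself, run a fio job pinned to CPU 0 (as suggested
in my earlier mail) and watch the per-CPU counters; with your mapping,
the completions should show up in nvme0q1's CPU1 column:

  # watch -d 'grep nvme0q /proc/interrupts'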


* Re: NVME is not using CPU0
From: Yaroslav Isakov @ 2020-03-09 23:29 UTC
  To: Keith Busch; +Cc: linux-nvme

Keith, thank you!
After some debugging, I've found that the kernel sets up 8 IO queues
on the NVMe disk (which is, by the way, a Samsung 960 Pro) and tries
to create 9 IRQ vectors, but it looks like the device supports no
more than 8:
> lspci -s 02:00.0 -v | grep MSI-X
> Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
So I wonder: is this Count a hardware limit of the device that cannot
be raised (i.e., cannot be changed via Set Features, feature ID 7)?
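
For completeness, the controller's Number of Queues feature (feature
ID 7) can be read back with nvme-cli (just the command; I haven't
pasted its output here):

  # nvme get-feature /dev/nvme0 -f 7 -H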

On Mon, Mar 9, 2020 at 15:16, Keith Busch <kbusch@kernel.org> wrote:
>
> On Mon, Mar 09, 2020 at 10:49:43AM +0100, Yaroslav Isakov wrote:
> > Hello, Keith! I've tried pinning the fio process's threads, but with
> > no luck. On my system, your command gave this:
> > 0, 1
> > 2
> > 3
> > 4
> > 5
> > 6
> > 7
> > So it looks like the first queue is supposed to serve two CPUs, but it
> > is only using CPU1. Also, if I run fio with 2 threads without pinning,
> > I can see the counts in /proc/interrupts increasing for all CPUs
> > except CPU0.
>
> /proc/interrupts shows which CPU handled a completion; it doesn't show
> which CPU handled the submission. You don't have enough interrupt
> vectors to assign one to each CPU, so some CPUs won't receive
> interrupts.


* Re: NVME is not using CPU0
From: Keith Busch @ 2020-03-10  3:11 UTC
  To: Yaroslav Isakov; +Cc: linux-nvme

On Tue, Mar 10, 2020 at 12:29:38AM +0100, Yaroslav Isakov wrote:
> Keith, thank you!
> After some debugging, I've found that the kernel sets up 8 IO queues
> on the NVMe disk (which is, by the way, a Samsung 960 Pro) and tries
> to create 9 IRQ vectors, but it looks like the device supports no
> more than 8:
> > lspci -s 02:00.0 -v | grep MSI-X
> > Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
> So I wonder: is this Count a hardware limit of the device that cannot
> be raised (i.e., cannot be changed via Set Features, feature ID 7)?

The MSI-X vector count is a PCI property, and it's shipped that way
from the manufacturer. NVMe Set Features can change how many queues
the controller makes available, but not how many MSI-X vectors the
PCI function implements.
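
If you're curious, you can read the limit straight out of config
space: the MSI-X table size is in bits 10:0 of the Message Control
word, encoded as N-1. Offset b2 below assumes the capability at 0xb0
from your lspci output, and the value is what an 8-vector device with
MSI-X enabled would return, not a real capture:

  # setpci -s 02:00.0 b2.w
  8007

Bit 15 is the MSI-X enable bit; the low field 0x007 means a table
size of 8 vectors, and that field is read-only.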

