linux-nvme.lists.infradead.org archive mirror
* Possible to configure nvme irq cpu affinity?
@ 2019-10-31 22:20 Jeffrey Baker
  0 siblings, 0 replies; only message in thread
From: Jeffrey Baker @ 2019-10-31 22:20 UTC (permalink / raw)
  To: linux-nvme

I'm running kernel 4.15, the vendor kernel from Ubuntu 16.04, on a
system with four Samsung NVMe devices and 20 CPU cores (40 CPU
threads). I'm seeing a weird mapping of nvme IRQs to CPUs. Each device
has 32 queues, and each queue maps to 1 or 2 CPUs:

# grep -H . /sys/block/nvme0n1/mq/*/cpu_list | sort -t/ -k6n
/sys/block/nvme0n1/mq/0/cpu_list:0, 20
/sys/block/nvme0n1/mq/1/cpu_list:1, 21
/sys/block/nvme0n1/mq/2/cpu_list:2, 22
/sys/block/nvme0n1/mq/3/cpu_list:3, 23
/sys/block/nvme0n1/mq/4/cpu_list:4
/sys/block/nvme0n1/mq/5/cpu_list:5
/sys/block/nvme0n1/mq/6/cpu_list:6
/sys/block/nvme0n1/mq/7/cpu_list:7
/sys/block/nvme0n1/mq/8/cpu_list:8
/sys/block/nvme0n1/mq/9/cpu_list:9
/sys/block/nvme0n1/mq/10/cpu_list:24
/sys/block/nvme0n1/mq/11/cpu_list:25
/sys/block/nvme0n1/mq/12/cpu_list:26
/sys/block/nvme0n1/mq/13/cpu_list:27
/sys/block/nvme0n1/mq/14/cpu_list:28
/sys/block/nvme0n1/mq/15/cpu_list:29
/sys/block/nvme0n1/mq/16/cpu_list:10, 30
/sys/block/nvme0n1/mq/17/cpu_list:11, 31
/sys/block/nvme0n1/mq/18/cpu_list:12, 32
/sys/block/nvme0n1/mq/19/cpu_list:13, 33
/sys/block/nvme0n1/mq/20/cpu_list:14
/sys/block/nvme0n1/mq/21/cpu_list:15
/sys/block/nvme0n1/mq/22/cpu_list:16
/sys/block/nvme0n1/mq/23/cpu_list:17
/sys/block/nvme0n1/mq/24/cpu_list:18
/sys/block/nvme0n1/mq/25/cpu_list:19
/sys/block/nvme0n1/mq/26/cpu_list:34
/sys/block/nvme0n1/mq/27/cpu_list:35
/sys/block/nvme0n1/mq/28/cpu_list:36
/sys/block/nvme0n1/mq/29/cpu_list:37
/sys/block/nvme0n1/mq/30/cpu_list:38
/sys/block/nvme0n1/mq/31/cpu_list:39
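The same picture from the interrupt side, across all four devices, can
be dumped like this (a sketch; the IRQ numbers are system-specific, and
smp_affinity_list may print ranges such as 0-3 rather than pairs):

# for i in $(awk -F: '/nvme/ {print $1}' /proc/interrupts); do echo "$i: $(cat /proc/irq/$i/smp_affinity_list)"; done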

There are 33 interrupts for these 32 queues, nvme0q0-q32 (presumably
the extra one, nvme0q0, is the admin queue). I don't understand that,
but it's not the main problem. My problem is that I can't change them:
writing smp_affinity fails with EIO.

# cat /proc/irq/43/affinity_hint
00,00000000
# cat /proc/irq/43/smp_affinity
00,00100001
# echo 1 > /proc/irq/43/smp_affinity
-su: echo: write error: Input/output error
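
My guess is that these are kernel-managed affinities
(IRQD_AFFINITY_MANAGED), which the kernel has refused to let userspace
override since the automatic IRQ spreading went in around 4.8; writing
smp_affinity on a managed IRQ fails with EIO by design. If the kernel
was built with CONFIG_GENERIC_IRQ_DEBUGFS, something like this should
confirm it:

# mount -t debugfs none /sys/kernel/debug 2>/dev/null
# grep MANAGED /sys/kernel/debug/irq/irqs/43
            IRQD_AFFINITY_MANAGED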

The reason I want to change this is that my application's I/O rate is
pretty modest but constant. Because these IRQs are spread around such
that every core has 1-8 of them, none of my cores ever goes to sleep
(C6 state), and consequently none of the active cores can ever clock
up to the higher P-states ("turbo"). In short, all 40 CPUs are stuck
at 2500 MHz all the time, even though under this workload I should be
seeing 3100 MHz opportunistically. This measurably harms my service
latency, not to mention the power consumption.
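
For what it's worth, the effect is easy to see with turbostat (a
sketch; the --show option and exact column names vary by turbostat
version):

# turbostat --show CPU,Bzy_MHz,CPU%c6 sleep 10

Bzy_MHz is the average busy-clock and CPU%c6 the C6 residency per
hardware thread; here the former sits pinned at ~2500 and the latter
near zero on every thread.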

If I could concentrate these queues on fewer CPUs, I would.  Is there
a newer kernel where this can be configured?

-jwb

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
