From: Keith Busch <kbusch@kernel.org>
To: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
Cc: Keith Busch <kbusch@kernel.org>
Subject: [PATCH 0/3] nvme specialized queue fixes
Date: Sat, 7 Dec 2019 02:13:13 +0900 [thread overview]
Message-ID: <20191206171316.2421-1-kbusch@kernel.org> (raw)
The nvme pci module has been allowing bad values for the specialized
poll and write queue module parameters, which could cause the driver to
allocate queues incorrectly or suboptimally.
For example, on a system with 16 CPUs, if I have module parameter
nvme.write_queues=17, we would get only 1 read queue no matter how many
the drive supported:
# dmesg | grep nvme | grep "poll queues"
nvme nvme2: 16/1/2 default/read/poll queues
nvme nvme1: 16/1/2 default/read/poll queues
nvme nvme0: 16/1/2 default/read/poll queues
nvme nvme3: 16/1/2 default/read/poll queues
But after fixing:
# dmesg | grep nvme | grep "poll queues"
nvme nvme1: 16/16/2 default/read/poll queues
nvme nvme2: 16/13/2 default/read/poll queues
nvme nvme0: 16/16/2 default/read/poll queues
nvme nvme3: 16/13/2 default/read/poll queues
We just need to fix the calculation so that we don't incorrectly
throttle the total number of desired queues. The first two patches
ensure the module parameters are within reasonable boundaries, which
simplifies counting the number of interrupts we want to allocate.
Keith Busch (3):
nvme/pci: Fix write and poll queue types
nvme/pci: Limit write queue sizes to possible cpus
nvme/pci: Fix read queue count
drivers/nvme/host/pci.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
--
2.21.0
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 7+ messages
2019-12-06 17:13 Keith Busch [this message]
2019-12-06 17:13 ` [PATCH 1/3] nvme/pci: Fix write and poll queue types Keith Busch
2019-12-06 17:13 ` [PATCH 2/3] nvme/pci: Limit write queue sizes to possible cpus Keith Busch
2019-12-06 17:13 ` [PATCH 3/3] nvme/pci: Fix read queue count Keith Busch
2019-12-07 8:55 ` Ming Lei
2019-12-06 17:46 ` [PATCH 0/3] nvme specialized queue fixes Jens Axboe
2019-12-06 17:58 ` Keith Busch