From: ming.lei@redhat.com (Ming Lei)
Subject: [PATCH V2 3/3] nvme pci: introduce module parameter of 'default_queues'
Date: Tue, 1 Jan 2019 13:47:36 +0800
Message-ID: <20190101054735.GB17588@ming.t460p> (raw)
In-Reply-To: <CABhMZUVU-XcvBC9OjNw9=4gsmspy+Bc4urkb5fSo-7JeDO9m=Q@mail.gmail.com>

On Mon, Dec 31, 2018@03:24:55PM -0600, Bjorn Helgaas wrote:
> On Fri, Dec 28, 2018@9:27 PM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On big system with lots of CPU cores, it is easy to consume up irq
> > vectors by assigning defaut queue with num_possible_cpus() irq vectors.
> > Meantime it is often not necessary to allocate so many vectors for
> > reaching NVMe's top performance under that situation.
> 
> s/defaut/default/
> 
> > This patch introduces module parameter of 'default_queues' to try
> > to address this issue reported by Shan Hai.
> 
> Is there a URL to this report by Shan?

http://lists.infradead.org/pipermail/linux-nvme/2018-December/021863.html
http://lists.infradead.org/pipermail/linux-nvme/2018-December/021862.html

http://lists.infradead.org/pipermail/linux-nvme/2018-December/021872.html

> 
> Is there some way you can figure this out automatically instead of
> forcing the user to use a module parameter?

Not yet; otherwise I wouldn't have posted this patch.

> 
> If not, can you provide some guidance in the changelog for how a user
> is supposed to figure out when it's needed and what the value should
> be?  If you add the parameter, I assume that will eventually have to
> be mentioned in a release note, and it would be nice to have something
> to start from.

Ok, that is a good suggestion, how about documenting it via the
following words:

The number of IRQ vectors is a system-wide resource, and it is usually
large enough for every device. However, the NVMe PCI driver allocates
num_possible_cpus() + 1 IRQ vectors for each controller. On a system with
many CPU cores, or with more than one NVMe controller, NVMe can easily
exhaust the IRQ vectors. When this happens, try passing a smaller number
of default queues via the 'default_queues' module parameter. The value
usually has to be >= the number of NUMA nodes, while still being large
enough to reach the NVMe device's top performance, which is often reached
with fewer than num_possible_cpus() + 1 queues.
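As a rough illustration of that guidance, a user could derive a candidate
value from the machine's NUMA topology and persist it via modprobe.d. The
"two queues per node" heuristic below is only an assumption for the sketch,
not tuning advice from the patch itself; only the 'default_queues' parameter
name comes from this series:

```shell
#!/bin/sh
# Count NUMA nodes via sysfs (fall back to 1 if the glob matches nothing)
nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
[ "$nodes" -ge 1 ] || nodes=1

# Online CPU count; the driver's default would be cpus + 1 vectors
cpus=$(getconf _NPROCESSORS_ONLN)

# Assumed heuristic: two queues per NUMA node, clamped to the driver's
# default ceiling, and never below one queue per node
queues=$(( nodes * 2 ))
[ "$queues" -gt $(( cpus + 1 )) ] && queues=$(( cpus + 1 ))
[ "$queues" -lt "$nodes" ] && queues=$nodes

echo "options nvme default_queues=$queues"
# To make it persistent, write that line to e.g.
# /etc/modprobe.d/nvme-queues.conf and regenerate the initramfs.
```

Whether twice the node count is actually enough to reach a given device's
top performance would still have to be verified by benchmarking.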


Thanks,
Ming

Thread overview: 32+ messages
2018-12-29  3:26 [PATCH V2 0/3] nvme pci: two fixes on nvme_setup_irqs Ming Lei
2018-12-29  3:26 ` [PATCH V2 1/3] PCI/MSI: preference to returning -ENOSPC from pci_alloc_irq_vectors_affinity Ming Lei
2018-12-31 22:00   ` Bjorn Helgaas
2018-12-31 22:41     ` Keith Busch
2019-01-01  5:24     ` Ming Lei
2019-01-02 21:02       ` Bjorn Helgaas
2019-01-02 22:46         ` Keith Busch
2018-12-29  3:26 ` [PATCH V2 2/3] nvme pci: fix nvme_setup_irqs() Ming Lei
2018-12-29  3:26 ` [PATCH V2 3/3] nvme pci: introduce module parameter of 'default_queues' Ming Lei
2018-12-31 21:24   ` Bjorn Helgaas
2019-01-01  5:47     ` Ming Lei [this message]
2019-01-02  2:14       ` Shan Hai
     [not found]         ` <20190102073607.GA25590@ming.t460p>
     [not found]           ` <d59007c6-af13-318c-5c9d-438ad7d9149d@oracle.com>
     [not found]             ` <20190102083901.GA26881@ming.t460p>
2019-01-03  2:04               ` Shan Hai
2019-01-02 20:11       ` Bjorn Helgaas
2019-01-03  2:12         ` Ming Lei
2019-01-03  2:52           ` Shan Hai
2019-01-03  3:11             ` Shan Hai
2019-01-03  3:31               ` Ming Lei
2019-01-03  4:36                 ` Shan Hai
2019-01-03 10:34                   ` Ming Lei
2019-01-04  2:53                     ` Shan Hai
2019-01-03  4:51                 ` Shan Hai
2019-01-03  3:21             ` Ming Lei
2019-01-14 13:13 ` [PATCH V2 0/3] nvme pci: two fixes on nvme_setup_irqs John Garry
