From: Jens Axboe <axboe@kernel.dk>
To: Keith Busch <kbusch@kernel.org>,
	linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
Subject: Re: [PATCH 0/3] nvme specialized queue fixes
Date: Fri, 6 Dec 2019 10:46:04 -0700	[thread overview]
Message-ID: <360fdd5a-f2ce-86c6-55e5-a15ff2f9e1cc@kernel.dk> (raw)
In-Reply-To: <20191206171316.2421-1-kbusch@kernel.org>

On 12/6/19 10:13 AM, Keith Busch wrote:
> The nvme pci module had been allowing bad values for the specialized
> polled and write queues, which could cause the driver to allocate
> queues incorrectly or suboptimally.
> 
> For example, on a system with 16 CPUs, if I have module parameter
> nvme.write_queues=17, we would get only 1 read queue no matter how many
> the drive supported:
> 
>   # dmesg | grep nvme | grep "poll queues"
>   nvme nvme2: 16/1/2 default/read/poll queues
>   nvme nvme1: 16/1/2 default/read/poll queues
>   nvme nvme0: 16/1/2 default/read/poll queues
>   nvme nvme3: 16/1/2 default/read/poll queues
> 
> But after fixing:
> 
>   # dmesg | grep nvme | grep "poll queues"
>   nvme nvme1: 16/16/2 default/read/poll queues
>   nvme nvme2: 16/13/2 default/read/poll queues
>   nvme nvme0: 16/16/2 default/read/poll queues
>   nvme nvme3: 16/13/2 default/read/poll queues
> 
> We just need to fix the calculation so that we don't incorrectly
> throttle the total number of desired queues. The first two patches
> ensure the module parameters are within reasonable bounds, which
> simplifies counting the number of interrupts we want to allocate.
> 
> Keith Busch (3):
>   nvme/pci: Fix write and poll queue types
>   nvme/pci: Limit write queue sizes to possible cpus
>   nvme/pci: Fix read queue count
> 
>  drivers/nvme/host/pci.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
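
The clamping and accounting described in the cover letter can be sketched
roughly as follows. This is a hypothetical, simplified illustration of the
queue split; the names (struct queue_counts, calc_queue_counts) and the
exact logic are assumptions for demonstration, not the actual
drivers/nvme/host/pci.c code:

```c
#include <assert.h>

/* Hypothetical sketch of the default/read/poll queue accounting. */
struct queue_counts {
	int def;	/* default (write) queues */
	int read;
	int poll;
};

static struct queue_counts
calc_queue_counts(int nr_cpus, int nr_avail, int write_queues, int poll_queues)
{
	struct queue_counts qc;

	/* Clamp the module parameters to the possible CPU count up front,
	 * so e.g. write_queues=17 on a 16-CPU box behaves like 16. */
	if (write_queues > nr_cpus)
		write_queues = nr_cpus;
	if (poll_queues > nr_cpus)
		poll_queues = nr_cpus;

	/* Carve out poll queues first, leaving at least one IRQ queue. */
	qc.poll = poll_queues < nr_avail - 1 ? poll_queues : nr_avail - 1;
	nr_avail -= qc.poll;

	/* Default (write) queues: the requested count, else one per CPU,
	 * capped at what the device still has available. */
	qc.def = write_queues ? write_queues : nr_cpus;
	if (qc.def > nr_avail)
		qc.def = nr_avail;
	nr_avail -= qc.def;

	/* Whatever remains becomes read queues, one per CPU at most. */
	qc.read = nr_avail < nr_cpus ? nr_avail : nr_cpus;
	return qc;
}
```

Under this sketch, with 16 CPUs and write_queues=17, a controller exposing
34 queues splits as 16/16/2 and one exposing 31 splits as 16/13/2, matching
the fixed dmesg output quoted above.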

Looks good to me:

Reviewed-by: Jens Axboe <axboe@kernel.dk>

-- 
Jens Axboe


_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


Thread overview: 7+ messages
2019-12-06 17:13 [PATCH 0/3] nvme specialized queue fixes Keith Busch
2019-12-06 17:13 ` [PATCH 1/3] nvme/pci: Fix write and poll queue types Keith Busch
2019-12-06 17:13 ` [PATCH 2/3] nvme/pci: Limit write queue sizes to possible cpus Keith Busch
2019-12-06 17:13 ` [PATCH 3/3] nvme/pci: Fix read queue count Keith Busch
2019-12-07  8:55   ` Ming Lei
2019-12-06 17:46 ` Jens Axboe [this message]
2019-12-06 17:58   ` [PATCH 0/3] nvme specialized queue fixes Keith Busch
