From: Keith Busch <kbusch@kernel.org>
To: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
Cc: Keith Busch <kbusch@kernel.org>
Subject: [PATCH 3/3] nvme/pci: Fix read queue count
Date: Sat, 7 Dec 2019 02:13:16 +0900
Message-ID: <20191206171316.2421-4-kbusch@kernel.org>
In-Reply-To: <20191206171316.2421-1-kbusch@kernel.org>
If nvme.write_queues and nvme.poll_queues are specified, the driver may
legitimately request more queues than there are CPUs, provided the
device's queue count feature is large enough. The driver, however, had
been capping the number of interrupt-enabled queues at the CPU count,
artificially limiting the number of read queues even when the controller
could support more.

The driver never requests more IO queues than CPUs for any individual
queue type anyway, so remove the CPU count comparison against the number
of interrupt-enabled IO queues.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/nvme/host/pci.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
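Not part of the patch: a minimal standalone C sketch of the irq_queues
calculation before and after this change, using made-up example numbers
(8 possible CPUs, nvme.write_queues=8, nvme.poll_queues=2, and a
controller granting 18 IO queues). It only mirrors the arithmetic
described above; it is not driver code.

#include <stdio.h>

int main(void)
{
	unsigned int nr_cpus = 8;	/* hypothetical num_possible_cpus() */
	unsigned int nr_io_queues = 18;	/* 8 read + 8 write + 2 poll granted */
	unsigned int this_p_queues = 2;	/* poll queues need no interrupt */
	unsigned int irq_queues_old, irq_queues_new;

	/* Old logic: cap at nr_cpus + 1, leaving only 9 vectors for
	 * 8 read + 8 write queues, so the read queue count is squeezed. */
	if (nr_cpus < nr_io_queues - this_p_queues)
		irq_queues_old = nr_cpus + 1;
	else
		irq_queues_old = nr_io_queues - this_p_queues + 1;

	/* New logic: one vector per non-poll IO queue, plus the admin vector. */
	irq_queues_new = nr_io_queues - this_p_queues + 1;

	printf("old irq_queues=%u, new irq_queues=%u\n",
	       irq_queues_old, irq_queues_new);
	return 0;
}

With these values it prints "old irq_queues=9, new irq_queues=17",
showing the extra interrupt-enabled queues the patch makes available.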
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6b6452486155..d3bed5df9ef1 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2062,7 +2062,6 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 		.priv		= dev,
 	};
 	unsigned int irq_queues, this_p_queues;
-	unsigned int nr_cpus = num_possible_cpus();
 
 	/*
 	 * Poll queues don't need interrupts, but we need at least one IO
@@ -2073,10 +2072,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 		this_p_queues = nr_io_queues - 1;
 		irq_queues = 1;
 	} else {
-		if (nr_cpus < nr_io_queues - this_p_queues)
-			irq_queues = nr_cpus + 1;
-		else
-			irq_queues = nr_io_queues - this_p_queues + 1;
+		irq_queues = nr_io_queues - this_p_queues + 1;
 	}
 	dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
 
--
2.21.0
Thread overview: 7+ messages
2019-12-06 17:13 [PATCH 0/3] nvme specialized queue fixes Keith Busch
2019-12-06 17:13 ` [PATCH 1/3] nvme/pci: Fix write and poll queue types Keith Busch
2019-12-06 17:13 ` [PATCH 2/3] nvme/pci: Limit write queue sizes to possible cpus Keith Busch
2019-12-06 17:13 ` Keith Busch [this message]
2019-12-07 8:55 ` [PATCH 3/3] nvme/pci: Fix read queue count Ming Lei
2019-12-06 17:46 ` [PATCH 0/3] nvme specialized queue fixes Jens Axboe
2019-12-06 17:58 ` Keith Busch