From: Sagi Grimberg <sagi@grimberg.me>
To: linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org
Cc: netdev@vger.kernel.org, Saeed Mahameed <saeedm@mellanox.com>,
	Or Gerlitz <ogerlitz@mellanox.com>, Christoph Hellwig <hch@lst.de>
Subject: [PATCH rfc 6/6] nvme-rdma: use intelligent affinity based queue mappings
Date: Sun, 2 Apr 2017 16:41:32 +0300	[thread overview]
Message-ID: <1491140492-25703-7-git-send-email-sagi@grimberg.me> (raw)
In-Reply-To: <1491140492-25703-1-git-send-email-sagi@grimberg.me>

Use the generic block layer affinity mapping helper. Also, limit
nr_hw_queues to the rdma device number of irq vectors as we don't
really need more.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/rdma.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4aae363943e3..81ee5b1207c8 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -19,6 +19,7 @@
 #include <linux/string.h>
 #include <linux/atomic.h>
 #include <linux/blk-mq.h>
+#include <linux/blk-mq-rdma.h>
 #include <linux/types.h>
 #include <linux/list.h>
 #include <linux/mutex.h>
@@ -645,10 +646,14 @@ static int nvme_rdma_connect_io_queues(struct nvme_rdma_ctrl *ctrl)
 static int nvme_rdma_init_io_queues(struct nvme_rdma_ctrl *ctrl)
 {
 	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+	struct ib_device *ibdev = ctrl->device->dev;
 	unsigned int nr_io_queues;
 	int i, ret;
 
 	nr_io_queues = min(opts->nr_io_queues, num_online_cpus());
+	nr_io_queues = min_t(unsigned int, nr_io_queues,
+				ibdev->num_comp_vectors);
+
 	ret = nvme_set_queue_count(&ctrl->ctrl, &nr_io_queues);
 	if (ret)
 		return ret;
@@ -1523,6 +1528,13 @@ static void nvme_rdma_complete_rq(struct request *rq)
 	nvme_complete_rq(rq);
 }
 
+static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
+{
+	struct nvme_rdma_ctrl *ctrl = set->driver_data;
+
+	return blk_mq_rdma_map_queues(set, ctrl->device->dev, 0);
+}
+
 static const struct blk_mq_ops nvme_rdma_mq_ops = {
 	.queue_rq	= nvme_rdma_queue_rq,
 	.complete	= nvme_rdma_complete_rq,
@@ -1532,6 +1544,7 @@ static const struct blk_mq_ops nvme_rdma_mq_ops = {
 	.init_hctx	= nvme_rdma_init_hctx,
 	.poll		= nvme_rdma_poll,
 	.timeout	= nvme_rdma_timeout,
+	.map_queues	= nvme_rdma_map_queues,
 };
 
 static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
-- 
2.7.4
next prev parent reply	other threads:[~2017-04-02 13:42 UTC|newest]

Thread overview: 52+ messages (cross-posted duplicates elided) / expand[flat|nested]  mbox.gz  Atom feed  top
2017-04-02 13:41 [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma Sagi Grimberg
2017-04-02 13:41 ` [PATCH rfc 1/6] mlx5: convert to generic pci_alloc_irq_vectors Sagi Grimberg
2017-04-04  6:27   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 2/6] mlx5: move affinity hints assignments to generic code Sagi Grimberg
2017-04-04  6:32   ` Christoph Hellwig
2017-04-06  8:29     ` Sagi Grimberg
2017-04-02 13:41 ` [PATCH rfc 3/6] RDMA/core: expose affinity mappings per completion vector Sagi Grimberg
2017-04-04  6:32   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 4/6] mlx5: support ->get_vector_affinity Sagi Grimberg
2017-04-04  6:33   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 5/6] block: Add rdma affinity based queue mapping helper Sagi Grimberg
2017-04-04  6:33   ` Christoph Hellwig
2017-04-04  7:46   ` Max Gurtovoy
2017-04-04 13:09     ` Christoph Hellwig
2017-04-06  9:23       ` Sagi Grimberg
2017-04-05 14:17   ` Jens Axboe
2017-04-02 13:41 ` Sagi Grimberg [this message]
2017-04-04  6:34   ` [PATCH rfc 6/6] nvme-rdma: use intelligent affinity based queue mappings Christoph Hellwig
2017-04-06  8:30     ` Sagi Grimberg
2017-04-04  7:51 ` [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma Max Gurtovoy
2017-04-06  8:34   ` Sagi Grimberg
2017-04-10 18:05 ` Steve Wise
2017-04-12  6:34   ` Christoph Hellwig