From: Sagi Grimberg <sagi@grimberg.me>
To: linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org
Cc: netdev@vger.kernel.org, Saeed Mahameed <saeedm@mellanox.com>,
	Or Gerlitz <ogerlitz@mellanox.com>,
	Christoph Hellwig <hch@lst.de>
Subject: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma
Date: Sun,  2 Apr 2017 16:41:26 +0300	[thread overview]
Message-ID: <1491140492-25703-1-git-send-email-sagi@grimberg.me> (raw)

This patch set aims to automatically find the optimal
queue <-> irq multi-queue assignments in storage ULPs (demonstrated
on nvme-rdma) based on the underlying rdma device irq affinity
settings.

The first two patches modify the mlx5 core driver to use the generic
API to allocate an array of irq vectors with automatic affinity
settings, instead of open-coding essentially the same thing (only
slightly worse).
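
For reference, the conversion essentially means letting the PCI core
spread the vectors instead of doing it by hand. The sketch below is
illustrative only: the vector counts, flags and error handling are
placeholders, not the actual mlx5 code in patches 1-2:

	/*
	 * Illustrative sketch only -- min/max vector counts and error
	 * handling are placeholders, not the exact mlx5 conversion.
	 */
	#include <linux/pci.h>
	#include <linux/interrupt.h>

	static int example_alloc_comp_vectors(struct pci_dev *pdev, int nvec)
	{
		int ret;

		/*
		 * PCI_IRQ_AFFINITY asks the core to spread the vectors
		 * across CPUs and to record the resulting affinity masks,
		 * which pci_irq_get_affinity(pdev, vector) can return later.
		 */
		ret = pci_alloc_irq_vectors(pdev, 1, nvec,
					    PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
		if (ret < 0)
			return ret;

		/* ret is the number of vectors actually allocated */
		return ret;
	}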

Then, in order to obtain an affinity map for a given completion
vector, we expose a new RDMA core API, and implement it in mlx5.
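
The patch titles hint at the shape of that API (a ->get_vector_affinity
callback on the device); below is a minimal sketch of how a consumer
might use it. The wrapper name is my own placeholder, not the helper
added in patch 3:

	/*
	 * Sketch of a consumer of the new verb.  The ->get_vector_affinity
	 * callback name comes from the patch titles; the wrapper below is
	 * a hypothetical example, not the helper added in patch 3.
	 */
	#include <rdma/ib_verbs.h>

	static const struct cpumask *
	example_comp_vector_affinity(struct ib_device *ibdev, int comp_vector)
	{
		/* Drivers that don't implement the callback expose nothing */
		if (!ibdev->get_vector_affinity)
			return NULL;

		return ibdev->get_vector_affinity(ibdev, comp_vector);
	}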

The third part adds an rdma-based queue mapping helper to blk-mq
that maps the tagset hctxs according to the device affinity
mappings.
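
To make the intent concrete, a ULP's .map_queues callback would boil
down to a one-liner calling the new helper. The snippet below is a
guess at the usage, not the nvme-rdma hunk itself; the helper name
matches the new include/linux/blk-mq-rdma.h in the diffstat, but its
exact arguments and the ctrl structure are assumptions:

	/*
	 * Hypothetical usage sketch -- blk_mq_rdma_map_queues() and its
	 * arguments are assumed from the diffstat, and example_ctrl is a
	 * made-up stand-in for the ULP's controller structure.
	 */
	#include <linux/blk-mq.h>
	#include <linux/blk-mq-rdma.h>
	#include <rdma/ib_verbs.h>

	struct example_ctrl {
		struct ib_device *ibdev;
	};

	static int example_map_queues(struct blk_mq_tag_set *set)
	{
		struct example_ctrl *ctrl = set->driver_data;

		/*
		 * Map each hctx to the CPUs affine to the corresponding
		 * completion vector; 0 is the first vector used for I/O
		 * queues in this example.
		 */
		return blk_mq_rdma_map_queues(set, ctrl->ibdev, 0);
	}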

I'd happily convert some more drivers, but I'll need volunteers to
test, as I don't have access to any other devices.

I cc'd netdev (and Saeed + Or) as this is where most of the mlx5
core action takes place; Saeed, I'd love to hear your feedback.

Any feedback is welcome.

Sagi Grimberg (6):
  mlx5: convert to generic pci_alloc_irq_vectors
  mlx5: move affinity hints assignments to generic code
  RDMA/core: expose affinity mappings per completion vector
  mlx5: support ->get_vector_affinity
  block: Add rdma affinity based queue mapping helper
  nvme-rdma: use intelligent affinity based queue mappings

 block/Kconfig                                      |   5 +
 block/Makefile                                     |   1 +
 block/blk-mq-rdma.c                                |  56 +++++++++++
 drivers/infiniband/hw/mlx5/main.c                  |  10 ++
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |   5 +-
 drivers/net/ethernet/mellanox/mlx5/core/eq.c       |   9 +-
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c  |   2 +-
 drivers/net/ethernet/mellanox/mlx5/core/health.c   |   2 +-
 drivers/net/ethernet/mellanox/mlx5/core/main.c     | 106 +++------------------
 .../net/ethernet/mellanox/mlx5/core/mlx5_core.h    |   1 -
 drivers/nvme/host/rdma.c                           |  13 +++
 include/linux/blk-mq-rdma.h                        |  10 ++
 include/linux/mlx5/driver.h                        |   2 -
 include/rdma/ib_verbs.h                            |  24 +++++
 14 files changed, 138 insertions(+), 108 deletions(-)
 create mode 100644 block/blk-mq-rdma.c
 create mode 100644 include/linux/blk-mq-rdma.h

-- 
2.7.4

Thread overview:
2017-04-02 13:41 Sagi Grimberg [this message]
2017-04-02 13:41 ` [PATCH rfc 1/6] mlx5: convert to generic pci_alloc_irq_vectors Sagi Grimberg
2017-04-04  6:27   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 2/6] mlx5: move affinity hints assignments to generic code Sagi Grimberg
2017-04-04  6:32   ` Christoph Hellwig
2017-04-06  8:29     ` Sagi Grimberg
2017-04-02 13:41 ` [PATCH rfc 3/6] RDMA/core: expose affinity mappings per completion vector Sagi Grimberg
2017-04-04  6:32   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 4/6] mlx5: support ->get_vector_affinity Sagi Grimberg
2017-04-04  6:33   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 5/6] block: Add rdma affinity based queue mapping helper Sagi Grimberg
2017-04-04  6:33   ` Christoph Hellwig
2017-04-04  7:46   ` Max Gurtovoy
2017-04-04 13:09     ` Christoph Hellwig
2017-04-06  9:23     ` Sagi Grimberg
2017-04-05 14:17   ` Jens Axboe
2017-04-02 13:41 ` [PATCH rfc 6/6] nvme-rdma: use intelligent affinity based queue mappings Sagi Grimberg
2017-04-04  6:34   ` Christoph Hellwig
2017-04-06  8:30     ` Sagi Grimberg
2017-04-04  7:51 ` [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma Max Gurtovoy
2017-04-06  8:34   ` Sagi Grimberg
2017-04-10 18:05 ` Steve Wise
2017-04-12  6:34   ` Christoph Hellwig