From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
	"Jérôme Glisse" <jglisse@redhat.com>,
	linux-rdma@vger.kernel.org, "Jason Gunthorpe" <jgg@mellanox.com>,
	"Leon Romanovsky" <leonro@mellanox.com>,
	"Doug Ledford" <dledford@redhat.com>,
	"Artemy Kovalyov" <artemyko@mellanox.com>,
	"Moni Shoua" <monis@mellanox.com>,
	"Mike Marciniszyn" <mike.marciniszyn@intel.com>,
	"Kaike Wan" <kaike.wan@intel.com>,
	"Dennis Dalessandro" <dennis.dalessandro@intel.com>
Subject: [RFC PATCH 0/1] Use HMM for ODP
Date: Tue, 29 Jan 2019 11:58:38 -0500
Message-ID: <20190129165839.4127-1-jglisse@redhat.com>

From: Jérôme Glisse <jglisse@redhat.com>

This patchset converts RDMA ODP to use HMM underneath. The motivation is
stronger code sharing for the same feature (Shared Virtual Memory, SVM,
also known as Shared Virtual Address, SVA) and tighter integration with
the mm code to achieve that. It depends on the HMM patchset posted for
inclusion in 5.1 [2], so the earliest target for this is 5.2. I welcome
any testing people can do on this.

Moreover, there are some HMM features in the works, such as peer to peer
support, fast CPU page table snapshots, fast IOMMU mapping updates ...
It will be easier for RDMA devices with ODP to leverage those if they
use HMM underneath.

Quick summary of what HMM is:
    HMM is a toolbox for device drivers to implement software support for
    Shared Virtual Memory (SVM). It provides helpers to mirror a process
    address space on a device (hmm_mirror), as well as helpers to use
    device memory to back regular valid virtual addresses of a process
    (any valid mmap that is not an mmap of a device or a DAX mapping).
    There are two kinds of device memory: private memory, which is not
    accessible to the CPU because it does not have all the expected
    properties (this is the case for all PCIE devices), and public memory,
    which the CPU can also access without restriction (with OpenCAPI,
    CCIX, or a similar cache-coherent and atomic interconnect).

    A device driver can use each of the HMM tools separately; you do not
    have to use all the tools it provides. A sketch of the mirror part of
    the API follows below.
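
To make the hmm_mirror piece above concrete, here is a minimal,
hypothetical sketch of how a driver might register a mirror, loosely
following the 5.0-era include/linux/hmm.h API. The names
my_device_mirror, my_sync_cpu_device_pagetables, and my_mirror_init are
illustrative only, and the exact callback signatures have varied between
kernel releases; treat this as an assumption-laden sketch, not the code
this patchset adds.

#include <linux/hmm.h>

struct my_device_mirror {
	struct hmm_mirror mirror;	/* embedded HMM mirror state */
	/* ... device specific page tables, locks, etc. ... */
};

/*
 * HMM invokes this callback when the CPU page table changes; the driver
 * must invalidate any device mapping covering the updated range.
 */
static int my_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
					 const struct hmm_update *update)
{
	struct my_device_mirror *dmirror =
		container_of(mirror, struct my_device_mirror, mirror);

	/*
	 * A real driver would walk its device page tables here and
	 * invalidate everything in [update->start, update->end).
	 */
	pr_debug("invalidate %#lx-%#lx on mirror %p\n",
		 update->start, update->end, dmirror);
	return 0;
}

static const struct hmm_mirror_ops my_mirror_ops = {
	.sync_cpu_device_pagetables = my_sync_cpu_device_pagetables,
};

static int my_mirror_init(struct my_device_mirror *dmirror,
			  struct mm_struct *mm)
{
	dmirror->mirror.ops = &my_mirror_ops;
	/*
	 * Ties the device mirror to this process address space; teardown
	 * is symmetric via hmm_mirror_unregister().
	 */
	return hmm_mirror_register(&dmirror->mirror, mm);
}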

For RDMA devices I do not expect a need for the device memory support of
HMM; that support is geared toward accelerators like GPUs.


You can find a branch [1] with all the prerequisites in it. This patch is
on top of 5.0-rc2+, but I can rebase it on any specific branch before it
is considered for inclusion (5.2 at the earliest).

Questions and reviews are more than welcome.

[1] https://cgit.freedesktop.org/~glisse/linux/log/?h=odp-hmm
[2] https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-for-5.1

Cc: linux-rdma@vger.kernel.org
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Artemy Kovalyov <artemyko@mellanox.com>
Cc: Moni Shoua <monis@mellanox.com>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Kaike Wan <kaike.wan@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>

Jérôme Glisse (1):
  RDMA/odp: convert to use HMM for ODP

 drivers/infiniband/core/umem_odp.c | 483 ++++++++---------------------
 drivers/infiniband/hw/mlx5/mem.c   |  20 +-
 drivers/infiniband/hw/mlx5/mr.c    |   2 +-
 drivers/infiniband/hw/mlx5/odp.c   |  95 +++---
 include/rdma/ib_umem_odp.h         |  54 +---
 5 files changed, 202 insertions(+), 452 deletions(-)

-- 
2.17.2

Thread overview: 9+ messages
2019-01-29 16:58 jglisse [this message]
2019-01-29 16:58 ` [PATCH 1/1] RDMA/odp: convert to use HMM for ODP jglisse
2019-02-06  8:44   ` Haggai Eran
2019-02-12 16:11     ` Jerome Glisse
2019-02-13  6:28       ` Haggai Eran
2019-02-20 22:20       ` Jason Gunthorpe
2019-02-20 22:29         ` Jerome Glisse
2019-02-21 22:59           ` Jason Gunthorpe
2019-02-22  2:01             ` Jerome Glisse
