From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Logan Gunthorpe" <logang@deltatee.com>,
	"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	"Rafael J . Wysocki" <rafael@kernel.org>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	"Christian Koenig" <christian.koenig@amd.com>,
	"Felix Kuehling" <Felix.Kuehling@amd.com>,
	"Jason Gunthorpe" <jgg@mellanox.com>,
	linux-pci@vger.kernel.org, dri-devel@lists.freedesktop.org,
	"Christoph Hellwig" <hch@lst.de>,
	"Marek Szyprowski" <m.szyprowski@samsung.com>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"Joerg Roedel" <jroedel@suse.de>,
	iommu@lists.linux-foundation.org
Subject: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma
Date: Tue, 29 Jan 2019 12:47:26 -0500
Message-ID: <20190129174728.6430-4-jglisse@redhat.com>
In-Reply-To: <20190129174728.6430-1-jglisse@redhat.com>

From: Jérôme Glisse <jglisse@redhat.com>

Allow mmap of a device file to export device memory to peer to peer
devices. This will allow, for instance, a network device to access
GPU memory or to access a storage device's queues directly.

The common case will be a vma created by a userspace device driver
that is then shared with another userspace device driver, which calls
into its kernel device driver to map that vma.
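
As a minimal sketch of the importer side (every name below except
p2p_map and the vm_operations_struct plumbing is hypothetical), an
importing kernel driver handed such a vma might map a range of it
like this:

    /*
     * Hypothetical importer-side helper: ask the exporting driver to
     * map [start, end) of its vma for DMA by "device". The pa array
     * is assumed to hold one dma_addr_t per page in the range.
     */
    static long importer_map_range(struct vm_area_struct *vma,
                                   struct device *device,
                                   unsigned long start,
                                   unsigned long end,
                                   dma_addr_t *pa)
    {
        if (!vma->vm_ops || !vma->vm_ops->p2p_map)
            return -EOPNOTSUPP;

        /* The exporting driver does the device-specific work. */
        return vma->vm_ops->p2p_map(vma, device, start, end, pa,
                                    true /* write */);
    }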

The vma does not need to have any valid CPU mapping, in which case
only peer to peer devices can access its content. It can also have a
valid CPU mapping; in that case the CPU mapping should point to the
same memory for consistency.

Note that peer to peer mapping is highly platform and device
dependent and might not work in all cases. However, we do expect
support for this to grow across more hardware platforms.

This patch only adds the new callbacks to vm_operations_struct; the
bulk of the code will live within common bus drivers (like PCI) and
device drivers (both the exporting and the importing device).
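
As a hedged illustration of the exporting side (struct p2p_exporter,
its fields, and the convention of returning the number of pages
mapped are assumptions, not part of this patch), a driver whose vma
is backed by a PCI BAR might implement the callback along these
lines:

    /*
     * Hypothetical exporter-side sketch: translate a range of the vma
     * into bus addresses inside a PCI BAR. A real implementation must
     * also track importers so it can invalidate them (through
     * mmu_notifier) before moving anything around.
     */
    static long exporter_p2p_map(struct vm_area_struct *vma,
                                 struct device *device,
                                 unsigned long start,
                                 unsigned long end,
                                 dma_addr_t *pa,
                                 bool write)
    {
        struct p2p_exporter *ex = vma->vm_private_data;
        unsigned long npages = (end - start) >> PAGE_SHIFT;
        unsigned long i;

        for (i = 0; i < npages; ++i) {
            unsigned long off = (start - vma->vm_start) +
                                (i << PAGE_SHIFT);

            /* Bus address the importing device should use. */
            pa[i] = ex->bar_dma_addr + off;
        }
        return npages;
    }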

The current design mandates that the importer obey mmu_notifier and
invalidate any peer to peer mapping anytime an invalidation
notification happens for a range that has been peer to peer mapped.
This allows the exporting device to easily invalidate mappings for
any importing device.
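
A minimal sketch of that importer obligation (struct importer and the
quiesce helper are hypothetical, and the exact mmu_notifier callback
signature varies by kernel version, so only the shared body is
shown):

    /*
     * Called from the importer's mmu_notifier invalidate callback for
     * any [start, end) that overlaps the peer to peer mapped range.
     */
    static void importer_invalidate(struct importer *imp,
                                    unsigned long start,
                                    unsigned long end)
    {
        if (end <= imp->start || start >= imp->end)
            return;

        /* Quiesce device access before dropping the mapping. */
        importer_stop_dma(imp);
        imp->vma->vm_ops->p2p_unmap(imp->vma, imp->dev,
                                    imp->start, imp->end, imp->pa);
    }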

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Rafael J. Wysocki <rafael@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: iommu@lists.linux-foundation.org
---
 include/linux/mm.h | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..1bd60a90e575 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -429,6 +429,44 @@ struct vm_operations_struct {
 			pgoff_t start_pgoff, pgoff_t end_pgoff);
 	unsigned long (*pagesize)(struct vm_area_struct * area);
 
+	/*
+	 * Optional for device drivers that want to allow peer to peer (p2p)
+	 * mapping of their vma (which can be backed by some device memory)
+	 * to another device.
+	 *
+	 * Note that the exporting device driver might not have mapped
+	 * anything inside the vma for the CPU but might still want to allow
+	 * a peer device to access the range of memory corresponding to a
+	 * range in that vma.
+	 *
+	 * FOR PREDICTABILITY, IF A DRIVER SUCCESSFULLY MAPS A RANGE ONCE
+	 * FOR A DEVICE, THEN FURTHER MAPPINGS OF THE SAME RANGE (IF THE VMA
+	 * IS STILL VALID) SHOULD ALSO BE SUCCESSFUL. Following this rule
+	 * allows the importing device to map once during setup and report
+	 * any failure at that time to userspace. Further mappings of the
+	 * same range might happen after mmu notifier invalidation over the
+	 * range. The exporting device can use this to move things around
+	 * (defrag BAR space for instance) or to do other similar tasks.
+	 *
+	 * THE IMPORTER MUST OBEY mmu_notifier NOTIFICATIONS AND CALL
+	 * p2p_unmap() WHEN A NOTIFIER IS CALLED FOR THE RANGE! THIS CAN
+	 * HAPPEN AT ANY POINT IN TIME WITH NO LOCK HELD.
+	 *
+	 * In the functions below, the device argument is the importing
+	 * device; the exporting device is the device to which the vma belongs.
+	 */
+	long (*p2p_map)(struct vm_area_struct *vma,
+			struct device *device,
+			unsigned long start,
+			unsigned long end,
+			dma_addr_t *pa,
+			bool write);
+	long (*p2p_unmap)(struct vm_area_struct *vma,
+			  struct device *device,
+			  unsigned long start,
+			  unsigned long end,
+			  dma_addr_t *pa);
+
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
 	vm_fault_t (*page_mkwrite)(struct vm_fault *vmf);
-- 
2.17.2

