From: "Christian König" <christian.koenig@amd.com>
To: Jason Gunthorpe <jgg@mellanox.com>, lsf-pc@lists.linux-foundation.org
Cc: linux-mm@kvack.org, linux-pci@vger.kernel.org,
	linux-rdma@vger.kernel.org,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	"Logan Gunthorpe" <logang@deltatee.com>,
	"Stephen Bates" <sbates@raithlin.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Ira Weiny" <iweiny@intel.com>, "Christoph Hellwig" <hch@lst.de>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Don Dutile" <ddutile@redhat.com>
Subject: Re: [LSF/MM TOPIC] get_user_pages() for PCI BAR Memory
Date: Sat, 8 Feb 2020 14:10:59 +0100
Message-ID: <20e3149e-4240-13e7-d16e-3975cfbe4d38@amd.com>
In-Reply-To: <20200207182457.GM23346@mellanox.com>

On 07.02.20 at 19:24, Jason Gunthorpe wrote:
> Many systems can now support direct DMA between two PCI devices, for
> instance between a RDMA NIC and a NVMe CMB, or a RDMA NIC and GPU
> graphics memory. In many system architectures this peer-to-peer PCI-E
> DMA transfer is critical to achieving performance as there is simply
> not enough system memory/PCI-E bandwidth for data traffic to go
> through the CPU socket.
>
> For many years various out-of-tree solutions have existed to serve
> this need. Recently some components have been accepted into mainline,
> such as the p2pdma system, which allows co-operating drivers to set up
> P2P DMA transfers at the PCI level. This has allowed some kernel P2P
> DMA transfers related to NVMe CMB and RDMA to become supported.
>
> A major next step is to enable P2P transfers under userspace
> control. This is a very broad topic, but for this session I propose to
> focus on the initial case where supporting drivers can set up a P2P
> transfer from a PCI BAR page mmap'd to userspace. This is the basic
> starting point for future discussions on how to adapt get_user_pages()
> IO paths (i.e. O_DIRECT, net zero-copy TX, RDMA, etc.) to support PCI
> BAR memory.
>
> As all current drivers doing DMA from user space must go through
> get_user_pages() (or its new sibling hmm_range_fault()), some
> extension of the get_user_pages() API is needed to allow drivers
> supporting P2P to see the pages.
>
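For reference, this is roughly the pinning path such drivers use today; a
minimal sketch with illustrative names (pin_user_buffer() is hypothetical,
get_user_pages_fast() and FOLL_WRITE are the real API). The point is that a
VMA created by io_remap_pfn_range() is VM_PFNMAP/VM_IO with no struct pages
behind its PTEs, so this call fails for BAR mappings:

  #include <linux/mm.h>

  /* Pin nr_pages of user memory at user_addr for DMA (sketch).
   * For a BAR mapped via io_remap_pfn_range() there are no struct
   * pages behind the PTEs, so GUP returns an error instead. */
  static int pin_user_buffer(unsigned long user_addr, int nr_pages,
                             struct page **pages)
  {
          int pinned;

          pinned = get_user_pages_fast(user_addr, nr_pages,
                                       FOLL_WRITE, pages);
          if (pinned < 0)
                  return pinned;  /* BAR-backed VMAs end up here */

          if (pinned != nr_pages) {
                  /* Partial pin: drop what we got and bail out. */
                  while (pinned)
                          put_page(pages[--pinned]);
                  return -EFAULT;
          }
          return 0;
  }
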
> get_user_pages() will require some 'struct page' and 'struct
> vm_area_struct' representation of the BAR memory beyond what today's
> io_remap_pfn_range()/etc produces.
>
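For orientation, the merged p2pdma code already shows one way to build
struct pages for a BAR: pci_p2pdma_add_resource() hands the BAR to
devm_memremap_pages() internally, creating ZONE_DEVICE pages. A sketch of
how an NVMe-style driver might register a CMB (bar and size are
placeholders, error handling trimmed):

  #include <linux/pci-p2pdma.h>

  /* Give a device's CMB BAR ZONE_DEVICE struct pages and publish it
   * so other co-operating in-kernel drivers can allocate from it. */
  static int register_cmb(struct pci_dev *pdev, int bar, size_t size)
  {
          int rc;

          rc = pci_p2pdma_add_resource(pdev, bar, size, 0);
          if (rc)
                  return rc;

          pci_p2pmem_publish(pdev, true);
          return 0;
  }
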
> This topic has been discussed in small groups at various conferences
> over the last year (Plumbers, ALPSS, LSF/MM 2019, etc.). Having a
> larger group together would be productive, especially as the direction
> has a notable impact on the general mm.
>
> For patch sets, we've seen a number of attempts so far, but little has
> been merged yet. Common elements of past discussions have been:
>   - Building struct page for BAR memory
>   - Stuffing BAR memory into scatter/gather lists, bios and skbs
>   - DMA mapping BAR memory (see the mapping sketch after this list)
>   - Referencing BAR memory without a struct page
>   - Managing lifetime of BAR memory across multiple drivers
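
On the DMA-mapping item, the existing p2pdma helpers already sidestep the
usual dma_map_sg()/IOMMU path: they verify the two devices share a usable
PCI path and fill in bus addresses directly. A sketch using those mainline
helpers, assuming sgl already points at p2pdma (ZONE_DEVICE) pages:

  #include <linux/pci-p2pdma.h>
  #include <linux/scatterlist.h>

  /* Map a scatterlist of P2P pages from 'provider' for DMA by 'client'.
   * Returns the number of mapped entries (0 on mapping failure). */
  static int map_p2p_sgl(struct pci_dev *provider, struct device *client,
                         struct scatterlist *sgl, int nents)
  {
          /* Reject device pairs with no usable P2P path between them
           * (verbose=true logs the reason). */
          if (pci_p2pdma_distance(provider, client, true) < 0)
                  return -EINVAL;

          return pci_p2pdma_map_sg(client, sgl, nents, DMA_BIDIRECTIONAL);
  }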

I can only repeat Jérôme's point that this most likely will never work
correctly with get_user_pages().

One of the main issues is that if you want to cover all use cases, you
also need to take into account P2P operations which are hidden from the CPU.

E.g. you have memory which is not even CPU addressable, but can be
shared between GPUs using XGMI, NVLink, SLI, etc.

Since you can't get a struct page for something the CPU can't even have
an address for, the whole idea of using get_user_pages() fails from the
very beginning.

That's also the reason why for GPUs we opted to use DMA-buf based 
sharing of buffers between drivers instead.
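
For those less familiar with that model, a minimal importer-side sketch
(import_buffer() is a hypothetical helper; the dma_buf_* calls are the real
API). The importer only ever sees a DMA-mapped sg_table, so the exporter is
free to back the buffer with memory the CPU cannot address at all:

  #include <linux/dma-buf.h>

  /* Attach to a buffer exported by another driver and obtain DMA
   * addresses for it. No struct page or CPU-visible address is
   * required; the backing may be XGMI/NVLink-only memory. */
  static struct sg_table *import_buffer(struct dma_buf *dmabuf,
                                        struct device *dev,
                                        struct dma_buf_attachment **attach_out)
  {
          struct dma_buf_attachment *attach;
          struct sg_table *sgt;

          attach = dma_buf_attach(dmabuf, dev);
          if (IS_ERR(attach))
                  return ERR_CAST(attach);

          sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
          if (IS_ERR(sgt)) {
                  dma_buf_detach(dmabuf, attach);
                  return sgt;
          }

          *attach_out = attach;
          return sgt;
  }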

So we need to figure out how to express DMA addresses outside of the CPU
address space first, before we can even think about something like
extending get_user_pages() for P2P in an HMM scenario.

Regards,
Christian.

>
> Based on past work, the people in the CC list would be recommended
> participants:
>
>   Christian König <christian.koenig@amd.com>
>   Daniel Vetter <daniel.vetter@ffwll.ch>
>   Logan Gunthorpe <logang@deltatee.com>
>   Stephen Bates <sbates@raithlin.com>
>   Jérôme Glisse <jglisse@redhat.com>
>   Ira Weiny <iweiny@intel.com>
>   Christoph Hellwig <hch@lst.de>
>   John Hubbard <jhubbard@nvidia.com>
>   Ralph Campbell <rcampbell@nvidia.com>
>   Dan Williams <dan.j.williams@intel.com>
>   Don Dutile <ddutile@redhat.com>
>
> Regards,
> Jason
>
> Description of the p2pdma work:
>   https://lwn.net/Articles/767281/
>
> Discussion slot at Plumbers:
>   https://linuxplumbersconf.org/event/4/contributions/369/
>
> DRM work on DMABUF as a user facing object for P2P:
>   https://www.spinics.net/lists/amd-gfx/msg32469.html



Thread overview: 10+ messages
2020-02-07 18:24 [LSF/MM TOPIC] get_user_pages() for PCI BAR Memory Jason Gunthorpe
2020-02-07 19:46 ` Matthew Wilcox
2020-02-07 20:13   ` Jason Gunthorpe
2020-02-07 20:42     ` Matthew Wilcox
2020-02-14 10:35       ` Michal Hocko
2020-02-08 13:10 ` Christian König [this message]
2020-02-08 13:54   ` Jason Gunthorpe
2020-02-08 16:38     ` Christian König
2020-02-08 17:43       ` Jason Gunthorpe
2020-02-10 18:39         ` Logan Gunthorpe
