From: Matthew Wilcox <willy@infradead.org>
To: Jason Gunthorpe <jgg@mellanox.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
	linux-pci@vger.kernel.org, linux-rdma@vger.kernel.org,
	"Christian König" <christian.koenig@amd.com>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	"Logan Gunthorpe" <logang@deltatee.com>,
	"Stephen Bates" <sbates@raithlin.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Ira Weiny" <iweiny@intel.com>, "Christoph Hellwig" <hch@lst.de>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Don Dutile" <ddutile@redhat.com>,
	"Thomas Hellström (VMware)" <thomas_os@shipmail.org>,
	"Joao Martins" <joao.m.martins@oracle.com>
Subject: Re: [LSF/MM TOPIC] get_user_pages() for PCI BAR Memory
Date: Fri, 7 Feb 2020 11:46:20 -0800
Message-ID: <20200207194620.GG8731@bombadil.infradead.org>
In-Reply-To: <20200207182457.GM23346@mellanox.com>

On Fri, Feb 07, 2020 at 02:24:57PM -0400, Jason Gunthorpe wrote:
> Many systems can now support direct DMA between two PCI devices, for
> instance between a RDMA NIC and a NVMe CMB, or a RDMA NIC and GPU
> graphics memory. In many system architectures this peer-to-peer PCI-E
> DMA transfer is critical to achieving performance as there is simply
> not enough system memory/PCI-E bandwidth for data traffic to go
> through the CPU socket.
> 
> For many years, various out-of-tree solutions have existed to serve
> this need. Recently some components have been accepted into mainline,
> such as the p2pdma system, which allows co-operating drivers to set up
> P2P DMA transfers at the PCI level. This has allowed some kernel P2P
> DMA transfers related to NVMe CMB and RDMA to become supported.
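
For reference, a minimal sketch of that p2pdma API: the provider side
donating a slice of a BAR (as the NVMe driver does for its CMB), plus a
client allocation. Error handling is trimmed and the wrapper function
names are placeholders:

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/* Provider: hand a slice of a BAR to the p2pdma allocator. */
static int donate_bar(struct pci_dev *pdev, int bar, size_t size)
{
        int rc;

        rc = pci_p2pdma_add_resource(pdev, bar, size, 0);
        if (rc)
                return rc;

        /* Allow clients to discover this memory via pci_p2pmem_find(). */
        pci_p2pmem_publish(pdev, true);
        return 0;
}

/* Client: find a publisher reachable from 'client' and allocate. */
static void *alloc_p2pmem_for(struct device *client, size_t len,
                              struct pci_dev **provider)
{
        *provider = pci_p2pmem_find(client);
        if (!*provider)
                return NULL;

        return pci_alloc_p2pmem(*provider, len);
}
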
> 
> A major next step is to enable P2P transfers under userspace
> control. This is a very broad topic, but for this session I propose to
> focus on the initial case: how supporting drivers can set up a P2P
> transfer from PCI BAR pages mmap'd to userspace. This is the basic
> starting point for future discussions on how to adapt get_user_pages()
> IO paths (i.e. O_DIRECT, net zero-copy TX, RDMA, etc.) to support PCI
> BAR memory.
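
To make the target concrete, here is the userspace shape of that starting
point as a sketch (error handling omitted). /dev/example_p2pmem is a
hypothetical char device that mmaps PCI BAR memory; today the read()
below fails precisely because get_user_pages() cannot handle such a
mapping:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        size_t len = 1 << 20;

        int bar_fd = open("/dev/example_p2pmem", O_RDWR); /* hypothetical */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, bar_fd, 0); /* PCI BAR pages */

        int file_fd = open("/mnt/nvme/data", O_RDONLY | O_DIRECT);

        /*
         * O_DIRECT pins the user buffer with get_user_pages(); teaching
         * that path to accept BAR-backed pages is the topic here.
         */
        ssize_t n = read(file_fd, buf, len);

        return n < 0;
}
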
> 
> As all current drivers doing DMA from user space must go through
> get_user_pages() (or its new sibling hmm_range_fault()), some
> extension of the get_user_pages() API is needed to allow drivers
> supporting P2P to see the pages.
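
A sketch of what such an opt-in could look like from a driver that can
handle P2P pages. The FOLL_PCI_P2PDMA flag is hypothetical here, and
whether the knob ends up as a gup flag, a new gup variant, or an
hmm_range_fault() flag is part of the question:

#include <linux/mm.h>

static int pin_p2p_capable(unsigned long uaddr, int npages,
                           struct page **pages)
{
        /*
         * Without an explicit opt-in, gup must keep refusing BAR pages,
         * since most callers cannot DMA-map or kmap them safely.
         */
        return get_user_pages_fast(uaddr, npages,
                                   FOLL_WRITE | FOLL_PCI_P2PDMA, pages);
}
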
> 
> get_user_pages() will require some 'struct page' and 'struct
> vm_area_struct' representation of the BAR memory beyond what today's
> io_remap_pfn_range()/etc produces.
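
For background, p2pdma already builds struct pages for its BARs as
ZONE_DEVICE memory via devm_memremap_pages() (see drivers/pci/p2pdma.c).
A condensed sketch, using the v5.5-era dev_pagemap fields (details shift
between releases):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>
#include <linux/pci.h>

static void *bar_to_pages(struct pci_dev *pdev, int bar)
{
        struct dev_pagemap *pgmap;

        pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL);
        if (!pgmap)
                return ERR_PTR(-ENOMEM);

        pgmap->res.start = pci_resource_start(pdev, bar);
        pgmap->res.end = pci_resource_end(pdev, bar);
        pgmap->res.flags = pci_resource_flags(pdev, bar);
        pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;

        /*
         * Creates struct pages covering the BAR; the open question is
         * the VMA side, i.e. getting these pages installed by mmap so
         * get_user_pages() can find them.
         */
        return devm_memremap_pages(&pdev->dev, pgmap);
}
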
> 
> This topic has been discussed in small groups at various conferences
> over the last year (Plumbers, ALPSS, LSF/MM 2019, etc). Having a
> larger group together would be productive, especially as the direction
> has a notable impact on the general mm.
> 
> For patch sets, we've seen a number of attempts so far, but little has
> been merged yet. Common elements of past discussions have been:
>  - Building struct page for BAR memory
>  - Stuffing BAR memory into scatter/gather lists, bios and skbs
>  - DMA mapping BAR memory (a sketch follows this list)
>  - Referencing BAR memory without a struct page
>  - Managing lifetime of BAR memory across multiple drivers
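
Picking up the DMA-mapping item above: v5.5-era mainline already has a
helper that picks between a PCI bus address (peers under the same switch)
and the usual IOMMU path, but the caller must know its scatterlist is
entirely P2P memory. A minimal sketch:

#include <linux/dma-mapping.h>
#include <linux/pci-p2pdma.h>
#include <linux/scatterlist.h>

static int map_p2p_sgl(struct device *dma_dev, struct scatterlist *sgl,
                       int nents)
{
        /* Returns the number of mapped entries, 0 on failure. */
        return pci_p2pdma_map_sg(dma_dev, sgl, nents, DMA_BIDIRECTIONAL);
}
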
> 
> Based on past work, the people in the CC list would be recommended
> participants:
> 
>  Christian König <christian.koenig@amd.com>
>  Daniel Vetter <daniel.vetter@ffwll.ch>
>  Logan Gunthorpe <logang@deltatee.com>
>  Stephen Bates <sbates@raithlin.com>
>  Jérôme Glisse <jglisse@redhat.com>
>  Ira Weiny <iweiny@intel.com>
>  Christoph Hellwig <hch@lst.de>
>  John Hubbard <jhubbard@nvidia.com>
>  Ralph Campbell <rcampbell@nvidia.com>
>  Dan Williams <dan.j.williams@intel.com>
>  Don Dutile <ddutile@redhat.com>

That's a long list, and you're missing 

"Thomas Hellström (VMware)" <thomas_os@shipmail.org>
Joao Martins <joao.m.martins@oracle.com>

both of whom have been working on related projects (for PFNs without pages).
Hey, you missed me too!  ;-)


