dri-devel.lists.freedesktop.org archive mirror
 help / color / mirror / Atom feed
From: Daniel Vetter <daniel@ffwll.ch>
To: John Hubbard <jhubbard@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>,
	linux-rdma <linux-rdma@vger.kernel.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>,
	Doug Ledford <dledford@redhat.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Daniel Vetter <daniel.vetter@intel.com>,
	Christian Koenig <christian.koenig@amd.com>,
	Jianxin Xiong <jianxin.xiong@intel.com>
Subject: Re: [PATCH v16 0/4] RDMA: Add dma-buf support
Date: Fri, 5 Feb 2021 16:39:47 +0100	[thread overview]
Message-ID: <YB1mw/uYwueFwUdh@phenom.ffwll.local> (raw)
In-Reply-To: <8e731fce-95c1-4ace-d8bc-dc0df7432d22@nvidia.com>

On Thu, Feb 04, 2021 at 11:00:32AM -0800, John Hubbard wrote:
> On 2/4/21 10:44 AM, Alex Deucher wrote:
> ...
> > > > The argument is that vram is a scarce resource, but I don't know if
> > > > that is really the case these days.  At this point, we often have as
> > > > much vram as system ram if not more.
> > > 
> > > I thought the main argument was that GPU memory could move at any time
> > > between the GPU and CPU and the DMA buf would always track its current
> > > location?
> > 
> > I think the reason for that is that VRAM is scarce so we have to be
> > able to move it around.  We don't enforce the same limitations for
> > buffers in system memory.  We could just support pinning dma-bufs in
> > vram like we do with system ram.  Maybe with some conditions, e.g.,
> > p2p is possible, and the device has a large BAR so you aren't tying up
> > the BAR window.

Minimally we need cgroups for that vram, so it can be managed. Which is a
bit stuck unfortunately. But if we have cgroups with some pin limit, I
think we can easily lift this.

> Excellent. And yes, we are already building systems in which VRAM is
> definitely not scarce, but on the other hand, those newer systems can
> also handle GPU (and NIC) page faults, so not really an issue. For that,
> we just need to enhance HMM so that it does peer to peer.
> 
> We also have some older hardware with large BAR1 apertures, specifically
> for this sort of thing.
> 
> And again, for slightly older hardware, without pinning to VRAM there is
> no way to use this solution here for peer-to-peer. So I'm glad to see that
> so far you're not ruling out the pinning option.

Since HMM and ZONE_DEVICE came up, I'm kinda tempted to make ZONE_DEVICE
ZONE_MOVEABLE (at least if you don't have a pinned vram contigent in your
cgroups) or something like that, so we could benefit from the work to make
sure pin_user_pages and all these never end up in there?

https://lwn.net/Articles/843326/

Kind inspired by the recent lwn article.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

  reply	other threads:[~2021-02-05 15:39 UTC|newest]

Thread overview: 24+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2020-12-15 21:27 [PATCH v16 0/4] RDMA: Add dma-buf support Jianxin Xiong
2020-12-15 21:27 ` [PATCH v16 1/4] RDMA/umem: Support importing dma-buf as user memory region Jianxin Xiong
2020-12-15 21:27 ` [PATCH v16 2/4] RDMA/core: Add device method for registering dma-buf based " Jianxin Xiong
2020-12-15 21:27 ` [PATCH v16 3/4] RDMA/uverbs: Add uverbs command for dma-buf based MR registration Jianxin Xiong
2020-12-15 21:27 ` [PATCH v16 4/4] RDMA/mlx5: Support dma-buf based userspace memory region Jianxin Xiong
2021-01-11 15:24 ` [PATCH v16 0/4] RDMA: Add dma-buf support Xiong, Jianxin
2021-01-11 15:42   ` Jason Gunthorpe
2021-01-11 17:44     ` Xiong, Jianxin
2021-01-11 17:47       ` Alex Deucher
2021-01-11 17:55         ` Xiong, Jianxin
2021-01-12 12:49           ` Yishai Hadas
2021-01-12 18:11             ` Xiong, Jianxin
2021-01-21 16:59 ` Jason Gunthorpe
2021-02-04  7:48 ` John Hubbard
2021-02-04 13:50   ` Alex Deucher
2021-02-04 18:29     ` Jason Gunthorpe
2021-02-04 18:44       ` Alex Deucher
2021-02-04 19:00         ` John Hubbard
2021-02-05 15:39           ` Daniel Vetter [this message]
2021-02-05 15:43             ` Jason Gunthorpe
2021-02-05 15:53               ` Daniel Vetter
2021-02-05 16:00                 ` Jason Gunthorpe
2021-02-05 16:06                   ` Daniel Vetter
2021-02-05 20:24                 ` John Hubbard

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=YB1mw/uYwueFwUdh@phenom.ffwll.local \
    --to=daniel@ffwll.ch \
    --cc=christian.koenig@amd.com \
    --cc=daniel.vetter@intel.com \
    --cc=dledford@redhat.com \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=jgg@nvidia.com \
    --cc=jhubbard@nvidia.com \
    --cc=jianxin.xiong@intel.com \
    --cc=leon@kernel.org \
    --cc=linux-rdma@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).