From: "Christian König" <christian.koenig@amd.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, John Hubbard <jhubbard@nvidia.com>
Cc: Gal Pressman <galpress@amazon.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Doug Ledford <dledford@redhat.com>,
	"open list:DMA BUFFER SHARING FRAMEWORK" 
	<linux-media@vger.kernel.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-rdma <linux-rdma@vger.kernel.org>,
	Oded Gabbay <ogabbay@habana.ai>, Tomer Tayar <ttayar@habana.ai>,
	Yossi Leybovich <sleybo@amazon.com>,
	Alexander Matushevsky <matua@amazon.com>,
	Leon Romanovsky <leonro@nvidia.com>,
	Jianxin Xiong <jianxin.xiong@intel.com>
Subject: Re: [RFC] Make use of non-dynamic dmabuf in RDMA
Date: Wed, 25 Aug 2021 08:17:51 +0200	[thread overview]
Message-ID: <a716ae41-2d8c-c75a-a779-cc85b189fea2@amd.com> (raw)
In-Reply-To: <20210824173228.GE543798@ziepe.ca>



Am 24.08.21 um 19:32 schrieb Jason Gunthorpe:
> On Tue, Aug 24, 2021 at 10:27:23AM -0700, John Hubbard wrote:
>> On 8/24/21 2:32 AM, Christian König wrote:
>>> Am 24.08.21 um 11:06 schrieb Gal Pressman:
>>>> On 23/08/2021 13:43, Christian König wrote:
>>>>> Am 21.08.21 um 11:16 schrieb Gal Pressman:
>>>>>> On 20/08/2021 17:32, Jason Gunthorpe wrote:
>>>>>>> On Fri, Aug 20, 2021 at 03:58:33PM +0300, Gal Pressman wrote:
>> ...
>>>>>> IIUC, we're talking about three different exporter "types":
>>>>>> - Dynamic with move_notify (requires ODP)
>>>>>> - Dynamic with revoke_notify
>>>>>> - Static
>>>>>>
>>>>>> Which changes do we need to make the third one work?
>>>>> Basically none at all in the framework.
>>>>>
>>>>> You just need to properly use the dma_buf_pin() function when you start using a
>>>>> buffer (e.g. before you create an attachment) and the dma_buf_unpin() function
>>>>> after you are done with the DMA-buf.
>>>> I replied to your previous mail, but I'll ask again.
>>>> Doesn't the pin operation migrate the memory to host memory?
>>> Sorry, I missed your previous reply.
>>>
>>> And yes, at least for the amdgpu driver we migrate the memory to host
>>> memory as soon as it is pinned, and I would expect that other GPU
>>> drivers do something similar.
>> Well...for many topologies, migrating to host memory will result in a
>> dramatically slower p2p setup. For that reason, some GPU drivers may
>> want to allow pinning of video memory in some situations.
>>
>> Ideally, you've got modern ODP devices and you don't even need to pin.
>> But if not, and you still hope to do high performance p2p between a GPU
>> and a non-ODP Infiniband device, then you would need to leave the pinned
>> memory in vidmem.
>>
>> So I think we don't want to rule out that behavior, right? Or is the
>> thinking more like, "you're lucky that this old non-ODP setup works at
>> all, and we'll make it work by routing through host/cpu memory, but it
>> will be slow"?
> I think it depends on the user: if the user creates memory which is
> permanently located on the GPU, then it should be pinnable in this way
> without forced migration. But if the memory is inherently migratable,
> then it just cannot be pinned on the GPU at all, as we can't
> indefinitely block migration from happening, e.g. if the CPU touches
> it later.

Yes, exactly, that's the point. GPUs in particular come in a great 
variety of setups.

For example, we have APUs where the local memory is just stolen system 
memory, and all buffers must be migratable because you might need all 
of this stolen memory for scanout or page tables. In this case P2P only 
makes sense to avoid the migration overhead in the first place.

Then you have dGPUs where only a fraction of the VRAM is accessible from 
the PCIe bus. Here you also absolutely don't want to pin any buffers, 
because that can easily crash when we need to migrate something into the 
visible window for CPU access.

The only real case where you could do P2P with buffer pinning is those 
compute boards where we know that everything is always accessible to 
everybody and we will never need to migrate anything. But even then you 
want some mechanism like cgroups to take care of limiting this; 
otherwise any runaway process can bring down your whole system.

The key question, at least for me as a GPU maintainer, is whether we are 
going to see modern compute boards together with old non-ODP setups. 
Since those compute boards are usually used with new hardware (PCIe v4, 
for example), I think the answer is most likely "no".

Christian.

>
> Jason


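[Editorial note: the static/pinned importer flow discussed in the quoted
exchange above (pin the buffer, map it, and unpin only once the device is
done) maps onto the DMA-buf importer API roughly as sketched below. The
ordering here (attach, then pin under the reservation lock, then map)
follows the in-kernel dma_buf_pin()/dma_buf_unpin() signatures, which
operate on an attachment. This is a minimal illustration, not code from
any driver in this thread; the example_* names, the stub move_notify
callback, and the allow_peer2peer setting are assumptions made for the
sketch.]

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/dma-direction.h>
#include <linux/device.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

/*
 * Illustrative stub: a pinned importer never expects the exporter to move
 * the buffer, so move_notify should not fire once dma_buf_pin() succeeded.
 */
static void example_move_notify(struct dma_buf_attachment *attach)
{
	WARN_ONCE(1, "move_notify on a pinned DMA-buf attachment\n");
}

static const struct dma_buf_attach_ops example_attach_ops = {
	.allow_peer2peer = true, /* assumption: the device can do PCIe P2P */
	.move_notify = example_move_notify,
};

/* Attach, pin and map a DMA-buf for a device that cannot handle invalidation. */
static struct sg_table *example_pin_and_map(struct dma_buf *dmabuf,
					    struct device *dev,
					    struct dma_buf_attachment **out_attach)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret;

	attach = dma_buf_dynamic_attach(dmabuf, dev, &example_attach_ops, NULL);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* dma_buf_pin() and the dynamic map path need the reservation lock. */
	dma_resv_lock(dmabuf->resv, NULL);
	ret = dma_buf_pin(attach);
	if (ret) {
		sgt = ERR_PTR(ret);
		goto out_unlock;
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		dma_buf_unpin(attach);

out_unlock:
	dma_resv_unlock(dmabuf->resv);
	if (IS_ERR(sgt))
		dma_buf_detach(dmabuf, attach);
	else
		*out_attach = attach;
	return sgt;
}

/* Teardown mirrors the setup: unmap, unpin, detach. */
static void example_unpin_and_unmap(struct dma_buf *dmabuf,
				    struct dma_buf_attachment *attach,
				    struct sg_table *sgt)
{
	dma_resv_lock(dmabuf->resv, NULL);
	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_unpin(attach);
	dma_resv_unlock(dmabuf->resv);
	dma_buf_detach(dmabuf, attach);
}
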

Thread overview: 33+ messages
2021-08-18  7:43 [RFC] Make use of non-dynamic dmabuf in RDMA Gal Pressman
2021-08-18  8:00 ` Christian König
2021-08-18  8:37   ` Gal Pressman
2021-08-18  9:34 ` Daniel Vetter
2021-08-19 23:06   ` Jason Gunthorpe
2021-08-20  7:25     ` Daniel Vetter
2021-08-20 12:33       ` Jason Gunthorpe
2021-08-20 12:58         ` Gal Pressman
2021-08-20 14:32           ` Jason Gunthorpe
2021-08-21  9:16             ` Gal Pressman
2021-08-23 10:43               ` Christian König
2021-08-24  9:06                 ` Gal Pressman
2021-08-24  9:32                   ` Christian König
2021-08-24 17:27                     ` John Hubbard
2021-08-24 17:32                       ` Jason Gunthorpe
2021-08-24 17:35                         ` John Hubbard
2021-08-24 19:15                           ` Dave Airlie
2021-08-24 19:30                             ` Jason Gunthorpe
2021-08-24 19:43                             ` Alex Deucher
2021-08-24 20:00                               ` Xiong, Jianxin
2021-08-25  6:17                         ` Christian König [this message]
2021-08-25  6:47                           ` John Hubbard
2021-08-25 12:18                           ` Jason Gunthorpe
2021-08-25 12:27                             ` Christian König
2021-08-25 12:38                               ` Jason Gunthorpe
2021-08-25 13:51                                 ` Christian König
2021-08-25 14:47                                   ` Jason Gunthorpe
2021-08-25 15:14                                     ` Christian König
2021-08-25 15:49                                       ` Jason Gunthorpe
2021-08-25 16:02                                       ` Oded Gabbay
2021-09-01 11:20                         ` Gal Pressman
2021-09-01 11:24                           ` Christian König
2021-09-02  6:56                             ` Gal Pressman
