From: "Christian König" <christian.koenig@amd.com>
To: Danilo Krummrich <dakr@redhat.com>,
	airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de,
	mripard@kernel.org, corbet@lwn.net, bskeggs@redhat.com,
	Liam.Howlett@oracle.com, matthew.brost@intel.com,
	boris.brezillon@collabora.com, alexdeucher@gmail.com,
	ogabbay@kernel.org, bagasdotme@gmail.com, willy@infradead.org,
	jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
	linux-doc@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Donald Robson <donald.robson@imgtec.com>,
	Dave Airlie <airlied@redhat.com>
Subject: Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings
Date: Fri, 23 Jun 2023 17:34:19 +0200	[thread overview]
Message-ID: <923e914f-d367-2f74-9774-f0024d946bdd@amd.com> (raw)
In-Reply-To: <d18a4ea5-4e8e-be69-84c3-ca658f5cfd24@redhat.com>

Am 23.06.23 um 15:55 schrieb Danilo Krummrich:
> [SNIP]
>>>>> How do you efficiently find only the mappings of a BO in one VM?
>>>>
>>>> Actually, I think this case should even be more efficient than with 
>>>> a BO having a list of GPUVAs (or mappings):
>>>
>>> *than with a BO having a list of VMs:
>>>
>>>>
>>>> Having a list of GPUVAs per GEM, each GPUVA has a pointer to its
>>>> VM. Hence, you'd only need to iterate the list of mappings for a
>>>> given BO and check each mapping's VM pointer.
>>
>> Yeah, and that is extremely time consuming if you have tons of 
>> mappings in different VMs.
>>
>>>>
>>>> Having a list of VMs per BO, you'd have to iterate the whole VM to 
>>>> find the mappings having a pointer to the given BO, right?
>>
>> No, you don't seem to understand what I'm suggesting.
>>
>> Currently you have a list of mappings attached to the BO, so when you
>> need to make sure that a specific BO is up to date in a specific VM,
>> you either need to iterate over the VM or over the BO. Neither of
>> those is a good idea.
>>
>> What you need is a representation of the data used for each BO+VM
>> combination. In other words, another indirection which allows you to
>> handle all the mappings of a BO inside a VM at once.
>
> Ok, after having a quick look at amdgpu, I can see what you mean.
>
> The missing piece for me was that the BO+VM abstraction itself keeps a 
> list of mappings for this specific BO and VM.
>
> Just to make it obvious for other people following the discussion, let
> me quickly sketch how this approach would look for the GPUVA manager:
>
> 1. We would need a new structure to represent the BO+VM combination, 
> something like:
>
>     struct drm_gpuva_mgr_gem {
>             struct drm_gpuva_manager *mgr;  /* the VM this entry belongs to */
>             struct drm_gem_object *obj;     /* the backing GEM object (BO) */
>             struct list_head gpuva_list;    /* GPUVAs of obj within mgr */
>     };
>
> with a less horrible name, hopefully.
>
> 2. Create an instance of struct drm_gpuva_mgr_gem once a GEM becomes
> associated with a GPUVA manager (VM) and attach it to the GEM's
> existing "gpuva" list.
>
> In amdgpu, for example, this seems to be the case once a GEM object is 
> opened, since there is one VM per file_priv.
>
> However, for other drivers this could be different, hence drivers
> would need to take care of this.

Yes, exactly that.

>
>
> 3. Attach GPUVAs to the new gpuva_list of the corresponding instance of
> struct drm_gpuva_mgr_gem.
>
> 4. Drivers would need to clean up the instance of struct
> drm_gpuva_mgr_gem once the GEM is not associated with the GPUVA
> manager anymore (see the rough sketch below).
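>
> A rough sketch of how (2.) - (4.) could look in code. All function and
> member names below are made up for illustration (none of this is
> existing DRM API), it assumes an additional list_head "gem_entry" in
> struct drm_gpuva_mgr_gem linking it into the GEM's "gpuva" list, and
> locking is omitted for brevity:
>
>     /* Find the BO+VM abstraction for @obj in @mgr, or create it. */
>     struct drm_gpuva_mgr_gem *
>     drm_gpuva_mgr_gem_obtain(struct drm_gpuva_manager *mgr,
>                              struct drm_gem_object *obj)
>     {
>             struct drm_gpuva_mgr_gem *mgr_gem;
>
>             /* The GEM's "gpuva" list would now contain
>              * drm_gpuva_mgr_gem entries instead of single GPUVAs.
>              */
>             list_for_each_entry(mgr_gem, &obj->gpuva.list, gem_entry)
>                     if (mgr_gem->mgr == mgr)
>                             return mgr_gem;
>
>             mgr_gem = kzalloc(sizeof(*mgr_gem), GFP_KERNEL);
>             if (!mgr_gem)
>                     return ERR_PTR(-ENOMEM);
>
>             mgr_gem->mgr = mgr;
>             mgr_gem->obj = obj;
>             INIT_LIST_HEAD(&mgr_gem->gpuva_list);
>
>             /* Attach to the GEM's "gpuva" list (2.). */
>             list_add_tail(&mgr_gem->gem_entry, &obj->gpuva.list);
>
>             return mgr_gem;
>     }
>
> New GPUVAs would then be linked into mgr_gem->gpuva_list (3.), and once
> that list runs empty the driver would unlink and free the
> drm_gpuva_mgr_gem again (4.).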
>
> As pointed out by Christian, this would optimize the "get all mappings 
> backed by a specific BO from a given VM" use case.
>
> The question for me is, do other drivers than amdgpu commonly need this?

I have no idea.

>
> And what does amdgpu need this for? Maybe amdgpu does something we're 
> not doing (yet)?

Basically, when we do a CS, we need to make sure that the VM used by
this CS is up to date. For this, we walk over the relocation list of BOs
and check the status of each BO+VM structure.

This is done because we don't want to update all VMs at the same time, 
but rather just those that need the update.
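
To illustrate the idea, heavily simplified (these are not the actual
amdgpu function names, just the shape of the check):

    /* Look only at the BOs referenced by this CS, and only at their
     * state in this VM; all other VMs are left untouched.
     */
    list_for_each_entry(entry, &cs->relocations, list) {
            bo_vm = lookup_bo_vm_state(entry->bo, vm);
            if (bo_vm_is_stale(bo_vm))
                    update_vm_mappings(vm, bo_vm);
    }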

>
> Christian - I know you didn't ask for "do it the way amdgpu does";
> instead, you voted for keeping it entirely driver-specific. But I think
> everyone is pretty close, and I'm still optimistic that we could just
> generalize this.

Well, you should *not* necessarily do it the way amdgpu does! The
implementation in amdgpu was simply driven by its requirements: "we need
that, so let's do it like this."

It's perfectly possible that other requirements (e.g. focus on Vulkan) 
lead to a completely different implementation.

It's just that ideally I would like to have an implementation where I 
can apply at least the basics to amdgpu as well.

Regards,
Christian.

>
> - Danilo
>
>>
>>>>
>>>> I'd think that a single VM potentially has more mapping entries
>>>> than a single BO has mappings across multiple VMs.
>>>>
>>>> Another case to consider is the case I originally had in mind when
>>>> choosing this relationship: finding all mappings for a given BO,
>>>> which I guess all drivers need to do in order to invalidate
>>>> mappings on BO eviction.
>>>>
>>>> Having a list of VMs per BO, wouldn't you need to iterate all of 
>>>> the VMs entirely?
>>
>> No, see how amdgpu works.
>>
>> Regards,
>> Christian.
>>
>


Thread overview: 43+ messages
2023-06-20  0:42 [PATCH drm-next v5 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Danilo Krummrich
2023-06-19 23:06 ` Danilo Krummrich
2023-06-20  4:05   ` Dave Airlie
2023-06-20  7:06     ` Oded Gabbay
2023-06-20  7:13       ` Dave Airlie
2023-06-20  7:34         ` Oded Gabbay
2023-06-20  0:42 ` [PATCH drm-next v5 01/14] drm: execution context for GEM buffers v4 Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 02/14] maple_tree: split up MA_STATE() macro Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings Danilo Krummrich
2023-06-20  3:00   ` kernel test robot
2023-06-20  3:32   ` kernel test robot
2023-06-20  4:54   ` Christoph Hellwig
2023-06-20  6:45   ` Christian König
2023-06-20 12:23     ` Danilo Krummrich
2023-06-22 13:54       ` Christian König
2023-06-22 14:22         ` Danilo Krummrich
2023-06-22 14:42           ` Christian König
2023-06-22 15:04             ` Danilo Krummrich
2023-06-22 15:07               ` Danilo Krummrich
2023-06-23  2:24                 ` Matthew Brost
2023-06-23  7:16                 ` Christian König
2023-06-23 13:55                   ` Danilo Krummrich
2023-06-23 15:34                     ` Christian König [this message]
2023-06-26 22:38                       ` Dave Airlie
2023-06-21 18:58   ` Donald Robson
2023-06-20  0:42 ` [PATCH drm-next v5 04/14] drm: debugfs: provide infrastructure to dump a DRM GPU VA space Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 05/14] drm/nouveau: new VM_BIND uapi interfaces Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 06/14] drm/nouveau: get vmm via nouveau_cli_vmm() Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 07/14] drm/nouveau: bo: initialize GEM GPU VA interface Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 08/14] drm/nouveau: move usercopy helpers to nouveau_drv.h Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 09/14] drm/nouveau: fence: separate fence alloc and emit Danilo Krummrich
2023-06-21  2:26   ` kernel test robot
2023-06-20  0:42 ` [PATCH drm-next v5 10/14] drm/nouveau: fence: fail to emit when fence context is killed Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 11/14] drm/nouveau: chan: provide nouveau_channel_kill() Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 12/14] drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 13/14] drm/nouveau: implement new VM_BIND uAPI Danilo Krummrich
2023-06-20  0:42 ` [PATCH drm-next v5 14/14] drm/nouveau: debugfs: implement DRM GPU VA debugfs Danilo Krummrich
2023-06-20  9:25 ` [PATCH drm-next v5 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Boris Brezillon
2023-06-20 12:46   ` Danilo Krummrich
2023-06-22 13:01     ` Boris Brezillon
2023-06-22 13:58       ` Danilo Krummrich
2023-06-22 15:19         ` Boris Brezillon
2023-06-22 15:27           ` Danilo Krummrich
