From: "Christian König" <christian.koenig@amd.com>
To: Dave Airlie <airlied@gmail.com>
Cc: Danilo Krummrich <dakr@redhat.com>,
	Matthew Brost <matthew.brost@intel.com>,
	daniel@ffwll.ch, corbet@lwn.net, dri-devel@lists.freedesktop.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	mripard@kernel.org, bskeggs@redhat.com, jason@jlekstrand.net,
	nouveau@lists.freedesktop.org, airlied@redhat.com
Subject: Re: [Nouveau] [PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
Date: Thu, 2 Feb 2023 12:53:52 +0100	[thread overview]
Message-ID: <a1c526e0-0df7-12cb-c5a1-06e9cd0d876b@amd.com> (raw)
In-Reply-To: <CAPM=9txON8VCb3H7vDY_DOgtUg2Ad3mBvYVxgSMyZ1noOu-rBQ@mail.gmail.com>

On 01.02.23 09:10, Dave Airlie wrote:
> [SNIP]
>>> For drivers that don't intend to merge at all and (somehow) are
>>> capable of dealing with sparse regions without knowing the sparse
>>> region's boundaries, it'd be easy to make those gpuva_regions optional.
>> Yeah, but this then defeats the approach of having the same hw
>> independent interface/implementation for all drivers.
> I think you are running a few steps ahead here. The plan isn't to have
> an independent interface, it's to provide a set of routines and
> tracking that will be consistent across drivers, so that all drivers
> once using them will operate in mostly the same fashion with respect
> to GPU VA tracking and VA/BO lifetimes. Already in the tree we have
> amdgpu and freedreno which I think end up operating slightly different
> around lifetimes. I'd like to save future driver writers the effort of
> dealing with those decisions and this should drive their user api
> design so to enable vulkan sparse bindings.

Ok, in this case I'm pretty sure this is *NOT* a good idea.

See, this means that we define the UAPI implicitly, by telling drivers 
to use a common framework for their VM implementation which then results 
in behavior A, B, C, D....

If a driver strays away from this common framework because it has 
different requirements based on how its hw works, you certainly get 
different behavior again (and you have tons of hw-specific requirements 
in here).

What we should do instead, if we want to have some common handling among 
drivers (which I totally agree makes sense), is define the UAPI 
explicitly.

For example we could have a DRM_IOCTL_GPU_VM which takes both 
driver-independent as well as driver-dependent information and then has 
the documented behavior:
a) VAs do (or don't) vanish automatically when the GEM handle is closed.
b) GEM BOs do (or don't) get an additional reference for each VM they 
are used in.
c) Can handle some common use cases driver-independently (BO mappings, 
readonly, writeonly, sparse etc...).
d) Has a well defined behavior when the operation is executed 
asynchronously, e.g. in/out fences.
e) Can still handle hw-specific stuff like (for example) trap on access 
etc....
...
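To make (a)-(e) a bit more concrete, here is a purely hypothetical sketch of what such a request structure could carry. Neither DRM_IOCTL_GPU_VM nor any of these field or flag names exist in the kernel; they only illustrate how driver-independent, documented behavior and driver-dependent extras could be combined in one ioctl:

```c
#include <stdint.h>

/* Hypothetical flags covering the common, documented cases (a)-(c). */
#define GPU_VM_BIND_READONLY   (1u << 0)  /* (c) access flags              */
#define GPU_VM_BIND_WRITEONLY  (1u << 1)
#define GPU_VM_BIND_SPARSE     (1u << 2)  /* (c) sparse placeholder range  */
#define GPU_VM_BIND_KEEP_REF   (1u << 3)  /* (b) VM takes a BO reference   */

struct gpu_vm_bind {
	uint32_t vm_id;        /* which VM to operate on                  */
	uint32_t bo_handle;    /* GEM handle, 0 for unbind/sparse         */
	uint64_t bo_offset;    /* offset into the BO                      */
	uint64_t va_addr;      /* GPU virtual address of the mapping      */
	uint64_t va_range;     /* size of the mapping                     */
	uint32_t flags;        /* common documented behavior, see above   */
	uint32_t in_fence_fd;  /* (d) wait before applying the bind       */
	uint32_t out_fence_fd; /* (d) signaled when the bind has landed   */
	uint32_t pad;
	uint64_t driver_data;  /* (e) pointer to a hw-specific chunk      */
};
```

The driver_data pointer is where hw-specific stuff like trap-on-access (e) would live, while the documented flags and fence fds cover the driver-independent part.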

Especially d) is what Bas and I have pretty much already created a 
prototype for in the amdgpu-specific IOCTL, but essentially this is 
completely driver-independent and actually the more complex part. 
Compared to that, common lifetime handling of BOs is just nice to have.

I strongly think we should concentrate on getting this right as well.

> Now if merging is a feature that makes sense to one driver maybe it
> makes sense to all, however there may be reasons amdgpu gets away
> without merging that other drivers might not benefit from, there might
> also be a benefit to amdgpu from merging that you haven't looked at
> yet, so I think we could leave merging as an optional extra driver
> knob here. The userspace API should operate the same, it would just be
> the gpu pagetables that would end up different sizes.

Yeah, agree completely. The point is that we should not have complexity 
inside the kernel which is not necessarily needed there.

So whether to merge or not is something we have gone back and forth on 
for amdgpu: on the one hand it reduces the memory footprint of the 
housekeeping overhead, on the other hand it makes the handling more 
complex, error prone and uses a few more CPU cycles.

For amdgpu merging is mostly beneficial when you can get rid of a whole 
page table layer in the hierarchy, but for this you need to merge at 
least 2MiB or 1GiB together. And since that case doesn't happen that 
often we stopped doing it.

But for my understanding: why do you need the ranges for merging? Isn't 
it sufficient to check that the mappings have the same type, flags, 
backing BO, etc.?
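The kind of check meant here could be sketched as follows (hypothetical structure and helper names for illustration, not the actual drm_gpuva API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-mapping record; names are made up. */
struct va_mapping {
	uint64_t va_addr;   /* GPU VA start of the mapping   */
	uint64_t va_range;  /* size of the mapping           */
	uint64_t bo_offset; /* offset into the backing BO    */
	uint32_t bo_handle; /* backing BO handle, 0 = sparse */
	uint32_t flags;     /* type, readonly, etc.          */
};

/* Two adjacent mappings can merge when they are VA-contiguous, share
 * the same backing BO and flags, and their BO offsets line up too. */
static bool can_merge(const struct va_mapping *a, const struct va_mapping *b)
{
	return a->va_addr + a->va_range == b->va_addr &&
	       a->bo_handle == b->bo_handle &&
	       a->flags == b->flags &&
	       (a->bo_handle == 0 || /* sparse: no backing offset to match */
	        a->bo_offset + a->va_range == b->bo_offset);
}
```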

Regards,
Christian.


>
> Dave.



Thread overview: 75+ messages
2023-01-18  6:12 [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 01/14] drm: execution context for GEM buffers Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 02/14] drm/exec: fix memory leak in drm_exec_prepare_obj() Danilo Krummrich
2023-01-18  8:51   ` Christian König
2023-01-18 19:00     ` Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 03/14] drm: manager to keep track of GPUs VA mappings Danilo Krummrich
2023-01-19  4:14   ` Bagas Sanjaya
2023-01-20 18:32     ` Danilo Krummrich
2023-01-23 23:23   ` Niranjana Vishwanathapura
2023-01-26 23:43   ` Matthew Brost
2023-01-27  0:24   ` Matthew Brost
2023-02-03 17:37   ` Matthew Brost
2023-02-06 13:35     ` Christian König
2023-02-06 13:46       ` Danilo Krummrich
2023-02-14 11:52     ` Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 04/14] drm: debugfs: provide infrastructure to dump a DRM GPU VA space Danilo Krummrich
2023-01-18 13:55   ` kernel test robot
2023-01-18 15:47   ` kernel test robot
2023-01-18  6:12 ` [PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces Danilo Krummrich
2023-01-27  1:05   ` Matthew Brost
2023-01-27  1:26     ` Danilo Krummrich
2023-01-27  7:55       ` Christian König
2023-01-27 13:12         ` Danilo Krummrich
2023-01-27 13:23           ` Christian König
2023-01-27 14:44             ` Danilo Krummrich
2023-01-27 15:17               ` Christian König
2023-01-27 20:25                 ` David Airlie
2023-01-30 12:58                   ` Christian König
2023-01-27 21:09                 ` Danilo Krummrich
2023-01-29 18:46                   ` Danilo Krummrich
2023-01-30 13:02                     ` Christian König
2023-01-30 23:38                       ` Danilo Krummrich
2023-02-01  8:10                       ` [Nouveau] " Dave Airlie
2023-02-02 11:53                         ` Christian König [this message]
2023-02-02 18:31                           ` Danilo Krummrich
2023-02-06  9:48                             ` Christian König
2023-02-06 13:27                               ` Danilo Krummrich
2023-02-06 16:14                                 ` Christian König
2023-02-06 18:20                                   ` Danilo Krummrich
2023-02-07  9:35                                     ` Christian König
2023-02-07 10:50                                       ` Danilo Krummrich
2023-02-10 11:50                                         ` Christian König
2023-02-10 12:47                                           ` Danilo Krummrich
2023-01-27  1:43     ` Danilo Krummrich
2023-01-27  3:21       ` Matthew Brost
2023-01-27  3:33         ` Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 06/14] drm/nouveau: get vmm via nouveau_cli_vmm() Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 07/14] drm/nouveau: bo: initialize GEM GPU VA interface Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 08/14] drm/nouveau: move usercopy helpers to nouveau_drv.h Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 09/14] drm/nouveau: fence: fail to emit when fence context is killed Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 10/14] drm/nouveau: chan: provide nouveau_channel_kill() Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 11/14] drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm Danilo Krummrich
2023-01-20  3:37   ` kernel test robot
2023-01-18  6:12 ` [PATCH drm-next 12/14] drm/nouveau: implement uvmm for user mode bindings Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 13/14] drm/nouveau: implement new VM_BIND UAPI Danilo Krummrich
2023-01-18 20:37   ` Thomas Hellström (Intel)
2023-01-19  3:44     ` Danilo Krummrich
2023-01-19  4:58       ` Matthew Brost
2023-01-19  7:32         ` Thomas Hellström (Intel)
2023-01-20 10:08         ` Boris Brezillon
2023-01-18  6:12 ` [PATCH drm-next 14/14] drm/nouveau: debugfs: implement DRM GPU VA debugfs Danilo Krummrich
2023-01-18  8:53 ` [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Christian König
2023-01-18 15:34   ` Danilo Krummrich
2023-01-18 15:37     ` Christian König
2023-01-18 16:19       ` Danilo Krummrich
2023-01-18 16:30         ` Alex Deucher
2023-01-18 16:50           ` Danilo Krummrich
2023-01-18 16:54             ` Alex Deucher
2023-01-18 19:17               ` Dave Airlie
2023-01-18 19:48                 ` Christian König
2023-01-19  4:04                   ` Danilo Krummrich
2023-01-19  5:23                     ` Matthew Brost
2023-01-19 11:33                       ` drm_gpuva_manager requirements (was Re: [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI) Christian König
2023-02-06 14:48                       ` [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Oded Gabbay
2023-03-16 16:39                         ` Danilo Krummrich
