From: Alex Deucher <alexdeucher@gmail.com>
To: Danilo Krummrich <dakr@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>,
	daniel@ffwll.ch, airlied@redhat.com, bskeggs@redhat.com,
	jason@jlekstrand.net, tzimmermann@suse.de, mripard@kernel.org,
	corbet@lwn.net, nouveau@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-doc@vger.kernel.org
Subject: Re: [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI
Date: Wed, 18 Jan 2023 11:30:44 -0500
Message-ID: <CADnq5_PGaXFW-z3gt+R+W+vBVdeuL4wMuMOQh4muxU13Bemy3A@mail.gmail.com>
In-Reply-To: <bcbef353-f579-4e90-1c77-be36bbe61c0f@redhat.com>

On Wed, Jan 18, 2023 at 11:19 AM Danilo Krummrich <dakr@redhat.com> wrote:
>
> On 1/18/23 16:37, Christian König wrote:
> > Am 18.01.23 um 16:34 schrieb Danilo Krummrich:
> >> Hi Christian,
> >>
> >> On 1/18/23 09:53, Christian König wrote:
> >>> Am 18.01.23 um 07:12 schrieb Danilo Krummrich:
> >>>> This patch series provides a new UAPI for the Nouveau driver in order
> >>>> to support Vulkan features, such as sparse bindings and sparse residency.
> >>>>
> >>>> Furthermore, with the DRM GPUVA manager it provides a new DRM core
> >>>> feature to keep track of GPU virtual address (VA) mappings in a more
> >>>> generic way.
> >>>>
> >>>> The DRM GPUVA manager is intended to help drivers implement
> >>>> userspace-manageable GPU VA spaces in reference to the Vulkan API. In
> >>>> order to achieve this goal it serves the following purposes in this
> >>>> context.
> >>>>
> >>>>      1) Provide a dedicated range allocator to track GPU VA allocations
> >>>>         and mappings, making use of the drm_mm range allocator.
> >>>
> >>> This means that the ranges are allocated by the kernel? If yes that's
> >>> a really really bad idea.
> >>
> >> No, it's just for keeping track of the ranges userspace has allocated.
> >
> > Ok, that makes more sense.
> >
> > So basically you have an IOCTL which asks kernel for a free range? Or
> > what exactly is the drm_mm used for here?
>
> Not even that, userspace provides both the base address and the range,
> the kernel really just keeps track of things. Though, writing a UAPI on
> top of the GPUVA manager asking for a free range instead would be
> possible by just adding the corresponding wrapper functions to get a
> free hole.
>
> Currently, and that's what I think I read out of your question, the main
> benefit of using drm_mm over simply stuffing the entries into a list or
> something boils down to easier collision detection and iterating
> sub-ranges of the whole VA space.
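
As an aside, a minimal sketch of what such drm_mm-based tracking could
look like (not code from the series; the example_* names are made up,
and the drm_mm is assumed to have been set up with drm_mm_init() to
cover the VA space):

    #include <drm/drm_mm.h>

    struct example_va {
            struct drm_mm_node node;  /* start/size of the userspace mapping */
            /* driver data, e.g. a pointer to the backing GEM object */
    };

    /*
     * Userspace provides addr/size; the kernel only records the range.
     * 'va' is assumed to be zero-initialized (e.g. kzalloc'd).
     */
    static int example_va_insert(struct drm_mm *mm, struct example_va *va,
                                 u64 addr, u64 size)
    {
            va->node.start = addr;
            va->node.size = size;
            /* Fails with -ENOSPC if the range collides with an existing one. */
            return drm_mm_reserve_node(mm, &va->node);
    }

    /* Visit all tracked mappings overlapping [addr, addr + size). */
    static void example_va_walk_range(struct drm_mm *mm, u64 addr, u64 size)
    {
            struct drm_mm_node *node;

            drm_mm_for_each_node_in_range(node, mm, addr, addr + size) {
                    /* e.g. collect nodes for split/merge or unmap handling */
            }
    }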

Why not just do this in userspace?  We have a range manager in
libdrm_amdgpu that you could lift out into libdrm or some other
helper.
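
For illustration only, a rough sketch of what that looks like in
userspace with libdrm_amdgpu's existing VA range helpers (the
example_* wrappers are made up, error handling omitted):

    #include <amdgpu.h>   /* libdrm_amdgpu */

    /* Ask the userspace range manager for a free hole of 'size' bytes. */
    static int example_alloc_va(amdgpu_device_handle dev, uint64_t size,
                                uint64_t *va, amdgpu_va_handle *handle)
    {
            return amdgpu_va_range_alloc(dev, amdgpu_gpu_va_range_general,
                                         size,
                                         0x1000, /* base alignment */
                                         0,      /* no required base address */
                                         va, handle, 0 /* flags */);
    }

    static int example_free_va(amdgpu_va_handle handle)
    {
            return amdgpu_va_range_free(handle);
    }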

Alex


>
> >
> > Regards,
> > Christian.
> >
> >>
> >> - Danilo
> >>
> >>>
> >>> Regards,
> >>> Christian.
> >>>
> >>>>
> >>>>      2) Generically connect GPU VA mappings to their backing buffers,
> >>>>         in particular DRM GEM objects.
> >>>>
> >>>>      3) Provide a common implementation to perform more complex mapping
> >>>>         operations on the GPU VA space, in particular splitting and
> >>>>         merging of GPU VA mappings, e.g. for intersecting mapping
> >>>>         requests or partial unmap requests.
> >>>>
> >>>> The new VM_BIND Nouveau UAPI builds on top of the DRM GPUVA manager
> >>>> and itself provides the following new interfaces.
> >>>>
> >>>>      1) Initialize a GPU VA space via the new DRM_IOCTL_NOUVEAU_VM_INIT
> >>>>         ioctl for UMDs to specify the portion of VA space managed by the
> >>>>         kernel and userspace, respectively.
> >>>>
> >>>>      2) Allocate and free a VA space region as well as bind and unbind
> >>>>         memory to the GPU's VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND
> >>>>         ioctl.
> >>>>
> >>>>      3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl.
> >>>>
> >>>> Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC make use of
> >>>> the DRM scheduler to queue jobs and support asynchronous processing
> >>>> with DRM syncobjs as the synchronization mechanism.
> >>>>
> >>>> By default, DRM_IOCTL_NOUVEAU_VM_BIND does synchronous processing;
> >>>> DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.
> >>>>
> >>>> The new VM_BIND UAPI for Nouveau also makes use of drm_exec (execution
> >>>> context for GEM buffers) by Christian König. Since the patch
> >>>> implementing drm_exec has not yet been merged into drm-next, it is part
> >>>> of this series, together with a small fix for it that was found while
> >>>> testing this series.
> >>>>
> >>>> This patch series is also available at [1].
> >>>>
> >>>> There is a Mesa NVK merge request by Dave Airlie [2] implementing the
> >>>> corresponding userspace parts for this series.
> >>>>
> >>>> The Vulkan CTS test suite passes the sparse binding and sparse
> >>>> residency test cases for the new UAPI together with Dave's Mesa work.
> >>>>
> >>>> There are also some test cases in the igt-gpu-tools project [3] for the
> >>>> new UAPI and hence the DRM GPU VA manager. However, most of them test
> >>>> the DRM GPU VA manager's logic through Nouveau's new UAPI and should be
> >>>> considered just as helpers for the implementation.
> >>>>
> >>>> However, I absolutely intend to turn those test cases into proper KUnit
> >>>> test cases for the DRM GPUVA manager, if and once we agree on its
> >>>> usefulness and design.
> >>>>
> >>>> [1]
> >>>> https://gitlab.freedesktop.org/nouvelles/kernel/-/tree/new-uapi-drm-next /
> >>>> https://gitlab.freedesktop.org/nouvelles/kernel/-/merge_requests/1
> >>>> [2] https://gitlab.freedesktop.org/nouveau/mesa/-/merge_requests/150/
> >>>> [3]
> >>>> https://gitlab.freedesktop.org/dakr/igt-gpu-tools/-/tree/wip_nouveau_vm_bind
> >>>>
> >>>> I also want to give credit to Dave Airlie, who contributed a lot of
> >>>> ideas to this patch series.
> >>>>
> >>>> Christian König (1):
> >>>>    drm: execution context for GEM buffers
> >>>>
> >>>> Danilo Krummrich (13):
> >>>>    drm/exec: fix memory leak in drm_exec_prepare_obj()
> >>>>    drm: manager to keep track of GPUs VA mappings
> >>>>    drm: debugfs: provide infrastructure to dump a DRM GPU VA space
> >>>>    drm/nouveau: new VM_BIND uapi interfaces
> >>>>    drm/nouveau: get vmm via nouveau_cli_vmm()
> >>>>    drm/nouveau: bo: initialize GEM GPU VA interface
> >>>>    drm/nouveau: move usercopy helpers to nouveau_drv.h
> >>>>    drm/nouveau: fence: fail to emit when fence context is killed
> >>>>    drm/nouveau: chan: provide nouveau_channel_kill()
> >>>>    drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm
> >>>>    drm/nouveau: implement uvmm for user mode bindings
> >>>>    drm/nouveau: implement new VM_BIND UAPI
> >>>>    drm/nouveau: debugfs: implement DRM GPU VA debugfs
> >>>>
> >>>>   Documentation/gpu/driver-uapi.rst             |   11 +
> >>>>   Documentation/gpu/drm-mm.rst                  |   43 +
> >>>>   drivers/gpu/drm/Kconfig                       |    6 +
> >>>>   drivers/gpu/drm/Makefile                      |    3 +
> >>>>   drivers/gpu/drm/amd/amdgpu/Kconfig            |    1 +
> >>>>   drivers/gpu/drm/drm_debugfs.c                 |   56 +
> >>>>   drivers/gpu/drm/drm_exec.c                    |  294 ++++
> >>>>   drivers/gpu/drm/drm_gem.c                     |    3 +
> >>>>   drivers/gpu/drm/drm_gpuva_mgr.c               | 1323 +++++++++++++++++
> >>>>   drivers/gpu/drm/nouveau/Kbuild                |    3 +
> >>>>   drivers/gpu/drm/nouveau/Kconfig               |    2 +
> >>>>   drivers/gpu/drm/nouveau/include/nvif/if000c.h |   23 +-
> >>>>   drivers/gpu/drm/nouveau/include/nvif/vmm.h    |   17 +-
> >>>>   .../gpu/drm/nouveau/include/nvkm/subdev/mmu.h |   10 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_abi16.c       |   23 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_abi16.h       |    1 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_bo.c          |  152 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_bo.h          |    2 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_chan.c        |   16 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_chan.h        |    1 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_debugfs.c     |   24 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_drm.c         |   25 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_drv.h         |   92 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_exec.c        |  310 ++++
> >>>>   drivers/gpu/drm/nouveau/nouveau_exec.h        |   55 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_fence.c       |    7 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_fence.h       |    2 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_gem.c         |   83 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_mem.h         |    5 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_prime.c       |    2 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_sched.c       |  780 ++++++++++
> >>>>   drivers/gpu/drm/nouveau/nouveau_sched.h       |   98 ++
> >>>>   drivers/gpu/drm/nouveau/nouveau_svm.c         |    2 +-
> >>>>   drivers/gpu/drm/nouveau/nouveau_uvmm.c        |  575 +++++++
> >>>>   drivers/gpu/drm/nouveau/nouveau_uvmm.h        |   68 +
> >>>>   drivers/gpu/drm/nouveau/nouveau_vmm.c         |    4 +-
> >>>>   drivers/gpu/drm/nouveau/nvif/vmm.c            |   73 +-
> >>>>   .../gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c    |  168 ++-
> >>>>   .../gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.h    |    1 +
> >>>>   drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c |   32 +-
> >>>>   drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h |    3 +
> >>>>   include/drm/drm_debugfs.h                     |   25 +
> >>>>   include/drm/drm_drv.h                         |    6 +
> >>>>   include/drm/drm_exec.h                        |  144 ++
> >>>>   include/drm/drm_gem.h                         |   75 +
> >>>>   include/drm/drm_gpuva_mgr.h                   |  527 +++++++
> >>>>   include/uapi/drm/nouveau_drm.h                |  216 +++
> >>>>   47 files changed, 5266 insertions(+), 126 deletions(-)
> >>>>   create mode 100644 drivers/gpu/drm/drm_exec.c
> >>>>   create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
> >>>>   create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.c
> >>>>   create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.h
> >>>>   create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.c
> >>>>   create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.h
> >>>>   create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.c
> >>>>   create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.h
> >>>>   create mode 100644 include/drm/drm_exec.h
> >>>>   create mode 100644 include/drm/drm_gpuva_mgr.h
> >>>>
> >>>>
> >>>> base-commit: 0b45ac1170ea6416bc1d36798414c04870cd356d
> >>>
> >>
> >
>

Thread overview: 75+ messages
2023-01-18  6:12 [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 01/14] drm: execution context for GEM buffers Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 02/14] drm/exec: fix memory leak in drm_exec_prepare_obj() Danilo Krummrich
2023-01-18  8:51   ` Christian König
2023-01-18 19:00     ` Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 03/14] drm: manager to keep track of GPUs VA mappings Danilo Krummrich
2023-01-19  4:14   ` Bagas Sanjaya
2023-01-20 18:32     ` Danilo Krummrich
2023-01-23 23:23   ` Niranjana Vishwanathapura
2023-01-26 23:43   ` Matthew Brost
2023-01-27  0:24   ` Matthew Brost
2023-02-03 17:37   ` Matthew Brost
2023-02-06 13:35     ` Christian König
2023-02-06 13:46       ` Danilo Krummrich
2023-02-14 11:52     ` Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 04/14] drm: debugfs: provide infrastructure to dump a DRM GPU VA space Danilo Krummrich
2023-01-18 13:55   ` kernel test robot
2023-01-18 15:47   ` kernel test robot
2023-01-18  6:12 ` [PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces Danilo Krummrich
2023-01-27  1:05   ` Matthew Brost
2023-01-27  1:26     ` Danilo Krummrich
2023-01-27  7:55       ` Christian König
2023-01-27 13:12         ` Danilo Krummrich
2023-01-27 13:23           ` Christian König
2023-01-27 14:44             ` Danilo Krummrich
2023-01-27 15:17               ` Christian König
2023-01-27 20:25                 ` David Airlie
2023-01-30 12:58                   ` Christian König
2023-01-27 21:09                 ` Danilo Krummrich
2023-01-29 18:46                   ` Danilo Krummrich
2023-01-30 13:02                     ` Christian König
2023-01-30 23:38                       ` Danilo Krummrich
2023-02-01  8:10                       ` [Nouveau] " Dave Airlie
2023-02-02 11:53                         ` Christian König
2023-02-02 18:31                           ` Danilo Krummrich
2023-02-06  9:48                             ` Christian König
2023-02-06 13:27                               ` Danilo Krummrich
2023-02-06 16:14                                 ` Christian König
2023-02-06 18:20                                   ` Danilo Krummrich
2023-02-07  9:35                                     ` Christian König
2023-02-07 10:50                                       ` Danilo Krummrich
2023-02-10 11:50                                         ` Christian König
2023-02-10 12:47                                           ` Danilo Krummrich
2023-01-27  1:43     ` Danilo Krummrich
2023-01-27  3:21       ` Matthew Brost
2023-01-27  3:33         ` Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 06/14] drm/nouveau: get vmm via nouveau_cli_vmm() Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 07/14] drm/nouveau: bo: initialize GEM GPU VA interface Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 08/14] drm/nouveau: move usercopy helpers to nouveau_drv.h Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 09/14] drm/nouveau: fence: fail to emit when fence context is killed Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 10/14] drm/nouveau: chan: provide nouveau_channel_kill() Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 11/14] drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm Danilo Krummrich
2023-01-20  3:37   ` kernel test robot
2023-01-18  6:12 ` [PATCH drm-next 12/14] drm/nouveau: implement uvmm for user mode bindings Danilo Krummrich
2023-01-18  6:12 ` [PATCH drm-next 13/14] drm/nouveau: implement new VM_BIND UAPI Danilo Krummrich
2023-01-18 20:37   ` Thomas Hellström (Intel)
2023-01-19  3:44     ` Danilo Krummrich
2023-01-19  4:58       ` Matthew Brost
2023-01-19  7:32         ` Thomas Hellström (Intel)
2023-01-20 10:08         ` Boris Brezillon
2023-01-18  6:12 ` [PATCH drm-next 14/14] drm/nouveau: debugfs: implement DRM GPU VA debugfs Danilo Krummrich
2023-01-18  8:53 ` [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Christian König
2023-01-18 15:34   ` Danilo Krummrich
2023-01-18 15:37     ` Christian König
2023-01-18 16:19       ` Danilo Krummrich
2023-01-18 16:30         ` Alex Deucher [this message]
2023-01-18 16:50           ` Danilo Krummrich
2023-01-18 16:54             ` Alex Deucher
2023-01-18 19:17               ` Dave Airlie
2023-01-18 19:48                 ` Christian König
2023-01-19  4:04                   ` Danilo Krummrich
2023-01-19  5:23                     ` Matthew Brost
2023-01-19 11:33                       ` drm_gpuva_manager requirements (was Re: [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI) Christian König
2023-02-06 14:48                       ` [PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Oded Gabbay
2023-03-16 16:39                         ` Danilo Krummrich
