From: Daniel Vetter <daniel@ffwll.ch>
To: "Christian König" <christian.koenig@amd.com>
Cc: Simon Shields <simon@lineageos.org>,
	devicetree@vger.kernel.org, Connor Abbott <cwabbott0@gmail.com>,
	Marek Vasut <marex@denx.de>,
	Neil Armstrong <narmstrong@baylibre.com>,
	Andrei Paulau <7134956@gmail.com>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	Vasily Khoruzhick <anarsoul@gmail.com>,
	Qiang Yu <yuq825@gmail.com>, Erico Nunes <nunes.erico@gmail.com>
Subject: Re: [PATCH RFC 00/24] Lima DRM driver
Date: Thu, 24 May 2018 09:25:48 +0200
Message-ID: <CAKMK7uFHpX12RKJewZmSBnTgpcAvr9OpLgyYL__RGQW7vqkJYQ@mail.gmail.com>
In-Reply-To: <cfc190af-1d46-7dca-fded-19b3b7896166@amd.com>

On Thu, May 24, 2018 at 8:27 AM, Christian König
<christian.koenig@amd.com> wrote:
> Am 24.05.2018 um 02:31 schrieb Qiang Yu:
>>
>> On Wed, May 23, 2018 at 11:44 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
>>>
>>> On Wed, May 23, 2018 at 3:52 PM, Qiang Yu <yuq825@gmail.com> wrote:
>>>>
>>>> On Wed, May 23, 2018 at 5:29 PM, Christian König
>>>> <ckoenig.leichtzumerken@gmail.com> wrote:
>>>>>
>>>>> Am 18.05.2018 um 11:27 schrieb Qiang Yu:
>>>>>>
>>>>>> Kernel DRM driver for ARM Mali 400/450 GPUs.
>>>>>>
>>>>>> This implementation mainly takes the amdgpu DRM driver as a
>>>>>> reference.
>>>>>>
>>>>>> - Mali 4xx GPUs have two kinds of processors: GP and PP. GP is
>>>>>>     for OpenGL vertex shader processing and PP is for fragment
>>>>>>     shader processing. Each processor has its own MMU, so
>>>>>>     processors work in virtual address space.
>>>>>> - There's only one GP but multiple PPs (max 4 for Mali 400 and
>>>>>>     8 for Mali 450) in the same Mali 4xx GPU. All PPs are
>>>>>>     grouped together to handle a single fragment shader task,
>>>>>>     divided by FB output tiled pixels. The Mali 400 user space
>>>>>>     driver is responsible for assigning target tiled pixels to
>>>>>>     each PP, but the Mali 450 has a HW module called the DLBU
>>>>>>     to dynamically balance each PP's load.
>>>>>> - The user space driver allocates buffer objects and maps them
>>>>>>     into the GPU virtual address space, uploads the command
>>>>>>     stream and draw data through a CPU mmap of the buffer
>>>>>>     object, then submits the task to GP/PP with a register
>>>>>>     frame indicating where the command stream is, plus misc
>>>>>>     settings.
>>>>>> - There's no command stream validation/relocation, because each
>>>>>>     user process has its own GPU virtual address space. The
>>>>>>     GP/PP MMUs switch virtual address spaces before running
>>>>>>     tasks from different user processes. Erroneous or malicious
>>>>>>     user space code just gets an MMU fault or a GP/PP error
>>>>>>     IRQ, after which the HW/SW is recovered.
>>>>>> - Use TTM as the MM. TTM_PL_TT type memory is used as the
>>>>>>     content of lima buffer objects, which are allocated from
>>>>>>     the TTM page pool. All lima buffer objects get pinned with
>>>>>>     TTM_PL_FLAG_NO_EVICT at allocation, so there's no buffer
>>>>>>     eviction and swap for now. We need reverse engineering to
>>>>>>     see if and how GP/PP support MMU fault recovery (continuing
>>>>>>     execution). Otherwise we have to pin/unpin each involved
>>>>>>     buffer at task creation/deletion.
>>>>>
>>>>>
>>>>> Well, pinning all memory is usually a no-go for upstreaming. But
>>>>> since you are already using drm_sched for GPU task scheduling,
>>>>> why do you actually need this?
>>>>>
>>>>> The scheduler should take care of signaling all fences when the
>>>>> hardware is done with its magic, and that is enough for TTM to
>>>>> note that a buffer object is movable again (e.g. unpin it).
>>>>
>>>> Please correct me if I'm wrong.
>>>>
>>>> One way to implement eviction/swap is like this:
>>>> call validation on each buffer involved in a task, but this won't
>>>> prevent eviction/swap while the task is executing, so a GPU MMU
>>>> fault may happen, and in the handler we need to recover the
>>>> evicted/swapped buffer.
>>>>
>>>> Another way is to pin/unpin the involved buffers at task
>>>> creation/free.
>>>>
>>>> The first way is better when memory load is low and the second
>>>> way is better when memory load is high. The first way also needs
>>>> less memory.
>>>>
>>>> So I'd prefer the first way, but because the GPU MMU fault HW
>>>> behavior still needs reverse engineering, I have to pin all
>>>> buffers for now. Once the HW behavior is clear, I can choose one
>>>> way to implement.
>>>
>>> All the drivers using ttm have something that looks like vram, or a
>>> requirement to move buffers around. Afaiui that includes virtio drm
>>> driver.
>>
>> Does the virtio drm driver need to move buffers around? amdgpu also
>> has no vram on APUs.

Afaiui APUs have a range of stolen memory which looks, acts, and is
managed like discrete vram, including moving buffers around.

>>>  From your description you don't have such a requirement, so
>>> doing what etnaviv has done would be a lot simpler. Everything
>>> that's not related to buffer movement handling is also available
>>> outside of ttm already.
>>
>> Yeah, I could do it like etnaviv, but that's not simpler than using
>> ttm directly, especially if I want some optimizations (like the ttm
>> page pool, ttm_eu_reserve_buffers, ttm_bo_mmap). If I have to
>> implement them anyway, why not just use TTM directly with all those
>> helper functions?
>
>
> Well, TTM has some design flaws (e.g. its heavily layered design),
> but it also offers some rather nice functionality.

Yeah, but I still think that for non-discrete drivers, moving more of
the neat ttm functionality into helpers where everyone can use it
(instead of the binary ttm y/n decision) would be much better. E.g.
the allocator pool definitely sounds like something gem helpers should
be able to do, same for reserving a pile of buffers or default mmap
implementations. A lot of that also exists already, thanks to lots of
efforts from Noralf Tronnes and others.

I think ideally the long-term goal would be to modularize ttm concepts
as much as possible, so that drivers can flexibly pick&choose the bits
they need. We're slowly getting there (but definitely not yet there if
you need to manage discrete vram I think).
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

