From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: "Christian König" <christian.koenig@amd.com>,
	"Zack Rusin" <zackr@vmware.com>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 0/5] drm/vmwgfx: Support module unload and hotunplug
Date: Mon, 11 Oct 2021 14:04:16 +0200	[thread overview]
Message-ID: <4041c72ff1d3d149437cedbcf3c598ae2238519d.camel@linux.intel.com> (raw)
In-Reply-To: <232f45e9-8748-1243-09bf-56763e6668b3@amd.com>

On Mon, 2021-10-11 at 10:17 +0200, Christian König wrote:
> Am 08.10.21 um 23:13 schrieb Thomas Hellström:
> > On Fri, 2021-10-08 at 20:40 +0000, Zack Rusin wrote:
> > > On Fri, 2021-10-08 at 22:28 +0200, Thomas Hellström wrote:
> > > > On Fri, 2021-10-08 at 13:31 -0400, Zack Rusin wrote:
> > > > > This is a largely trivial set that makes vmwgfx support
> > > > > module
> > > > > reload
> > > > > and PCI hot-unplug. It also makes IGT's core_hotunplug pass
> > > > > instead
> > > > > of kernel oops'ing.
> > > > > 
> > > > > The one "ugly" change is the "Introduce a new placement for
> > > > > MOB
> > > > > page
> > > > > tables". It seems vmwgfx has been violating a TTM assumption
> > > > > that
> > > > > TTM_PL_SYSTEM buffers are never fenced for a while. Apart
> > > > > from a
> > > > > kernel
> > > > > oops on module unload it didn't seem to wreak too much havoc,
> > > > > but
> > > > > we
> > > > > shouldn't be abusing TTM. So to solve it we're introducing a
> > > > > new
> > > > > placement, which is basically system, but can deal with
> > > > > fenced
> > > > > bo's.
> > > > > 
> > > > > Cc: Christian König <christian.koenig@amd.com>
> > > > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > Hi, Zack,
> > > > 
> > > > What part of TTM doesn't allow fenced system memory currently?
> > > > It
> > > > was
> > > > certainly designed to allow that and vmwgfx has been relying on
> > > > that
> > > > since the introduction of MOBs IIRC. Also i915 is currently
> > > > relying
> > > > on
> > > > that.
> > > It's the shutdown. BO's allocated through the ttm system manager
> > > might
> > > be busy during ttm_bo_put which results in them being scheduled
> > > for a
> > > delayed deletion. The ttm system manager is disabled before the
> > > final
> > > delayed deletion is ran in ttm_device_fini. This results in
> > > crashes
> > > during freeing of the BO resources because they're trying to
> > > remove
> > > themselves from a no longer existent ttm_resource_manager (e.g.
> > > in
> > > IGT's core_hotunplug on vmwgfx)
> > > 
> > > During review of the trivial patch that was fixing it in ttm
> > > Christian
> > > said that system domain buffers must be idle or otherwise a
> > > number of
> > > assumptions in ttm breaks:
> > > 
> > > 

> > > https://lists.freedesktop.org/archives/dri-devel/2021-September/324027.html
> > > And later clarified that in fact system domain buffers being
> > > fenced
> > > is
> > > illegal from a design point of view:
> > > 
> > > 

> > > https://lists.freedesktop.org/archives/dri-devel/2021-September/324697.html
> > Hmm, this looks very odd, because I remember reminding Christian as
> > late as this spring that both vmwgfx and i915 sets up GPU bindings
> > to
> > system buffers, as part of the review of a patch series pushing a
> > couple of changes to the swapout path that apparently had missed
> > this.
> 
> Well that was the trigger to look more deeply into this and as far as
> I 
> can tell TTM was never capable of doing this correctly.

So apart from the teardown, which appears to be an oversight from when the
system manager was introduced, where do we currently fail with this?

> 
> > This more sounds like there have been changes to TTM happening not
> > taking into account or knowing that TTM was designed for system
> > buffers
> > bound to GPU and that there were drivers actually doing that.
> > 
> > And there is still older code around clearly implying system
> > buffers
> > can be fenced, like ttm_bo_swapout(), and that there are dma fences
> > signaling completion on work that has never touched the GPU, not to
> > mention async eviction where a bo may be evicted to system but has
> > tons
> > of outstanding fenced work in the pipe.
> > 
> > So if there has been a design change WRT this I believe it should
> > have
> > been brought up on dri-devel to have it properly discussed so
> > affected
> > drivers could agree on the different options.
> > 
> > Perhaps Christian can enlighten us here. Christian?
> 
> There are multiple occasions where we assume that BOs in the system 
> domain are not accessible by the GPU, swapout and teardown are just
> two 
> examples.
> 

At swapout we *do* wait for idle after moving to system; it relies on the
swap_notifier to unbind. That's why the swap_notifier is there, so swapout
is working perfectly fine.


> When Dave reorganized the buffer move code we also had to insert
> waits 
> for moves to complete for anything which goes into the SYSTEM domain.
> 
> Apart from that it certainly makes sense to have that restriction. 
> Memory which is accessed by the hardware and not directly evictable
> must 
> be accounted differently.

Could you elaborate a bit on this? From a swapout point of view, it
looks to me like SYSTEM is treated just like TT by TTM? Or is the
accounting you mention something amdgpu-specific and more related to
the amd GEM domains than to the TTM memory types?

Note that TTM was never designed to deal with GPU binding, but to
provide a set of placements or memory-types where the memory can be
mapped by the CPU and bound by the GPU. TT was a special case solely
because of the mappable apertures. A bind mechanism had to be provided
for TTM to be able to map TT buffers, and most drivers used that bind
mechanism for convenience.

So now if this is going to be changed, I think we need to understand
why and think this through really thoroughly:

* What is not working and why (the teardown seems to be a trivial fix)?
* How did we end up here?
* What's the cost of fixing that up compared to refactoring the drivers
that rely on bindable system memory?
* What's the justification of a system type at all if it's not GPU-
bindable, meaning it's basically equivalent to swapped-out shmem with
the exception that it's mappable?

It's probably a non-trivial effort to refactor i915 to not use system
for GPU binding, and in that case I think we need some solid
justification for why we need to do that rather than fix up what's not
working with TTM + bindable system.

So could you please elaborate (assuming that the teardown is fixable)
on the other parts that don't work.

Thanks

/Thomas


> 
> Regards,
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > > z
> > 
> 




Thread overview: 18+ messages
2021-10-08 17:31 [PATCH 0/5] drm/vmwgfx: Support module unload and hotunplug Zack Rusin
2021-10-08 17:31 ` [PATCH 1/5] drm/vmwgfx: Remove the deprecated lower mem limit Zack Rusin
2021-10-08 17:31 ` [PATCH 2/5] drm/vmwgfx: Release ttm memory if probe fails Zack Rusin
2021-10-08 17:31 ` [PATCH 3/5] drm/vmwgfx: Fail to initialize on broken configs Zack Rusin
2021-10-08 17:31 ` [PATCH 4/5] drm/vmwgfx: Introduce a new placement for MOB page tables Zack Rusin
2021-10-12 18:57   ` Thomas Hellström
2021-10-13  4:09     ` Zack Rusin
2021-10-08 17:31 ` [PATCH 5/5] drm/vmwgfx: Switch the internal BO's to ttm_bo_type_kernel Zack Rusin
2021-10-08 20:28 ` [PATCH 0/5] drm/vmwgfx: Support module unload and hotunplug Thomas Hellström
2021-10-08 20:40   ` Zack Rusin
2021-10-08 21:13     ` Thomas Hellström
2021-10-11  8:17       ` Christian König
2021-10-11 12:04         ` Thomas Hellström [this message]
2021-10-12  8:27           ` Christian König
2021-10-12  9:10             ` Thomas Hellström
2021-10-12 17:34               ` Zack Rusin
2021-10-13 12:50                 ` Daniel Vetter
2021-10-13 14:56                   ` Zack Rusin
