From: Daniel Vetter <daniel@ffwll.ch>
To: John Stultz <john.stultz@linaro.org>
Cc: lkml <linux-kernel@vger.kernel.org>,
	"Sandeep Patil" <sspatil@google.com>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	"Ezequiel Garcia" <ezequiel@collabora.com>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"James Jones" <jajones@nvidia.com>,
	"Liam Mark" <lmark@codeaurora.org>,
	"Laura Abbott" <labbott@kernel.org>,
	"Chris Goldsworthy" <cgoldswo@codeaurora.org>,
	"Hridya Valsaraju" <hridya@google.com>,
	"Ørjan Eide" <orjan.eide@arm.com>,
	linux-media <linux-media@vger.kernel.org>,
	"Suren Baghdasaryan" <surenb@google.com>,
	"Daniel Mentz" <danielmentz@google.com>
Subject: Re: [RFC][PATCH 2/3] dma-buf: system_heap: Add pagepool support to system heap
Date: Tue, 2 Feb 2021 15:04:05 +0100
Message-ID: <YBlb1V62cFP9Fz1/@phenom.ffwll.local>
In-Reply-To: <CALAqxLWdq9pKpFLzXmV60LQHpu8BgckDuX1HX5hY4jspHvLK5Q@mail.gmail.com>

On Fri, Jan 22, 2021 at 05:28:32PM -0800, John Stultz wrote:
> On Mon, Dec 21, 2020 at 2:09 PM Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Fri, Dec 18, 2020 at 05:16:56PM -0800, John Stultz wrote:
> > > On Fri, Dec 18, 2020 at 6:36 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > On Thu, Dec 17, 2020 at 11:06:11PM +0000, John Stultz wrote:
> > > > > Reuse/abuse the pagepool code from the network code to speed
> > > > > up allocation performance.
> > > > >
> > > > > This is similar to the ION pagepool usage, but tries to
> > > > > utilize generic code instead of a custom implementation.
> > > >
> > > > We also have one of these in ttm. I think we should have at most one of
> > > > these for the gpu ecosystem overall, maybe as a helper that can be plugged
> > > > into all the places.
> > > >
> > > > Or I'm kinda missing something, which could be since I only glanced at
> > > > yours for a bit. But it's also called page pool for buffer allocations,
> > > > and I don't think there's that many ways to implement that really :-)
> > >
> > > Yea, when I was looking around, the ttm one didn't seem quite as
> > > generic as the networking one, which fit in here more easily.
> >
> > Oops, I didn't look that closely and didn't realize you're reusing the one
> > from net/core/.
> >
> > > The main benefit for the system heap is not so much the pool itself
> > > (the normal page allocator is pretty good) as being able to defer
> > > the free and zero the pages in a background thread, so the pool is
> > > effectively filled with pre-zeroed pages.
> > >
> > > But I'll take another look at the ttm implementation and see if it can
> > > be re-used or the shared code refactored and pulled out somehow.
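
To make that concrete, a rough sketch of the deferred-free-and-zero
idea; pool_add_prezeroed() is a made-up helper standing in for the
pool side, not something from the actual patch:

/*
 * Sketch only: defer frees to a worker that zeroes pages before
 * recycling them, so the allocation path can hand out pre-zeroed
 * pages without paying the memset cost inline.
 */
#include <linux/highmem.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static LIST_HEAD(deferred_list);
static DEFINE_SPINLOCK(deferred_lock);

static void deferred_zero_fn(struct work_struct *work)
{
	struct page *page;

	spin_lock(&deferred_lock);
	while (!list_empty(&deferred_list)) {
		page = list_first_entry(&deferred_list, struct page, lru);
		list_del(&page->lru);
		spin_unlock(&deferred_lock);

		clear_highpage(page);		/* zero outside the lock */
		pool_add_prezeroed(page);	/* hypothetical pool helper */

		spin_lock(&deferred_lock);
	}
	spin_unlock(&deferred_lock);
}
static DECLARE_WORK(deferred_zero_work, deferred_zero_fn);

/* Free path: queue the page instead of freeing it synchronously. */
static void heap_page_free(struct page *page)
{
	spin_lock(&deferred_lock);
	list_add_tail(&page->lru, &deferred_list);
	spin_unlock(&deferred_lock);
	schedule_work(&deferred_zero_work);
}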
> >
> > I think moving the page_pool from net into lib and using it in ttm
> > might also be an option. Lack of a shrinker in the networking one
> > might be a bit of a problem, though.
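
Bolting a shrinker onto a pool would look roughly like this;
pool_total() and pool_reclaim() are made-up helpers standing in for
whatever the pool actually provides:

#include <linux/shrinker.h>

static unsigned long pool_count_objects(struct shrinker *shrink,
					struct shrink_control *sc)
{
	return pool_total();		/* pages currently held */
}

static unsigned long pool_scan_objects(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	/* give up to nr_to_scan pages back to the system */
	return pool_reclaim(sc->nr_to_scan);
}

static struct shrinker pool_shrinker = {
	.count_objects	= pool_count_objects,
	.scan_objects	= pool_scan_objects,
	.seeks		= DEFAULT_SEEKS,
};

/* register_shrinker(&pool_shrinker) at init,
 * unregister_shrinker(&pool_shrinker) at teardown. */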
> 
> Yea. I've been looking at this, trying to see how to abstract out a
> generic pool implementation, but each existing pool has been tweaked
> for its specific use case, so a general abstraction is a bit tough
> right off the bat.
> 
> For example, the ttm pool handles allocations from both alloc_pages
> and dma_alloc in a single pool, whereas the net page_pool only uses
> alloc_pages (though it can pre-map pages via dma_map_page_attrs).
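
For reference, the net/core side of that looks roughly like this (the
device and pool size here are made up):

#include <net/page_pool.h>

static struct page_pool *create_pool(struct device *dev)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP,	/* map at alloc time */
		.order		= 0,
		.pool_size	= 1024,			/* arbitrary */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_BIDIRECTIONAL,
	};

	return page_pool_create(&pp);	/* ERR_PTR() on failure */
}

/*
 * Then:
 *	struct page *page = page_pool_dev_alloc_pages(pool);
 *	dma_addr_t dma = page_pool_get_dma_addr(page);
 *	...
 *	page_pool_put_full_page(pool, page, false);
 */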
> 
> And as you mentioned, the networking page pool is statically sized,
> whereas the ttm pool is dynamic and shrinker-controlled.
> 
> Further, since the ttm pool keeps separate pools of pages set to
> specific caching types, that part is hard to abstract out: we have to
> be able to reset the caching (set_pages_wb()) when shrinking, so the
> caching state would have to be pushed down into the pool attributes
> as well.
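
That is, any shared shrink path ends up needing the caching state,
something like the following (my_pool and pool_take_page() are made
up, and set_pages_wb() is x86-specific):

#include <asm/set_memory.h>
#include <linux/gfp.h>

static unsigned long pool_shrink(struct my_pool *pool, unsigned long nr)
{
	struct page *page;
	unsigned long freed = 0;

	while (freed < nr && (page = pool_take_page(pool))) {
		/* undo WC/UC before the page allocator gets it back */
		if (!pool->cached)
			set_pages_wb(page, 1 << pool->order);
		__free_pages(page, pool->order);
		freed += 1UL << pool->order;
	}
	return freed;
}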
> 
> So far, in my attempts to share an abstraction between the net
> page_pool and the ttm page pool, it seems to make the code complexity
> worse on both sides. So while I'm interested in continuing to try to
> find a way to share code here, I'm not sure it makes sense to hold up
> this series (which is already re-using an existing implementation and
> provides a performance bump in microbenchmarks) for the
> grand-unified-page-pool. Efforts to refactor the ttm pool and net
> page_pool can continue independently, and I'd be happy to move the
> system heap to whatever that ends up being.

The thing is, I'm not sure sharing code with net/core is a really good
idea; at least it seems like we have some impedance mismatch with the
ttm pool. And going forward I expect we'll need alignment between the
pools/caches under drm and the dma-buf heap pools a lot more than
between dma-buf and net/core.

So this feels a bit like code sharing for code sharing's sake, and not
where it makes sense. Expecting net/core and the gpu stacks to have
the exact same needs for a page pool allocator has good chances to
bite us in the long run.
-Daniel

> 
> thanks
> -john

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
