From: Alexander Duyck <alexander.duyck@gmail.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
	Netdev <netdev@vger.kernel.org>,
	hawk@kernel.org, David Miller <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexander Duyck <alexanderduyck@fb.com>
Subject: Re: [net-next PATCH v2] page_pool: Refactor page_pool to enable fragmenting after allocation
Date: Sat, 29 Jan 2022 10:35:06 -0800
Message-ID: <CAKgT0Ucjp3HKOPcjYNGh4ShGQzq7_7dMhQip_1P5Ei-xrv_pLQ@mail.gmail.com>
In-Reply-To: <YfUOTkboCcHok27N@hades>

On Sat, Jan 29, 2022 at 1:52 AM Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
>
> On Sat, Jan 29, 2022 at 05:20:37PM +0800, Yunsheng Lin wrote:
> > On 2022/1/27 22:57, Alexander Duyck wrote:
> > > From: Alexander Duyck <alexanderduyck@fb.com>
> > >
> > > This change is meant to permit a driver to perform "fragmenting" of the
> > > page from within the driver instead of the current model which requires
> > > pre-partitioning the page. The main motivation behind this is to support
> > > use cases where the page will be split up by the driver after DMA instead
> > > of before.
> > >
> > > With this change it becomes possible to start using page pool to replace
> > > some of the existing use cases where multiple references were being used
> > > for a single page, but the number needed was unknown as the size could be
> > > dynamic.
> > >
> > > For example, with this code it would be possible to do something like
> > > the following to handle allocation:
> > >   page = page_pool_alloc_pages();
> > >   if (!page)
> > >     return NULL;
> > >   page_pool_fragment_page(page, DRIVER_PAGECNT_BIAS_MAX);
> > >   rx_buf->page = page;
> > >   rx_buf->pagecnt_bias = DRIVER_PAGECNT_BIAS_MAX;
> > >
> > > Then we would process a received buffer by handling it with:
> > >   rx_buf->pagecnt_bias--;
> > >
> > > Once the page has been fully consumed we could then flush the remaining
> > > instances with:
> > >   if (page_pool_defrag_page(page, rx_buf->pagecnt_bias))
> > >     continue;
> > >   page_pool_put_defragged_page(pool, page -1, !!budget);
> >
> > page_pool_put_defragged_page(pool, page, -1, !!budget);
> >
> > Also, I am not sure exporting the frag count to the driver is a good
> > idea, as the above example seems a little complex. Maybe adding the
> > fragmenting-after-allocation support to an existing driver would be a
> > good way to show whether the API is really a good one.
>
> This is already kind of exposed, since no one limits drivers from calling
> page_pool_atomic_sub_frag_count_return(), right?
> What this patchset does is allow the drivers to actually use it and release
> pages without having to atomically decrement all the refcnt bias.
>
> And I do get the point that a driver might choose to do the refcounting
> internally.  That was the point all along with the fragment support in
> page_pool.  There's a wide variety of interfaces out there and each one
> handles buffers differently.
>
> What I am missing, though, is how this works with the current recycling
> scheme. The driver will still have to make sure that
> page_pool_defrag_page(page, 1) == 0 for that to work, no?

The general idea here is that we are getting away from doing in-driver
recycling and instead letting page pool take care of all of that. That
was the original idea behind page pool; however, the original
implementation was limited to a single use per page.

So most of the legacy code out there has to rely on the
page_ref_count == 1 or page_ref_count == bias trick to determine
whether it can recycle a page. The page pool already takes care of
recycling by returning pages to the pool when page_ref_count == 1.
What we gain by adding the frag count is that drivers can drop their
own ref count tricks and offload that work to the kernel: once
page_pool_defrag_page(page, 1) == 0, the page can go straight into the
page_ref_count == 1 checks and be recycled back into the page pool.
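
To make that flow concrete, here is a minimal sketch of how a driver
could tie the two halves together. The struct driver_rx_buf layout, the
DRIVER_PAGECNT_BIAS_MAX constant, and the function names are
hypothetical, carried over from the example in the commit message;
only page_pool_alloc_pages(), page_pool_fragment_page(),
page_pool_defrag_page() and page_pool_put_defragged_page() are part of
the page pool API being discussed:

  #include <net/page_pool.h>

  /* Hypothetical per-queue buffer bookkeeping, mirroring the commit
   * message example above.
   */
  struct driver_rx_buf {
          struct page *page;
          long pagecnt_bias;
  };

  static struct page *driver_rx_buf_alloc(struct page_pool *pool,
                                          struct driver_rx_buf *rx_buf)
  {
          struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);

          if (!page)
                  return NULL;

          /* Take a large block of fragment references up front; the
           * driver hands one out per received buffer by decrementing
           * pagecnt_bias.
           */
          page_pool_fragment_page(page, DRIVER_PAGECNT_BIAS_MAX);
          rx_buf->page = page;
          rx_buf->pagecnt_bias = DRIVER_PAGECNT_BIAS_MAX;

          return page;
  }

  static void driver_rx_buf_release(struct page_pool *pool,
                                    struct driver_rx_buf *rx_buf,
                                    int budget)
  {
          struct page *page = rx_buf->page;

          /* Return the fragment references that were never handed out.
           * A non-zero result means other fragments are still in flight
           * and the last user will return the page.
           */
          if (page_pool_defrag_page(page, rx_buf->pagecnt_bias))
                  return;

          /* We held the last fragment; hand the page back so the pool
           * can recycle it once page_ref_count == 1.
           */
          page_pool_put_defragged_page(pool, page, -1, !!budget);
  }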

> >
> >
> > >
> > > The general idea is that we want to have the ability to allocate a page
> > > with excess fragment count and then trim off the unneeded fragments.
> > >
> > > Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> > > ---
> > >
> > > v2: Added page_pool_is_last_frag
> > >     Moved comment about CONFIG_PAGE_POOL to page_pool_put_page
> > >     Wrapped statements for page_pool_is_last_frag in parenthesis
> > >
> > >  include/net/page_pool.h |   82 ++++++++++++++++++++++++++++++-----------------
> > >  net/core/page_pool.c    |   23 ++++++-------
> > >  2 files changed, 62 insertions(+), 43 deletions(-)
> > >
> > > diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> > > index 79a805542d0f..fbed91469d42 100644
> > > --- a/include/net/page_pool.h
> > > +++ b/include/net/page_pool.h
> > > @@ -201,21 +201,67 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> > >  }
> > >  #endif
> > >
> > > -void page_pool_put_page(struct page_pool *pool, struct page *page,
> > > -                   unsigned int dma_sync_size, bool allow_direct);
> > > +void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
> > > +                             unsigned int dma_sync_size,
> > > +                             bool allow_direct);
> > >
> > > -/* Same as above but will try to sync the entire area pool->max_len */
> > > -static inline void page_pool_put_full_page(struct page_pool *pool,
> > > -                                      struct page *page, bool allow_direct)
> > > +static inline void page_pool_fragment_page(struct page *page, long nr)
> > > +{
> > > +   atomic_long_set(&page->pp_frag_count, nr);
> > > +}
> > > +
> > > +static inline long page_pool_defrag_page(struct page *page, long nr)
> > > +{
> > > +   long ret;
> > > +
> > > +   /* If nr == pp_frag_count then we are have cleared all remaining
>
> s/are//

Will fix for v3.

Thanks,

Alex


Thread overview: 5+ messages
2022-01-27 14:57 [net-next PATCH v2] page_pool: Refactor page_pool to enable fragmenting after allocation Alexander Duyck
2022-01-29  9:20 ` Yunsheng Lin
2022-01-29  9:52   ` Ilias Apalodimas
2022-01-29 18:35     ` Alexander Duyck [this message]
2022-01-29 18:49       ` Ilias Apalodimas
