* Re: alloc_pages_bulk()
From: Jesper Dangaard Brouer @ 2021-02-22 11:42 UTC
  To: Mel Gorman
  Cc: Chuck Lever, Mel Gorman, Linux NFS Mailing List, linux-mm,
	Jakub Kicinski, brouer, netdev

On Mon, 22 Feb 2021 09:42:56 +0000
Mel Gorman <mgorman@techsingularity.net> wrote:

> On Mon, Feb 15, 2021 at 05:10:38PM +0100, Jesper Dangaard Brouer wrote:
> > 
> > On Mon, 15 Feb 2021 12:00:56 +0000
> > Mel Gorman <mgorman@techsingularity.net> wrote:
> >   
> > > On Thu, Feb 11, 2021 at 01:26:28PM +0100, Jesper Dangaard Brouer wrote:  
> > [...]  
> > >   
> > > > I also suggest the API can return fewer pages than requested, because
> > > > I want to "exit"/return if it needs to go into an expensive code path
> > > > (like the buddy allocator or compaction).  I'm assuming we have a flag
> > > > to request this behavior (via gfp_flags or alloc_flags)?
> > > >     
> > > 
> > > The API returns the number of pages placed on a list, so policies
> > > around how aggressively it should allocate the requested number of
> > > pages could be adjusted without changing the API. Passing in policy
> > > requests via gfp_flags may be problematic as most (all?) bits are
> > > already used.  
> > 
> > Well, I was just thinking that I would use GFP_ATOMIC instead of
> > GFP_KERNEL to "communicate" that I don't want this call to take too
> > long (e.g. by sleeping).  I'm not requesting any fancy policy :-)
> >   
> 
> The NFS use case requires opposite semantics
> -- it really needs those allocations to succeed
> https://lore.kernel.org/r/161340498400.7780.962495219428962117.stgit@klimt.1015granger.net.

Sorry, but that is not how I understand the code.

The code is doing exactly what I'm requesting. If alloc_pages_bulk()
doesn't return the expected number of pages, it checks whether others
need to run.  The old code did schedule_timeout(msecs_to_jiffies(500)),
while Chuck's patch changes this to call cond_resched().  Thus, it tries
to avoid blocking the CPU for too long (when allocating many pages).

And the nfsd code seems to handle being interrupted: it returns -EINTR
when signal_pending(current) is set.  Thus, the nfsd code seems able to
handle the page allocations failing.
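
To illustrate, the pattern looks roughly like this (a sketch only, not
the actual svc_alloc_arg() code; the alloc_pages_bulk() signature is an
assumption, as the API is still in flux):

	/* Assumed bulk API: returns the number of pages actually
	 * placed on @list, which may be fewer than @nr_pages. */
	unsigned int alloc_pages_bulk(gfp_t gfp, unsigned int nr_pages,
				      struct list_head *list);

	static int fill_pages_sketch(struct list_head *list,
				     unsigned int needed)
	{
		unsigned int got = 0;

		for (;;) {
			got += alloc_pages_bulk(GFP_KERNEL,
						needed - got, list);
			if (got >= needed)
				return 0;
			/* Came up short: honour pending signals
			 * (nfsd returns -EINTR here) and let others
			 * run before retrying, instead of blocking
			 * the CPU. */
			if (signal_pending(current))
				return -EINTR;
			cond_resched();
		}
	}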


> I've asked what code it's based on as it's not 5.11 and I'll iron that
> out first.
>
> Then it might be clearer what the "can fail" semantics should look like.
> I think it would be best to have pairs of patches where the first patch
> adjusts the semantics of the bulk allocator and the second adds a user.
> That will limit the amount of code carried in the implementation.
> When the initial users are in place then the implementation can be
> optimised as the optimisations will require significant refactoring and
> I do not want to refactor multiple times.

I guess I should try to code up the usage in page_pool.

What is the latest patch for adding alloc_pages_bulk()?

The nfsd code (svc_alloc_arg) is called in a context where it can
sleep, and thus can use GFP_KERNEL.  In most cases the page_pool will
be called with GFP_ATOMIC.  I don't think I/page_pool will retry the
call the way Chuck does, as I cannot (re)schedule others to run.
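
Roughly what I have in mind for page_pool (again only a sketch against
the assumed alloc_pages_bulk() above, not the real page_pool patch):

	/* Refill the pool's alloc cache in one shot.  We may be in
	 * (soft)irq context: GFP_ATOMIC, no sleeping, no retry --
	 * accept however many pages the fast path could give us. */
	static unsigned int pp_refill_sketch(struct page **cache,
					     unsigned int budget)
	{
		LIST_HEAD(list);
		unsigned int i, got;

		got = alloc_pages_bulk(GFP_ATOMIC, budget, &list);

		for (i = 0; i < got; i++) {
			struct page *page;

			page = list_first_entry(&list, struct page, lru);
			list_del(&page->lru);
			cache[i] = page;
		}
		return got;
	}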

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



* Re: alloc_pages_bulk()
From: Mel Gorman @ 2021-02-22 14:08 UTC
  To: Jesper Dangaard Brouer
  Cc: Chuck Lever, Mel Gorman, Linux NFS Mailing List, linux-mm,
	Jakub Kicinski, netdev

On Mon, Feb 22, 2021 at 12:42:46PM +0100, Jesper Dangaard Brouer wrote:
> On Mon, 22 Feb 2021 09:42:56 +0000
> Mel Gorman <mgorman@techsingularity.net> wrote:
> 
> > On Mon, Feb 15, 2021 at 05:10:38PM +0100, Jesper Dangaard Brouer wrote:
> > > 
> > > On Mon, 15 Feb 2021 12:00:56 +0000
> > > Mel Gorman <mgorman@techsingularity.net> wrote:
> > >   
> > > > On Thu, Feb 11, 2021 at 01:26:28PM +0100, Jesper Dangaard Brouer wrote:  
> > > [...]  
> > > >   
> > > > > I also suggest the API can return fewer pages than requested, because
> > > > > I want to "exit"/return if it needs to go into an expensive code path
> > > > > (like the buddy allocator or compaction).  I'm assuming we have a flag
> > > > > to request this behavior (via gfp_flags or alloc_flags)?
> > > > >     
> > > > 
> > > > The API returns the number of pages placed on a list, so policies
> > > > around how aggressively it should allocate the requested number of
> > > > pages could be adjusted without changing the API. Passing in policy
> > > > requests via gfp_flags may be problematic as most (all?) bits are
> > > > already used.  
> > > 
> > > Well, I was just thinking that I would use GFP_ATOMIC instead of
> > > GFP_KERNEL to "communicate" that I don't want this call to take too
> > > long (e.g. by sleeping).  I'm not requesting any fancy policy :-)
> > >   
> > 
> > The NFS use case requires opposite semantics
> > -- it really needs those allocations to succeed
> > https://lore.kernel.org/r/161340498400.7780.962495219428962117.stgit@klimt.1015granger.net.
> 
> Sorry, but that is not how I understand the code.
> 
> The code is doing exactly what I'm requesting. If alloc_pages_bulk()
> doesn't return the expected number of pages, it checks whether others
> need to run.  The old code did schedule_timeout(msecs_to_jiffies(500)),
> while Chuck's patch changes this to call cond_resched().  Thus, it tries
> to avoid blocking the CPU for too long (when allocating many pages).
> 
> And the nfsd code seems to handle being interrupted: it returns -EINTR
> when signal_pending(current) is set.  Thus, the nfsd code seems able to
> handle the page allocations failing.
> 

I'm waiting to find out exactly what NFSD is currently doing, as the
code in 5.11 is not the same as what Chuck was coding against, so I'm
not 100% certain how it currently works.

> 
> > I've asked what code it's based on as it's not 5.11 and I'll iron that
> > out first.
> >
> > Then it might be clearer what the "can fail" semantics should look like.
> > I think it would be best to have pairs of patches where the first patch
> > adjusts the semantics of the bulk allocator and the second adds a user.
> > That will limit the amount of code carried in the implementation.
> > When the initial users are in place then the implementation can be
> > optimised as the optimisations will require significant refactoring and
> > I do not want to refactor multiple times.
> 
> I guess I should try to code up the usage in page_pool.
> 
> What is the latest patch for adding alloc_pages_bulk()?
> 

There isn't a usable latest version until I reconcile the nfsd caller.
The only major change in the API right now is dropping the order
parameter; it handles order-0 pages only.
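
For reference, dropping order from the prototype in the earlier diff
leaves something along these lines (the remaining parameters are an
assumption; the signature is still being reworked):

	/* Order-0 pages only; returns the number of pages placed
	 * on @page_list. */
	unsigned int __alloc_pages_bulk_nodemask(gfp_t gfp_mask,
						 int preferred_nid,
						 nodemask_t *nodemask,
						 unsigned int nr_pages,
						 struct list_head *page_list);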

-- 
Mel Gorman
SUSE Labs


* Re: alloc_pages_bulk()
From: Jesper Dangaard Brouer @ 2021-02-22 20:44 UTC
  To: Mel Gorman
  Cc: Chuck Lever, Mel Gorman, Linux NFS Mailing List, linux-mm,
	brouer, netdev

On Mon, 15 Feb 2021 12:06:09 +0000
Mel Gorman <mgorman@techsingularity.net> wrote:

> On Thu, Feb 11, 2021 at 04:20:31PM +0000, Chuck Lever wrote:
> > > On Feb 11, 2021, at 4:12 AM, Mel Gorman <mgorman@techsingularity.net> wrote:
> > > 
> > > <SNIP>
> > > 
> > > Parameters to __rmqueue_pcplist are garbage as the parameter order changed.
> > > I'm surprised it didn't blow up in a spectacular fashion. Again, this
> > > hasn't been near any testing and passing a list with high orders to
> > > free_pages_bulk() will corrupt lists too. Mostly it's a curiosity to see
> > > if there is justification for reworking the allocator to fundamentally
> > > deal in batches and then feed batches to pcp lists and the bulk allocator
> > > while leaving the normal GFP API as single page "batches". While that
> > > would be ideal, it's relatively high risk for regressions. There is still
> > > some scope for adding a basic bulk allocator before considering a major
> > > refactoring effort.
> > > 
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index f8353ea7b977..8f3fe7de2cf7 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -5892,7 +5892,7 @@ __alloc_pages_bulk_nodemask(gfp_t gfp_mask, unsigned int order,
> > > 	pcp_list = &pcp->lists[migratetype];
> > > 
> > > 	while (nr_pages) {
> > > -		page = __rmqueue_pcplist(zone, gfp_mask, migratetype,
> > > +		page = __rmqueue_pcplist(zone, migratetype, alloc_flags,
> > > 								pcp, pcp_list);
> > > 		if (!page)
> > > 			break;  
> > 
> > The NFS server is considerably more stable now. Thank you!
> >   
> 
> Thanks for testing!

I've done some testing here:
 https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org

Performance summary:
 - Before: 3,677,958 pps
 - After : 4,066,028 pps
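
In time per packet that is roughly 1/3,677,958 ≈ 272 ns before versus
1/4,066,028 ≈ 246 ns after, i.e. about 26 ns (~10%) saved per packet.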

I'll describe/show the page_pool changes tomorrow.
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



