From: Mel Gorman <mgorman@techsingularity.net>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Chuck Lever <chuck.lever@oracle.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-Net <netdev@vger.kernel.org>, Linux-MM <linux-mm@kvack.org>,
	Linux-NFS <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator
Date: Fri, 12 Mar 2021 07:32:26 +0000
Message-ID: <20210312073226.GT3697@techsingularity.net>
In-Reply-To: <CAKgT0UcgiS0DpU4weOeVUN7o9dzoP=R20ytWC434sY4FxgQbtg@mail.gmail.com>

On Thu, Mar 11, 2021 at 08:42:16AM -0800, Alexander Duyck wrote:
> > @@ -4919,6 +4934,9 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> >                 struct alloc_context *ac, gfp_t *alloc_mask,
> >                 unsigned int *alloc_flags)
> >  {
> > +       gfp_mask &= gfp_allowed_mask;
> > +       *alloc_mask = gfp_mask;
> > +
> >         ac->highest_zoneidx = gfp_zone(gfp_mask);
> >         ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
> >         ac->nodemask = nodemask;
> 
> It might be better to pull this and the change from the bottom out
> into a separate patch. I was reviewing this and when I hit the bottom
> I apparently had the same question other reviewers had, wondering if
> it was intentional. Splitting it out would make it easier to review.
> 

Done. I felt it was obvious from context that the paths were sharing
code, and splitting it out felt like patch-count stuffing. Still, you're
the second person to point it out, so it's now a separate patch in v4.

> > @@ -4960,6 +4978,104 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> >         return true;
> >  }
> >
> > +/*
> > + * This is a batched version of the page allocator that attempts to
> > + * allocate nr_pages quickly from the preferred zone and add them to list.
> > + *
> > + * Returns the number of pages allocated.
> > + */
> > +int __alloc_pages_bulk_nodemask(gfp_t gfp_mask, int preferred_nid,
> > +                       nodemask_t *nodemask, int nr_pages,
> > +                       struct list_head *alloc_list)
> > +{
> > +       struct page *page;
> > +       unsigned long flags;
> > +       struct zone *zone;
> > +       struct zoneref *z;
> > +       struct per_cpu_pages *pcp;
> > +       struct list_head *pcp_list;
> > +       struct alloc_context ac;
> > +       gfp_t alloc_mask;
> > +       unsigned int alloc_flags;
> > +       int alloced = 0;
> > +
> > +       if (nr_pages == 1)
> > +               goto failed;
> 
> I might change this to "<= 1" just to cover the case where somebody
> messed something up and passed a negative value.
> 

I put in a WARN_ON_ONCE check that returns 0 allocated pages. It should
only trigger during development of a new user, but better safe than
sorry. It's an open question whether the maximum nr_pages should be
clamped, but stupidly large values will either fail the watermark check
or wrap and hit the <= 0 check. It's still possible the zone would hit a
dangerously low level of free pages, but that is no different to a user
calling __alloc_pages_nodemask a stupidly large number of times.
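
As a sketch, the guard could look something like this (untested, and its
exact placement relative to the nr_pages == 1 fallback is my
assumption):

	/* Reject nonsensical requests loudly; this should only fire
	 * while a new caller is being developed. */
	if (WARN_ON_ONCE(nr_pages <= 0))
		return 0;

	/* A single page is cheaper via the regular path. */
	if (nr_pages == 1)
		goto failed;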

> > +
> > +       /* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
> > +       if (!prepare_alloc_pages(gfp_mask, 0, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
> > +               return 0;
> > +       gfp_mask = alloc_mask;
> > +
> > +       /* Find an allowed local zone that meets the high watermark. */
> > +       for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
> > +               unsigned long mark;
> > +
> > +               if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&
> > +                   !__cpuset_zone_allowed(zone, gfp_mask)) {
> > +                       continue;
> > +               }
> > +
> > +               if (nr_online_nodes > 1 && zone != ac.preferred_zoneref->zone &&
> > +                   zone_to_nid(zone) != zone_to_nid(ac.preferred_zoneref->zone)) {
> > +                       goto failed;
> > +               }
> > +
> > +               mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
> > +               if (zone_watermark_fast(zone, 0,  mark,
> > +                               zonelist_zone_idx(ac.preferred_zoneref),
> > +                               alloc_flags, gfp_mask)) {
> > +                       break;
> > +               }
> > +       }
> > +       if (!zone)
> > +               return 0;
> > +
> > +       /* Attempt the batch allocation */
> > +       local_irq_save(flags);
> > +       pcp = &this_cpu_ptr(zone->pageset)->pcp;
> > +       pcp_list = &pcp->lists[ac.migratetype];
> > +
> > +       while (alloced < nr_pages) {
> > +               page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
> > +                                                               pcp, pcp_list);
> > +               if (!page)
> > +                       break;
> > +
> > +               list_add(&page->lru, alloc_list);
> > +               alloced++;
> > +       }
> > +
> > +       if (!alloced)
> > +               goto failed_irq;
> 
> Since we already covered the case above verifying that nr_pages is
> greater than one, it might make sense to move this check inside the
> loop for the !page case. Then we only check it when an allocation
> fails.
> 

Yes, good idea; it moves a branch out of the common path and into a very
unlikely one.
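
Roughly, the loop would then look like this (a sketch only, using the
"allocated" name from the rename discussed below):

	while (allocated < nr_pages) {
		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
								pcp, pcp_list);
		if (unlikely(!page)) {
			/* Fall back to a single-page allocation only
			 * if nothing was allocated at all. */
			if (!allocated)
				goto failed_irq;
			break;
		}

		list_add(&page->lru, alloc_list);
		allocated++;
	}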

> > +
> > +       if (alloced) {
> 
> Isn't this redundant? In the previous lines you already checked
> "alloced" was zero before jumping to the label so you shouldn't need a
> second check as it isn't going to change after we already verified it
> is non-zero.
> 

Yes, it is redundant, a left-over artifact from an earlier iteration of
the implementation. It becomes even more obviously redundant once the
!allocated case is checked inside the while loop.
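
With the loop handling the !allocated case, everything after it runs
with allocated > 0, so the statistics update needs no guard (sketch):

	/* Only reached when at least one page was allocated. */
	__count_zid_vm_events(PGALLOC, zone_idx(zone), allocated);
	zone_statistics(zone, zone);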

> Also not a fan of the name "alloced". Maybe nr_alloc or something.
> Trying to make that abbreviation past tense just doesn't read right.
> 

I used "allocated" and created a preparation patch that renames
"alloced" in other parts of the per-cpu allocator so the naming is
consistent.

> > +               __count_zid_vm_events(PGALLOC, zone_idx(zone), alloced);
> > +               zone_statistics(zone, zone);
> > +       }
> > +
> > +       local_irq_restore(flags);
> > +
> > +       /* Prep page with IRQs enabled to reduce disabled times */
> > +       list_for_each_entry(page, alloc_list, lru)
> > +               prep_new_page(page, 0, gfp_mask, 0);
> > +
> > +       return alloced;
> > +
> > +failed_irq:
> > +       local_irq_restore(flags);
> > +
> > +failed:
> > +       page = __alloc_pages_nodemask(gfp_mask, 0, preferred_nid, nodemask);
> > +       if (page) {
> > +               alloced++;
> 
> You could be explicit here and just set alloced to 1 and make this a
> write instead of bothering with the increment. Either that or just
> simplify this and return 1 after the list_add, and return 0 in the
> default case assuming you didn't allocate a page.
> 

The intent was to deal with the case where someone in the future uses
the failed path after a page has already been allocated. I cannot
imagine why that would be done, so I now explicitly use allocated = 1.
I'm still letting it fall through to avoid two return statements in the
failed path. I do not think it really matters either way, but the two
returns felt redundant.
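
For illustration, the tail of the failed path would then be along these
lines (a sketch; the list_add and return were truncated in the quote
above, so their exact form is my assumption):

failed:
	page = __alloc_pages_nodemask(gfp_mask, 0, preferred_nid, nodemask);
	if (page) {
		/* Explicit assignment: this path is only reachable
		 * with an empty list, so do not pretend to accumulate. */
		allocated = 1;
		list_add(&page->lru, alloc_list);
	}

	return allocated;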

Thanks Alexander!

-- 
Mel Gorman
SUSE Labs
