From: Andrew Morton <akpm@linux-foundation.org>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Chuck Lever <chuck.lever@oracle.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-Net <netdev@vger.kernel.org>, Linux-MM <linux-mm@kvack.org>,
	Linux-NFS <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator
Date: Wed, 10 Mar 2021 15:46:50 -0800
Message-ID: <20210310154650.ad9760cd7cb9ac4acccf77ee@linux-foundation.org>
In-Reply-To: <20210310104618.22750-3-mgorman@techsingularity.net>

On Wed, 10 Mar 2021 10:46:15 +0000 Mel Gorman <mgorman@techsingularity.net> wrote:

> This patch adds a new page allocator interface via alloc_pages_bulk,
> and __alloc_pages_bulk_nodemask. A caller requests a number of pages
> to be allocated and added to a list. They can be freed in bulk using
> free_pages_bulk().

Why am I surprised we don't already have this.
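Just to make sure I'm reading the proposed interface correctly, a minimal
caller sketch (only the two exported functions are from the patch; the
rest, including the request size, is made up for illustration):

LIST_HEAD(pages);
int allocated;

/* Ask for up to 16 order-0 pages, preferring the local node */
allocated = __alloc_pages_bulk_nodemask(GFP_KERNEL, numa_mem_id(),
					NULL, 16, &pages);

/* ... use the 'allocated' pages linked on 'pages' ... */

free_pages_bulk(&pages);

As the changelog notes below, the call may legitimately return fewer
pages than requested, so a caller needing an exact count has to loop.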

> The API is not guaranteed to return the requested number of pages and
> may fail if the preferred allocation zone has limited free memory, the
> cpuset changes during the allocation or page debugging decides to fail
> an allocation. It's up to the caller to request more pages in batch
> if necessary.
> 
> Note that this implementation is not very efficient and could be improved
> but it would require refactoring. The intent is to make it available early
> to determine what semantics are required by different callers. Once the
> full semantics are nailed down, it can be refactored.
> 
> ...
>
> +/* Drop reference counts and free order-0 pages from a list. */
> +void free_pages_bulk(struct list_head *list)
> +{
> +	struct page *page, *next;
> +
> +	list_for_each_entry_safe(page, next, list, lru) {
> +		trace_mm_page_free_batched(page);
> +		if (put_page_testzero(page)) {
> +			list_del(&page->lru);
> +			__free_pages_ok(page, 0, FPI_NONE);
> +		}
> +	}
> +}
> +EXPORT_SYMBOL_GPL(free_pages_bulk);

I expect that batching games are planned in here as well?
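If so, one possible shape (sketch only - this just reuses the existing
free_unref_page_list() batching rather than calling __free_pages_ok()
once per page):

void free_pages_bulk(struct list_head *list)
{
	struct page *page, *next;
	LIST_HEAD(free_list);

	list_for_each_entry_safe(page, next, list, lru) {
		trace_mm_page_free_batched(page);
		/* Collect the pages whose refcount dropped to zero */
		if (put_page_testzero(page))
			list_move(&page->lru, &free_list);
	}

	/* One pass through the pcp machinery for the whole batch */
	free_unref_page_list(&free_list);
}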

>  static inline unsigned int
>  gfp_to_alloc_flags(gfp_t gfp_mask)
>  {
> @@ -4919,6 +4934,9 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  		struct alloc_context *ac, gfp_t *alloc_mask,
>  		unsigned int *alloc_flags)
>  {
> +	gfp_mask &= gfp_allowed_mask;
> +	*alloc_mask = gfp_mask;
> +
>  	ac->highest_zoneidx = gfp_zone(gfp_mask);
>  	ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
>  	ac->nodemask = nodemask;
> @@ -4960,6 +4978,99 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  	return true;
>  }
>  
> +/*
> + * This is a batched version of the page allocator that attempts to
> + * allocate nr_pages quickly from the preferred zone and add them to list.
> + */

Documentation is rather lame.  Returns number of pages allocated...
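Something along these lines would already be an improvement (wording is
mine, based on the changelog):

/**
 * __alloc_pages_bulk_nodemask - Allocate a number of order-0 pages to a list
 * @gfp_mask: GFP flags for the allocation
 * @preferred_nid: The preferred NUMA node ID to allocate from
 * @nodemask: Set of nodes to allocate from, may be NULL
 * @nr_pages: The number of pages desired on the list
 * @alloc_list: The list to receive the allocated pages
 *
 * This is a batched version of the page allocator that attempts to
 * allocate nr_pages order-0 pages quickly from the preferred zone and
 * link them on @alloc_list.  The caller frees them with free_pages_bulk().
 *
 * Returns the number of pages added to @alloc_list, which may be fewer
 * than requested (including zero) if free memory in the preferred zone
 * is low, the cpuset changes during the allocation, or page debugging
 * decides to fail an allocation.
 */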

> +int __alloc_pages_bulk_nodemask(gfp_t gfp_mask, int preferred_nid,
> +			nodemask_t *nodemask, int nr_pages,
> +			struct list_head *alloc_list)
> +{
> +	struct page *page;
> +	unsigned long flags;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	struct per_cpu_pages *pcp;
> +	struct list_head *pcp_list;
> +	struct alloc_context ac;
> +	gfp_t alloc_mask;
> +	unsigned int alloc_flags;
> +	int alloced = 0;
> +
> +	if (nr_pages == 1)
> +		goto failed;
> +
> +	/* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
> +	if (!prepare_alloc_pages(gfp_mask, 0, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
> +		return 0;
> +	gfp_mask = alloc_mask;
> +
> +	/* Find an allowed local zone that meets the high watermark. */
> +	for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
> +		unsigned long mark;
> +
> +		if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&
> +		    !__cpuset_zone_allowed(zone, gfp_mask)) {
> +			continue;
> +		}
> +
> +		if (nr_online_nodes > 1 && zone != ac.preferred_zoneref->zone &&
> +		    zone_to_nid(zone) != zone_to_nid(ac.preferred_zoneref->zone)) {
> +			goto failed;
> +		}
> +
> +		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
> +		if (zone_watermark_fast(zone, 0,  mark,
> +				zonelist_zone_idx(ac.preferred_zoneref),
> +				alloc_flags, gfp_mask)) {
> +			break;
> +		}
> +	}

I suspect the above was stolen from elsewhere and that some code
commonification is planned.


> +	if (!zone)
> +		return 0;
> +
> +	/* Attempt the batch allocation */
> +	local_irq_save(flags);
> +	pcp = &this_cpu_ptr(zone->pageset)->pcp;
> +	pcp_list = &pcp->lists[ac.migratetype];
> +
> +	while (alloced < nr_pages) {
> +		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
> +								pcp, pcp_list);
> +		if (!page)
> +			break;
> +
> +		prep_new_page(page, 0, gfp_mask, 0);

I wonder if it would be worth running prep_new_page() in a second pass,
after reenabling interrupts.

Speaking of which, will the realtime people get upset about the
irqs-off latency?  How many pages are we talking about here?
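To sketch the second-pass idea (untested, shown only for the shape - it
would also shrink the irqs-off window to the pcp list manipulation):

	/* Pass 1: take pages off the pcp lists with interrupts disabled */
	local_irq_save(flags);
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
	pcp_list = &pcp->lists[ac.migratetype];
	while (alloced < nr_pages) {
		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
					 pcp, pcp_list);
		if (!page)
			break;
		list_add(&page->lru, alloc_list);
		alloced++;
	}
	if (alloced) {
		__count_zid_vm_events(PGALLOC, zone_idx(zone), alloced);
		zone_statistics(zone, zone);
	}
	local_irq_restore(flags);

	/* Pass 2: prepare the pages with interrupts enabled again */
	list_for_each_entry(page, alloc_list, lru)
		prep_new_page(page, 0, gfp_mask, 0);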

> +		list_add(&page->lru, alloc_list);
> +		alloced++;
> +	}
> +
> +	if (!alloced)
> +		goto failed_irq;
> +
> +	if (alloced) {
> +		__count_zid_vm_events(PGALLOC, zone_idx(zone), alloced);
> +		zone_statistics(zone, zone);
> +	}
> +
> +	local_irq_restore(flags);
> +
> +	return alloced;
> +
> +failed_irq:
> +	local_irq_restore(flags);
> +
> +failed:

Might we need some counter to show how often this path happens?
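Something as simple as a vmstat event in the fallback path would do,
e.g. (PGALLOC_BULK_FALLBACK is an invented name and would need a new
vm_event_item entry):

	/* Hypothetical: count how often the bulk path falls back */
	count_vm_event(PGALLOC_BULK_FALLBACK);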

> +	page = __alloc_pages_nodemask(gfp_mask, 0, preferred_nid, nodemask);
> +	if (page) {
> +		alloced++;
> +		list_add(&page->lru, alloc_list);
> +	}
> +
> +	return alloced;
> +}
> +EXPORT_SYMBOL_GPL(__alloc_pages_bulk_nodemask);
> +


Thread overview: 37+ messages
2021-03-10 10:46 [PATCH 0/5] Introduce a bulk order-0 page allocator with two in-tree users Mel Gorman
2021-03-10 10:46 ` [PATCH 1/5] SUNRPC: Set rq_page_end differently Mel Gorman
2021-03-10 10:46 ` [PATCH 2/5] mm/page_alloc: Add a bulk page allocator Mel Gorman
2021-03-10 23:46   ` Andrew Morton [this message]
2021-03-11  8:42     ` Mel Gorman
2021-03-12 11:46       ` Jesper Dangaard Brouer
2021-03-12 13:44         ` Mel Gorman
2021-03-12 14:58         ` Matthew Wilcox
2021-03-12 16:03           ` Mel Gorman
2021-03-12 21:08             ` Matthew Wilcox
2021-03-13 13:16               ` Mel Gorman
2021-03-13 16:39                 ` Matthew Wilcox
2021-03-13 16:56                   ` Chuck Lever III
2021-03-13 19:33                     ` Matthew Wilcox
2021-03-14 12:52                       ` Mel Gorman
2021-03-14 15:22                         ` Chuck Lever III
2021-03-15 10:42                           ` Mel Gorman
2021-03-15 16:42                             ` Jesper Dangaard Brouer
2021-03-19 17:10                         ` Jesper Dangaard Brouer
2021-03-12 12:43   ` Matthew Wilcox
2021-03-12 14:15     ` Mel Gorman
2021-03-10 10:46 ` [PATCH 3/5] SUNRPC: Refresh rq_pages using " Mel Gorman
2021-03-10 10:46 ` [PATCH 4/5] net: page_pool: refactor dma_map into own function page_pool_dma_map Mel Gorman
2021-03-10 10:46 ` [PATCH 5/5] net: page_pool: use alloc_pages_bulk in refill code path Mel Gorman
2021-03-10 23:47 ` [PATCH 0/5] Introduce a bulk order-0 page allocator with two in-tree users Andrew Morton
2021-03-11  8:48   ` Mel Gorman
2021-03-12 15:10     ` Matthew Wilcox
  -- strict thread matches above, loose matches on Subject: below --
2021-03-11 11:49 [PATCH 0/5 v3] " Mel Gorman
2021-03-11 11:49 ` [PATCH 2/5] mm/page_alloc: Add a bulk page allocator Mel Gorman
2021-03-11 16:42   ` Alexander Duyck
2021-03-11 16:42     ` Alexander Duyck
2021-03-12  7:32     ` Mel Gorman
2021-03-01 16:11 [PATCH 0/5] Introduce a bulk order-0 page allocator with two in-tree users Mel Gorman
2021-03-01 16:11 ` [PATCH 2/5] mm/page_alloc: Add a bulk page allocator Mel Gorman
2021-03-09 17:12   ` Christoph Hellwig
2021-03-09 18:10     ` Mel Gorman
2021-03-10 11:04   ` Shay Agroskin
2021-03-10 11:38     ` Mel Gorman
2021-03-12 12:01       ` Jesper Dangaard Brouer
