From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig,
	Alexander Duyck, Matthew Wilcox, LKML, Linux-Net, Linux-MM,
	Linux-NFS, Mel Gorman
Subject: [PATCH 3/7] mm/page_alloc: Add a bulk page allocator
Date: Fri, 12 Mar 2021 15:43:27 +0000
Message-Id: <20210312154331.32229-4-mgorman@techsingularity.net>
In-Reply-To: <20210312154331.32229-1-mgorman@techsingularity.net>
References: <20210312154331.32229-1-mgorman@techsingularity.net>
This patch adds a new page allocator interface via alloc_pages_bulk and
__alloc_pages_bulk. A caller requests a number of pages to be allocated
and added to a list. They can be freed in bulk using free_pages_bulk().

The API is not guaranteed to return the requested number of pages and
may fail if the preferred allocation zone has limited free memory, the
cpuset changes during the allocation or page debugging decides to fail
an allocation. It's up to the caller to request more pages in batch
if necessary.

Note that this implementation is not very efficient and could be
improved but it would require refactoring. The intent is to make it
available early to determine what semantics are required by different
callers. Once the full semantics are nailed down, it can be refactored.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
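For illustration, a minimal sketch of how a caller might use the API,
since the allocator may return fewer pages than requested and the
caller is expected to retry in batch. The refill_pages() and
consume_pages() names below are hypothetical and not part of this
patch:

	/*
	 * Hypothetical caller: top up a caller-owned list to "want"
	 * pages. alloc_pages_bulk() may return fewer pages than
	 * requested, so retry a bounded number of times; fail only
	 * if no page at all was allocated.
	 */
	static int refill_pages(struct list_head *list, unsigned long want)
	{
		unsigned long have = 0;
		int tries;

		for (tries = 0; tries < 3 && have < want; tries++)
			have += alloc_pages_bulk(GFP_KERNEL, want - have, list);

		return have ? 0 : -ENOMEM;
	}

	static void refill_example(void)
	{
		LIST_HEAD(pages);

		if (!refill_pages(&pages, 32))
			consume_pages(&pages);	/* hypothetical consumer */

		/* Whatever remains on the list is released in one call. */
		free_pages_bulk(&pages);
	}
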
 include/linux/gfp.h |  12 +++++
 mm/page_alloc.c     | 116 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 128 insertions(+)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0a88f84b08f4..e2cd98dba72e 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -518,6 +518,17 @@ static inline int arch_make_page_accessible(struct page *page)
 struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 						nodemask_t *nodemask);
 
+int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+				nodemask_t *nodemask, int nr_pages,
+				struct list_head *list);
+
+/* Bulk allocate order-0 pages */
+static inline unsigned long
+alloc_pages_bulk(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
+{
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list);
+}
+
 /*
  * Allocate pages, preferring the node given as nid. The node must be valid and
  * online. For more general interface, see alloc_pages_node().
@@ -581,6 +592,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
 
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
+extern void free_pages_bulk(struct list_head *list);
 
 struct page_frag_cache;
 extern void __page_frag_cache_drain(struct page *page, unsigned int count);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 880b1d6368bd..f48f94375b66 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4436,6 +4436,21 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
 	}
 }
 
+/* Drop reference counts and free order-0 pages from a list. */
+void free_pages_bulk(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		trace_mm_page_free_batched(page);
+		if (put_page_testzero(page)) {
+			list_del(&page->lru);
+			__free_pages_ok(page, 0, FPI_NONE);
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(free_pages_bulk);
+
 static inline unsigned int
 gfp_to_alloc_flags(gfp_t gfp_mask)
 {
@@ -4963,6 +4978,107 @@ static inline bool prepare_alloc_pages(gfp_t gfp, unsigned int order,
 	return true;
 }
 
+/*
+ * This is a batched version of the page allocator that attempts to
+ * allocate nr_pages quickly from the preferred zone and add them to list.
+ *
+ * Returns the number of pages allocated.
+ */
+int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+			nodemask_t *nodemask, int nr_pages,
+			struct list_head *alloc_list)
+{
+	struct page *page;
+	unsigned long flags;
+	struct zone *zone;
+	struct zoneref *z;
+	struct per_cpu_pages *pcp;
+	struct list_head *pcp_list;
+	struct alloc_context ac;
+	gfp_t alloc_gfp;
+	unsigned int alloc_flags;
+	int allocated = 0;
+
+	if (WARN_ON_ONCE(nr_pages <= 0))
+		return 0;
+
+	if (nr_pages == 1)
+		goto failed;
+
+	/* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
+	if (!prepare_alloc_pages(gfp, 0, preferred_nid, nodemask, &ac,
+				 &alloc_gfp, &alloc_flags))
+		return 0;
+	gfp = alloc_gfp;
+
+	/* Find an allowed local zone that meets the high watermark. */
+	for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
+		unsigned long mark;
+
+		if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&
+		    !__cpuset_zone_allowed(zone, gfp)) {
+			continue;
+		}
+
+		if (nr_online_nodes > 1 && zone != ac.preferred_zoneref->zone &&
+		    zone_to_nid(zone) != zone_to_nid(ac.preferred_zoneref->zone)) {
+			goto failed;
+		}
+
+		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
+		if (zone_watermark_fast(zone, 0, mark,
+				zonelist_zone_idx(ac.preferred_zoneref),
+				alloc_flags, gfp)) {
+			break;
+		}
+	}
+	if (!zone)
+		return 0;
+
+	/* Attempt the batch allocation */
+	local_irq_save(flags);
+	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp_list = &pcp->lists[ac.migratetype];
+
+	while (allocated < nr_pages) {
+		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
+					 pcp, pcp_list);
+		if (!page) {
+			/* Try and get at least one page */
+			if (!allocated)
+				goto failed_irq;
+			break;
+		}
+
+		list_add(&page->lru, alloc_list);
+		allocated++;
+	}
+
+	__count_zid_vm_events(PGALLOC, zone_idx(zone), allocated);
+	zone_statistics(zone, zone);
+
+	local_irq_restore(flags);
+
+	/* Prep pages with IRQs enabled to reduce disabled times */
+	list_for_each_entry(page, alloc_list, lru)
+		prep_new_page(page, 0, gfp, 0);
+
+	return allocated;
+
+failed_irq:
+	local_irq_restore(flags);
+
+failed:
+	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
+	if (page) {
+		list_add(&page->lru, alloc_list);
+		allocated = 1;
+	}
+
+	return allocated;
+}
+EXPORT_SYMBOL_GPL(__alloc_pages_bulk);
+
 /*
  * This is the 'heart' of the zoned buddy allocator.
  */
-- 
2.26.2