From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Thomas Gleixner, Peter Zijlstra,
	Ingo Molnar, Michal Hocko, Vlastimil Babka, Linux-MM, Linux-RT-Users,
	LKML, Mel Gorman
Subject: [PATCH 5/9] mm/page_alloc: Batch the accounting updates in the bulk allocator
Date: Thu, 22 Apr 2021 12:14:37 +0100
Message-Id: <20210422111441.24318-6-mgorman@techsingularity.net>
In-Reply-To: <20210422111441.24318-1-mgorman@techsingularity.net>
References: <20210422111441.24318-1-mgorman@techsingularity.net>

Now that zone_statistics is implemented with simple per-cpu counters that
do not require special protection, the bulk allocator's accounting updates
can be batched without adding much complexity from protected RMW updates
or the use of xchg.
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
---
 include/linux/vmstat.h |  8 ++++++++
 mm/page_alloc.c        | 30 +++++++++++++-----------------
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c1d2c316ce7d..9bf194d507e7 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -247,6 +247,14 @@ __count_numa_event(struct zone *zone, enum numa_stat_item item)
 	raw_cpu_inc(pzstats->vm_numa_event[item]);
 }
 
+static inline void
+__count_numa_events(struct zone *zone, enum numa_stat_item item, long delta)
+{
+	struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
+
+	raw_cpu_add(pzstats->vm_numa_event[item], delta);
+}
+
 extern unsigned long sum_zone_node_page_state(int node,
 				enum zone_stat_item item);
 extern unsigned long sum_zone_numa_event_state(int node, enum numa_stat_item item);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9d0f047647e3..cff0f1c98b28 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3411,7 +3411,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
  *
  * Must be called with interrupts disabled.
  */
-static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
+static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
+				   long nr_account)
 {
 #ifdef CONFIG_NUMA
 	enum numa_stat_item local_stat = NUMA_LOCAL;
@@ -3424,12 +3425,12 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 		local_stat = NUMA_OTHER;
 
 	if (zone_to_nid(z) == zone_to_nid(preferred_zone))
-		__count_numa_event(z, NUMA_HIT);
+		__count_numa_events(z, NUMA_HIT, nr_account);
 	else {
-		__count_numa_event(z, NUMA_MISS);
-		__count_numa_event(preferred_zone, NUMA_FOREIGN);
+		__count_numa_events(z, NUMA_MISS, nr_account);
+		__count_numa_events(preferred_zone, NUMA_FOREIGN, nr_account);
 	}
-	__count_numa_event(z, local_stat);
+	__count_numa_events(z, local_stat, nr_account);
 #endif
 }
 
@@ -3475,7 +3476,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
 	if (page) {
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
-		zone_statistics(preferred_zone, zone);
+		zone_statistics(preferred_zone, zone, 1);
 	}
 	local_unlock_irqrestore(&pagesets.lock, flags);
 	return page;
@@ -3536,7 +3537,7 @@ struct page *rmqueue(struct zone *preferred_zone,
 			  get_pcppage_migratetype(page));
 
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
-	zone_statistics(preferred_zone, zone);
+	zone_statistics(preferred_zone, zone, 1);
 	local_irq_restore(flags);
 
 out:
@@ -5019,7 +5020,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	struct alloc_context ac;
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
-	int nr_populated = 0;
+	int nr_populated = 0, nr_account = 0;
 
 	if (unlikely(nr_pages <= 0))
 		return 0;
@@ -5092,15 +5093,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			goto failed_irq;
 			break;
 		}
-
-		/*
-		 * Ideally this would be batched but the best way to do
-		 * that cheaply is to first convert zone_statistics to
-		 * be inaccurate per-cpu counter like vm_events to avoid
-		 * a RMW cycle then do the accounting with IRQs enabled.
-		 */
-		__count_zid_vm_events(PGALLOC, zone_idx(zone), 1);
-		zone_statistics(ac.preferred_zoneref->zone, zone);
+		nr_account++;
 
 		prep_new_page(page, 0, gfp, 0);
 		if (page_list)
@@ -5110,6 +5103,9 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		nr_populated++;
 	}
 
+	__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
+	zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);
+
 	local_unlock_irqrestore(&pagesets.lock, flags);
 
 	return nr_populated;
-- 
2.26.2