From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM <linux-mm@kvack.org>
Cc: Linux-RT-Users, LKML, Chuck Lever, Jesper Dangaard Brouer, Matthew Wilcox, Mel Gorman
Subject: [PATCH 5/6] mm/page_alloc: Batch the accounting updates in the bulk allocator
Date: Mon, 29 Mar 2021 13:06:47 +0100
Message-Id: <20210329120648.19040-6-mgorman@techsingularity.net>
In-Reply-To: <20210329120648.19040-1-mgorman@techsingularity.net>
References: <20210329120648.19040-1-mgorman@techsingularity.net>

Now that the zone_statistics are simple counters that do not require
special protection, the bulk allocator accounting updates can be batch
updated without requiring IRQs to be disabled.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
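Illustration only, not part of the diff below: the change keeps a local
nr_account inside the allocation loop and folds it into the PGALLOC and
NUMA counters once per call, so a single per-cpu add replaces one update
per page and the accounting no longer needs IRQs disabled. A minimal
userspace sketch of that pattern, with assumed stand-in names
(bulk_alloc_demo, stat_pgalloc, count_events) in place of
__alloc_pages_bulk, the PGALLOC counter and __count_numa_events:

#include <stdio.h>

static long stat_pgalloc;       /* stand-in for a per-cpu event counter */

/* Fold a batched delta into the counter in one update, cf. raw_cpu_add(). */
static void count_events(long delta)
{
        stat_pgalloc += delta;
}

static int bulk_alloc_demo(int nr_pages)
{
        int nr_populated = 0, nr_account = 0;
        int i;

        for (i = 0; i < nr_pages; i++) {
                /* pretend a page was allocated successfully ... */
                nr_account++;   /* per page: only bump a local count */
                nr_populated++;
        }

        /* ... and account for the whole batch with one update after the loop */
        count_events(nr_account);
        return nr_populated;
}

int main(void)
{
        printf("allocated %d pages, stat_pgalloc=%ld\n",
               bulk_alloc_demo(8), stat_pgalloc);
        return 0;
}

In the kernel the single batched add is only cheap because the series
first converted these to inaccurate per-cpu counters, avoiding an RMW
cycle under IRQ protection, as the comment removed from
__alloc_pages_bulk below explains.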
 include/linux/vmstat.h |  8 ++++++++
 mm/page_alloc.c        | 30 +++++++++++++-----------------
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index dde4dec4e7dd..8473b8fa9756 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -246,6 +246,14 @@ __count_numa_event(struct zone *zone, enum numa_stat_item item)
 	raw_cpu_inc(pzstats->vm_numa_event[item]);
 }
 
+static inline void
+__count_numa_events(struct zone *zone, enum numa_stat_item item, long delta)
+{
+	struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
+
+	raw_cpu_add(pzstats->vm_numa_event[item], delta);
+}
+
 extern void __count_numa_event(struct zone *zone, enum numa_stat_item item);
 extern unsigned long sum_zone_node_page_state(int node,
 				enum zone_stat_item item);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7eb48632bcac..32c64839c145 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3398,7 +3398,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
  *
  * Must be called with interrupts disabled.
  */
-static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
+static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
+				   long nr_account)
 {
 #ifdef CONFIG_NUMA
 	enum numa_stat_item local_stat = NUMA_LOCAL;
@@ -3411,12 +3412,12 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 		local_stat = NUMA_OTHER;
 
 	if (zone_to_nid(z) == zone_to_nid(preferred_zone))
-		__count_numa_event(z, NUMA_HIT);
+		__count_numa_events(z, NUMA_HIT, nr_account);
 	else {
-		__count_numa_event(z, NUMA_MISS);
-		__count_numa_event(preferred_zone, NUMA_FOREIGN);
+		__count_numa_events(z, NUMA_MISS, nr_account);
+		__count_numa_events(preferred_zone, NUMA_FOREIGN, nr_account);
 	}
-	__count_numa_event(z, local_stat);
+	__count_numa_events(z, local_stat, nr_account);
 #endif
 }
 
@@ -3462,7 +3463,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
 	if (page) {
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
-		zone_statistics(preferred_zone, zone);
+		zone_statistics(preferred_zone, zone, 1);
 	}
 	local_unlock_irqrestore(&pagesets.lock, flags);
 	return page;
@@ -3523,7 +3524,7 @@ struct page *rmqueue(struct zone *preferred_zone,
 				get_pcppage_migratetype(page));
 
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
-	zone_statistics(preferred_zone, zone);
+	zone_statistics(preferred_zone, zone, 1);
 	local_irq_restore(flags);
 
 out:
@@ -5006,7 +5007,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	struct alloc_context ac;
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags;
-	int nr_populated = 0;
+	int nr_populated = 0, nr_account = 0;
 
 	if (unlikely(nr_pages <= 0))
 		return 0;
@@ -5079,15 +5080,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 				goto failed_irq;
 			break;
 		}
-
-		/*
-		 * Ideally this would be batched but the best way to do
-		 * that cheaply is to first convert zone_statistics to
-		 * be inaccurate per-cpu counter like vm_events to avoid
-		 * a RMW cycle then do the accounting with IRQs enabled.
-		 */
-		__count_zid_vm_events(PGALLOC, zone_idx(zone), 1);
-		zone_statistics(ac.preferred_zoneref->zone, zone);
+		nr_account++;
 
 		prep_new_page(page, 0, gfp, 0);
 		if (page_list)
@@ -5097,6 +5090,9 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		nr_populated++;
 	}
 
+	__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
+	zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);
+
 	local_unlock_irqrestore(&pagesets.lock, flags);
 
 	return nr_populated;
-- 
2.26.2