From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v2 5/7] mm, page_alloc: cache pageset high and batch in struct zone
To: Michal Hocko
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Pavel Tatashin,
 David Hildenbrand, Oscar Salvador, Joonsoo Kim
References: <20201008114201.18824-1-vbabka@suse.cz> <20201008114201.18824-6-vbabka@suse.cz>
 <20201008123129.GC4967@dhcp22.suse.cz>
From: Vlastimil Babka
Date: Thu, 8 Oct 2020 19:55:02 +0200
In-Reply-To: <20201008123129.GC4967@dhcp22.suse.cz>

On 10/8/20 2:31 PM, Michal Hocko wrote:
> On Thu 08-10-20 13:41:59, Vlastimil Babka wrote:
>> All per-cpu pagesets for a zone use the same high and batch values, that are
>> duplicated there just for performance (locality) reasons.
>> This patch adds the same variables also to struct zone as a shared copy.
>>
>> This will be useful later for making it possible to disable pcplists temporarily
>> by setting the high value to 0, while remembering the values for restoring them
>> later. But we can also immediately benefit from not updating pagesets of all
>> possible cpus in case the newly recalculated values (after sysctl change or
>> memory online/offline) are actually unchanged from the previous ones.
>>
>> Signed-off-by: Vlastimil Babka
>
> Acked-by: Michal Hocko

Thanks!

> I would consider the check flipped with early return more pleasing to my
> eyes but nothing to lose sleep over.

Right, here's the updated patch:

----8<----
From 6ab0f03762d122a896349d5e568f75c20875eb42 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Mon, 7 Sep 2020 14:20:08 +0200
Subject: [PATCH v2 5/7] mm, page_alloc: cache pageset high and batch in struct zone

All per-cpu pagesets for a zone use the same high and batch values, that are
duplicated there just for performance (locality) reasons. This patch adds the
same variables also to struct zone as a shared copy.

This will be useful later for making it possible to disable pcplists temporarily
by setting the high value to 0, while remembering the values for restoring them
later. But we can also immediately benefit from not updating pagesets of all
possible cpus in case the newly recalculated values (after sysctl change or
memory online/offline) are actually unchanged from the previous ones.

Signed-off-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 include/linux/mmzone.h |  6 ++++++
 mm/page_alloc.c        | 16 ++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fb3bf696c05e..c63863794afc 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -470,6 +470,12 @@ struct zone {
 #endif
 	struct pglist_data	*zone_pgdat;
 	struct per_cpu_pageset __percpu *pageset;
+	/*
+	 * the high and batch values are copied to individual pagesets for
+	 * faster access
+	 */
+	int pageset_high;
+	int pageset_batch;
 
 #ifndef CONFIG_SPARSEMEM
 	/*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f33c36312eb5..057baefba8f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5900,6 +5900,9 @@ static void build_zonelists(pg_data_t *pgdat)
  * Other parts of the kernel may not check if the zone is available.
  */
 static void pageset_init(struct per_cpu_pageset *p);
+/* These effectively disable the pcplists in the boot pageset completely */
+#define BOOT_PAGESET_HIGH	0
+#define BOOT_PAGESET_BATCH	1
 static DEFINE_PER_CPU(struct per_cpu_pageset, boot_pageset);
 static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 
@@ -6289,8 +6292,8 @@ static void pageset_init(struct per_cpu_pageset *p)
 	 * need to be as careful as pageset_update() as nobody can access the
 	 * pageset yet.
 	 */
-	pcp->high = 0;
-	pcp->batch = 1;
+	pcp->high = BOOT_PAGESET_HIGH;
+	pcp->batch = BOOT_PAGESET_BATCH;
 }
 
 /*
@@ -6314,6 +6317,13 @@ static void zone_set_pageset_high_and_batch(struct zone *zone)
 		new_batch = max(1UL, 1 * new_batch);
 	}
 
+	if (zone->pageset_high == new_high &&
+	    zone->pageset_batch == new_batch)
+		return;
+
+	zone->pageset_high = new_high;
+	zone->pageset_batch = new_batch;
+
 	for_each_possible_cpu(cpu) {
 		p = per_cpu_ptr(zone->pageset, cpu);
 		pageset_update(&p->pcp, new_high, new_batch);
@@ -6374,6 +6384,8 @@ static __meminit void zone_pcp_init(struct zone *zone)
 	 * offset of a (static) per cpu variable into the per cpu area.
 	 */
 	zone->pageset = &boot_pageset;
+	zone->pageset_high = BOOT_PAGESET_HIGH;
+	zone->pageset_batch = BOOT_PAGESET_BATCH;
 
 	if (populated_zone(zone))
 		printk(KERN_DEBUG "  %s zone: %lu pages, LIFO batch:%u\n",
-- 
2.28.0