From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH 6/6] mm/page_alloc: Introduce vm.percpu_pagelist_high_fraction
From: Vlastimil Babka <vbabka@suse.cz>
To: Mel Gorman
Cc: Andrew Morton, Hillf Danton, Dave Hansen, Michal Hocko, LKML, Linux-MM
Date: Fri, 28 May 2021 16:38:04 +0200
Message-ID: <3a6670d3-63eb-f97a-62a0-ec752d933a13@suse.cz>
In-Reply-To: <20210528125334.GP30378@techsingularity.net>
References: <20210525080119.5455-1-mgorman@techsingularity.net>
 <20210525080119.5455-7-mgorman@techsingularity.net>
 <018c4b99-81a5-bc12-03cd-662a938ef05a@suse.cz>
 <20210528125334.GP30378@techsingularity.net>

On 5/28/21 2:53 PM, Mel Gorman wrote:
> On Fri, May 28, 2021 at 01:59:37PM +0200, Vlastimil Babka wrote:
>> On 5/25/21 10:01 AM, Mel Gorman wrote:
>> > This introduces a new sysctl vm.percpu_pagelist_high_fraction. It is
>> > similar to the old vm.percpu_pagelist_fraction. The old sysctl increased
>> > both pcp->batch and pcp->high with the higher pcp->high potentially
>> > reducing zone->lock contention. However, the higher pcp->batch value also
>> > potentially increased allocation latency while the PCP was refilled.
>> > This sysctl only adjusts pcp->high so that zone->lock contention is
>> > potentially reduced but allocation latency during a PCP refill remains
>> > the same.
>> >
>> > # grep -E "high:|batch" /proc/zoneinfo | tail -2
>> >   high:  649
>> >   batch: 63
>> >
>> > # sysctl vm.percpu_pagelist_high_fraction=8
>> > # grep -E "high:|batch" /proc/zoneinfo | tail -2
>> >   high:  35071
>> >   batch: 63
>> >
>> > # sysctl vm.percpu_pagelist_high_fraction=64
>> >   high:  4383
>> >   batch: 63
>> >
>> > # sysctl vm.percpu_pagelist_high_fraction=0
>> >   high:  649
>> >   batch: 63
>> >
>> > Signed-off-by: Mel Gorman
>> > Acked-by: Dave Hansen
>>
>> Acked-by: Vlastimil Babka
>>
>
> Thanks.
>
>> Documentation nit below:
>>
>> > @@ -789,6 +790,25 @@ panic_on_oom=2+kdump gives you very strong tool to investigate
>> >  why oom happens. You can get snapshot.
>> >
>> >
>> > +percpu_pagelist_high_fraction
>> > +=============================
>> > +
>> > +This is the fraction of pages in each zone that are allocated for each
>> > +per cpu page list. The min value for this is 8. It means that we do
>> > +not allow more than 1/8th of pages in each zone to be allocated in any
>> > +single per_cpu_pagelist.
>>
>> This, while technically correct (as an upper limit) is somewhat misleading as
>> the limit for a single per_cpu_pagelist also considers the number of local cpus.
>>
>> > This entry only changes the value of hot per
>> > +cpu pagelists. User can specify a number like 100 to allocate 1/100th
>> > +of each zone to each per cpu page list.
>>
>> This is worse. Anyone trying to reproduce this example on a system with multiple
>> cpus per node and checking the result will be puzzled.
>> So I think the part about number of local cpus should be mentioned to avoid
>> confusion.
>>
>
> Is this any better?

Ack, thanks

> diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
> index e85c2f21d209..2da25735a629 100644
> --- a/Documentation/admin-guide/sysctl/vm.rst
> +++ b/Documentation/admin-guide/sysctl/vm.rst
> @@ -793,15 +793,16 @@ why oom happens. You can get snapshot.
> percpu_pagelist_high_fraction
> =============================
>
> -This is the fraction of pages in each zone that are allocated for each
> -per cpu page list. The min value for this is 8. It means that we do
> -not allow more than 1/8th of pages in each zone to be allocated in any
> -single per_cpu_pagelist. This entry only changes the value of hot per
> -cpu pagelists. User can specify a number like 100 to allocate 1/100th
> -of each zone to each per cpu page list.
> -
> -The batch value of each per cpu pagelist remains the same regardless of the
> -value of the high fraction so allocation latencies are unaffected.
> +This is the fraction of pages in each zone that can be stored to
> +per-cpu page lists. It is an upper boundary that is divided depending
> +on the number of online CPUs. The min value for this is 8 which means
> +that we do not allow more than 1/8th of pages in each zone to be stored
> +on per-cpu page lists. This entry only changes the value of hot per-cpu
> +page lists. A user can specify a number like 100 to allocate 1/100th of
> +each zone between per-cpu lists.
> +
> +The batch value of each per-cpu page list remains the same regardless of
> +the value of the high fraction so allocation latencies are unaffected.
>
>  The initial value is zero. Kernel uses this value to set the high pcp->high
>  mark based on the low watermark for the zone and the number of local
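The division Vlastimil is asking the documentation to mention can be sketched numerically. The following is an illustrative sketch only, not the kernel code: it assumes pcp->high is derived as (managed pages / fraction) split across the CPUs local to the zone, with the low watermark used when the fraction is zero, which is consistent with the behaviour described in this thread. The zone size (2,244,544 managed pages), low watermark (5,192 pages), and CPU count (8) are hypothetical values chosen so the results line up with the 649/35071/4383 figures quoted above.

```python
def pcp_high(managed_pages, low_wmark_pages, fraction, nr_local_cpus):
    """Approximate per-CPU page list high mark for one zone.

    fraction == 0 keeps the default sizing based on the zone's low
    watermark; otherwise 1/fraction of the zone's managed pages is the
    upper bound, divided between the CPUs local to the zone.
    """
    if fraction == 0:
        total_pages = low_wmark_pages
    else:
        total_pages = managed_pages // fraction
    return total_pages // nr_local_cpus

# Hypothetical zone: ~2.2 million managed pages (~8.5 GB), 8 local CPUs.
managed = 2_244_544
low_wmark = 5_192
cpus = 8

print(pcp_high(managed, low_wmark, 0, cpus))   # default sizing -> 649
print(pcp_high(managed, low_wmark, 8, cpus))   # 1/8th of zone  -> 35071
print(pcp_high(managed, low_wmark, 64, cpus))  # 1/64th of zone -> 4383
```

The point of confusion the review raises falls out directly: a user who sets the fraction to 8 and expects each list to hold 1/8th of the zone will instead see 1/8th further divided by the number of local CPUs.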