Date: Fri, 28 May 2021 10:49:49 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: David Hildenbrand
Cc: Dave Hansen, Andrew Morton, Hillf Danton, Dave Hansen, Vlastimil Babka,
 Michal Hocko, LKML, Linux-MM, "Tang, Feng"
Subject: Re: [PATCH 0/6 v2] Calculate pcp->high based on zone sizes and active CPUs
Message-ID: <20210528094949.GL30378@techsingularity.net>
References: <20210525080119.5455-1-mgorman@techsingularity.net>
 <7177f59b-dc05-daff-7dc6-5815b539a790@intel.com>
 <20210528085545.GJ30378@techsingularity.net>
 <54ff0363-2f39-71d1-e26c-962c3fddedae@redhat.com>

On Fri, May 28, 2021 at 11:08:01AM +0200, David Hildenbrand wrote:
> On 28.05.21 11:03, David Hildenbrand wrote:
> > On 28.05.21 10:55, Mel Gorman wrote:
> > > On Thu, May 27, 2021 at 12:36:21PM -0700, Dave Hansen wrote:
> > > > Hi Mel,
> > > >
> > > > Feng Tang tossed these on a "Cascade Lake" system with 96 threads and
> > > > ~512G of persistent memory and 128G of DRAM. The PMEM is in "volatile
> > > > use" mode and being managed via the buddy just like the normal RAM.
> > > >
> > > > The PMEM zones are big ones:
> > > >
> > > >         present  65011712 = 248 G
> > > >         high       134595 = 525 M
> > > >
> > > > The PMEM nodes, of course, don't have any CPUs in them.
> > > >
> > > > With your series, the pcp->high value per-cpu is 69584 pages or about
> > > > 270MB per CPU. Scaled up by the 96 CPU threads, that's ~26GB of
> > > > worst-case memory in the pcps per zone, or roughly 10% of the size of
> > > > the zone.
> >
> > When I read about having such big amounts of free memory theoretically
> > stuck in PCP lists, I guess we really want to start draining the PCP in
> > alloc_contig_range(), just as we do with memory hotunplug when offlining.
> >
>
> Correction: we already drain the pcp, we just don't temporarily disable it,
> so a race as described in offline_pages() could apply:
>
> "Disable pcplists so that page isolation cannot race with freeing
> in a way that pages from isolated pageblock are left on pcplists."
>
> Guess we'd then want to move the draining before start_isolate_page_range()
> in alloc_contig_range().
>

Or, instead of draining, validate that the PFN range passed to
alloc_contig_range() lies within a single zone and, if so, call
zone_pcp_disable() before start_isolate_page_range() and zone_pcp_enable()
after __alloc_contig_migrate_range(). Something along the lines of the
untested sketch below.

-- 
Mel Gorman
SUSE Labs
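
Very roughly, and completely untested, the ordering I have in mind is
sketched below against mm/page_alloc.c. zone_pcp_disable(),
zone_pcp_enable(), start_isolate_page_range(), __alloc_contig_migrate_range()
and pfn_max_align_down/up() are the existing helpers; the pcp_disabled flag,
the single-zone check and the trimmed-down compact_control initialisation are
illustrative only, and most of the existing error handling is elided, so
treat this as a sketch of the ordering rather than a patch.

/*
 * Sketch only, not a tested patch: proposed ordering for
 * alloc_contig_range() in mm/page_alloc.c.
 */
int alloc_contig_range(unsigned long start, unsigned long end,
		       unsigned migratetype, gfp_t gfp_mask)
{
	struct zone *zone = page_zone(pfn_to_page(start));
	/* Remaining compact_control fields initialised as today, elided. */
	struct compact_control cc = { .zone = zone };
	bool pcp_disabled = false;
	int ret;

	/*
	 * Only a range that sits entirely within one zone can have its
	 * pcplists disabled; a range spanning zones keeps the current
	 * drain-only behaviour.
	 */
	if (zone == page_zone(pfn_to_page(end - 1))) {
		zone_pcp_disable(zone);
		pcp_disabled = true;
	}

	ret = start_isolate_page_range(pfn_max_align_down(start),
				       pfn_max_align_up(end), migratetype, 0);
	if (ret)
		goto out;

	/*
	 * With the pcplists disabled, freeing cannot race with the
	 * isolation above in a way that leaves pages from the isolated
	 * pageblocks on the pcplists (see the comment in offline_pages()).
	 */
	ret = __alloc_contig_migrate_range(&cc, start, end);

out:
	if (pcp_disabled)
		zone_pcp_enable(zone);
	/* Rest of alloc_contig_range(), undo_isolate_page_range() etc., as today. */
	return ret;
}

With the pcplists disabled for the zone, the existing drain_all_pages()
after isolation becomes redundant for that zone (zone_pcp_disable() already
drains), but leaving it in place would be harmless.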
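
As an aside, the worst-case figures Dave quotes above can be reproduced with
a quick back-of-the-envelope calculation, assuming 4KiB pages; the 69584, 96
and 65011712 figures come from Feng's report, nothing below is measured:

/* Back-of-the-envelope check of the worst-case pcp numbers quoted above. */
#include <stdio.h>

int main(void)
{
	const double page_size = 4096.0;	/* bytes, assuming 4KiB pages */
	const double pcp_high = 69584.0;	/* pages per CPU */
	const double cpus = 96.0;
	const double zone_present = 65011712.0;	/* pages in the PMEM zone */

	double per_cpu_mib = pcp_high * page_size / (1 << 20);
	double total_gib = pcp_high * cpus * page_size / (1UL << 30);
	double zone_gib = zone_present * page_size / (1UL << 30);

	/* Prints roughly: 271.8 MiB/CPU, 25.5 GiB total, 10.3% of a 248 GiB zone */
	printf("%.1f MiB/CPU, %.1f GiB total, %.1f%% of a %.0f GiB zone\n",
	       per_cpu_mib, total_gib, 100.0 * total_gib / zone_gib, zone_gib);
	return 0;
}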