Date: Tue, 18 Jun 2019 09:42:23 +0800
From: Wei Yang
To: Dan Williams
Cc: akpm@linux-foundation.org, mhocko@suse.com, Pavel Tatashin,
	linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Vlastimil Babka, osalvador@suse.de
Subject: Re: [PATCH v9 03/12] mm/hotplug: Prepare shrink_{zone, pgdat}_span for sub-section removal
Message-ID: <20190618014223.GD18161@richard>
References: <155977186863.2443951.9036044808311959913.stgit@dwillia2-desk3.amr.corp.intel.com>
 <155977188458.2443951.9573565800736334460.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <155977188458.2443951.9573565800736334460.stgit@dwillia2-desk3.amr.corp.intel.com>

On Wed, Jun 05, 2019 at 02:58:04PM -0700, Dan Williams wrote:
>Sub-section hotplug support reduces the unit of operation of hotplug
>from section-sized-units (PAGES_PER_SECTION) to sub-section-sized units
>(PAGES_PER_SUBSECTION). Teach shrink_{zone,pgdat}_span() to consider
>PAGES_PER_SUBSECTION boundaries as the points where pfn_valid(), not
>valid_section(), can toggle.
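
For anyone else following along, the sizes involved (as I understand the
definitions in this series, taking x86_64 with 4K pages as an example --
please correct me if I misread) are roughly:

	/* SECTION_SIZE_BITS = 27, SUBSECTION_SHIFT = 21, PAGE_SHIFT = 12 */
	PAGES_PER_SECTION       = 1UL << (SECTION_SIZE_BITS - PAGE_SHIFT);     /* 32768 pfns, 128M */
	PAGES_PER_SUBSECTION    = 1UL << (SUBSECTION_SHIFT - PAGE_SHIFT);      /*   512 pfns,   2M */
	SUBSECTIONS_PER_SECTION = 1UL << (SECTION_SIZE_BITS - SUBSECTION_SHIFT); /* 64 */

so the shrink loops below probe 64x more pfns per section than before, but
can now notice the 2M holes left behind by sub-section removal.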
>
>Cc: Michal Hocko
>Cc: Vlastimil Babka
>Cc: Logan Gunthorpe
>Reviewed-by: Pavel Tatashin
>Reviewed-by: Oscar Salvador
>Signed-off-by: Dan Williams
>---
> mm/memory_hotplug.c | 29 ++++++++---------------------
> 1 file changed, 8 insertions(+), 21 deletions(-)
>
>diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>index 7b963c2d3a0d..647859a1d119 100644
>--- a/mm/memory_hotplug.c
>+++ b/mm/memory_hotplug.c
>@@ -318,12 +318,8 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
> 				     unsigned long start_pfn,
> 				     unsigned long end_pfn)
> {
>-	struct mem_section *ms;
>-
>-	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
>-		ms = __pfn_to_section(start_pfn);
>-
>-		if (unlikely(!valid_section(ms)))
>+	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
>+		if (unlikely(!pfn_valid(start_pfn)))
> 			continue;

Hmm, we change the granularity of the validity check from SECTION to
SUBSECTION, but we don't change the granularity of the node id and zone
checks. For example, when the node id of a pfn mismatches, we could skip
the whole section instead of just one subsection (a rough sketch of what I
mean is at the end of this mail). Maybe this is not a big deal.

>
> 		if (unlikely(pfn_to_nid(start_pfn) != nid))
>@@ -343,15 +339,12 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
> 					    unsigned long start_pfn,
> 					    unsigned long end_pfn)
> {
>-	struct mem_section *ms;
> 	unsigned long pfn;
>
> 	/* pfn is the end pfn of a memory section. */
> 	pfn = end_pfn - 1;
>-	for (; pfn >= start_pfn; pfn -= PAGES_PER_SECTION) {
>-		ms = __pfn_to_section(pfn);
>-
>-		if (unlikely(!valid_section(ms)))
>+	for (; pfn >= start_pfn; pfn -= PAGES_PER_SUBSECTION) {
>+		if (unlikely(!pfn_valid(pfn)))
> 			continue;
>
> 		if (unlikely(pfn_to_nid(pfn) != nid))
>@@ -373,7 +366,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> 	unsigned long z = zone_end_pfn(zone); /* zone_end_pfn namespace clash */
> 	unsigned long zone_end_pfn = z;
> 	unsigned long pfn;
>-	struct mem_section *ms;
> 	int nid = zone_to_nid(zone);
>
> 	zone_span_writelock(zone);
>@@ -410,10 +402,8 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> 	 * it check the zone has only hole or not.
> 	 */
> 	pfn = zone_start_pfn;
>-	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
>-		ms = __pfn_to_section(pfn);
>-
>-		if (unlikely(!valid_section(ms)))
>+	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SUBSECTION) {
>+		if (unlikely(!pfn_valid(pfn)))
> 			continue;
>
> 		if (page_zone(pfn_to_page(pfn)) != zone)
>@@ -441,7 +431,6 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
> 	unsigned long p = pgdat_end_pfn(pgdat); /* pgdat_end_pfn namespace clash */
> 	unsigned long pgdat_end_pfn = p;
> 	unsigned long pfn;
>-	struct mem_section *ms;
> 	int nid = pgdat->node_id;
>
> 	if (pgdat_start_pfn == start_pfn) {
>@@ -478,10 +467,8 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
> 	 * has only hole or not.
> 	 */
> 	pfn = pgdat_start_pfn;
>-	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SECTION) {
>-		ms = __pfn_to_section(pfn);
>-
>-		if (unlikely(!valid_section(ms)))
>+	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SUBSECTION) {
>+		if (unlikely(!pfn_valid(pfn)))
> 			continue;
>
> 		if (pfn_to_nid(pfn) != nid)
>

-- 
Wei Yang
Help you, Help me
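
The rough, untested sketch mentioned above, using find_smallest_section_pfn()
as an example. It assumes a section never spans nodes, which may not be worth
relying on, so take it only as an illustration of the kind of skip I mean:

	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
		if (unlikely(!pfn_valid(start_pfn)))
			continue;

		if (unlikely(pfn_to_nid(start_pfn) != nid)) {
			/*
			 * Assume the whole section belongs to another node;
			 * step so the loop increment lands on the next
			 * section boundary instead of probing every
			 * remaining subsection in this section.
			 */
			start_pfn = ALIGN(start_pfn + 1, PAGES_PER_SECTION) -
					PAGES_PER_SUBSECTION;
			continue;
		}

		if (zone && zone != page_zone(pfn_to_page(start_pfn)))
			continue;

		return start_pfn;
	}

	return 0;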