From: Oscar Salvador
Date: Thu, 6 Jun 2019 19:21:10 +0200
To: Dan Williams
Cc: Michal Hocko, Pavel Tatashin, linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, Vlastimil Babka
Subject: Re: [PATCH v9 07/12] mm/sparsemem: Prepare for sub-section ranges
Message-ID: <20190606172110.GC31194@linux>
In-Reply-To: <155977191770.2443951.1506588644989416699.stgit@dwillia2-desk3.amr.corp.intel.com>

On Wed, Jun 05, 2019 at 02:58:37PM -0700, Dan Williams wrote:
> Prepare the memory hot-{add,remove} paths for handling sub-section
> ranges by plumbing the starting page frame and number of pages being
> handled through arch_{add,remove}_memory() to
> sparse_{add,remove}_one_section().
>
> This is simply plumbing, small cleanups, and some identifier renames. No
> intended functional changes.
>
> Cc: Michal Hocko
> Cc: Vlastimil Babka
> Cc: Logan Gunthorpe
> Cc: Oscar Salvador
> Reviewed-by: Pavel Tatashin
> Signed-off-by: Dan Williams
> ---
>  include/linux/memory_hotplug.h |    5 +-
>  mm/memory_hotplug.c            |  114 +++++++++++++++++++++++++---------------
>  mm/sparse.c                    |   15 ++---
>  3 files changed, 81 insertions(+), 53 deletions(-)
>
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index 79e0add6a597..3ab0282b4fe5 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -348,9 +348,10 @@ extern int add_memory_resource(int nid, struct resource *resource);
>  extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  		unsigned long nr_pages, struct vmem_altmap *altmap);
>  extern bool is_memblock_offlined(struct memory_block *mem);
> -extern int sparse_add_one_section(int nid, unsigned long start_pfn,
> -		struct vmem_altmap *altmap);
> +extern int sparse_add_section(int nid, unsigned long pfn,
> +		unsigned long nr_pages, struct vmem_altmap *altmap);
>  extern void sparse_remove_one_section(struct mem_section *ms,
> +		unsigned long pfn, unsigned long nr_pages,
>  		unsigned long map_offset, struct vmem_altmap *altmap);
>  extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
>  		unsigned long pnum);
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 4b882c57781a..399bf78bccc5 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -252,51 +252,84 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
>  }
>  #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
>
> -static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
> -		struct vmem_altmap *altmap)
> +static int __meminit __add_section(int nid, unsigned long pfn,
> +		unsigned long nr_pages, struct vmem_altmap *altmap)
>  {
>  	int ret;
>
> -	if (pfn_valid(phys_start_pfn))
> +	if (pfn_valid(pfn))
>  		return -EEXIST;
>
> -	ret = sparse_add_one_section(nid, phys_start_pfn, altmap);
> +	ret = sparse_add_section(nid, pfn, nr_pages, altmap);
>  	return ret < 0 ? ret : 0;
>  }
>
> +static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
> +		const char *reason)
> +{
> +	/*
> +	 * Disallow all operations smaller than a sub-section and only
> +	 * allow operations smaller than a section for
> +	 * SPARSEMEM_VMEMMAP. Note that check_hotplug_memory_range()
> +	 * enforces a larger memory_block_size_bytes() granularity for
> +	 * memory that will be marked online, so this check should only
> +	 * fire for direct arch_{add,remove}_memory() users outside of
> +	 * add_memory_resource().
> +	 */
> +	unsigned long min_align;
> +
> +	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
> +		min_align = PAGES_PER_SUBSECTION;
> +	else
> +		min_align = PAGES_PER_SECTION;
> +	if (!IS_ALIGNED(pfn, min_align)
> +			|| !IS_ALIGNED(nr_pages, min_align)) {
> +		WARN(1, "Misaligned __%s_pages start: %#lx end: #%lx\n",
> +				reason, pfn, pfn + nr_pages - 1);
> +		return -EINVAL;
> +	}
> +	return 0;
> +}

This caught my eye.

Back in patch #4, "Convert kmalloc_section_memmap() to populate_section_memmap()",
you placed a mis-usage check for !CONFIG_SPARSEMEM_VMEMMAP in
populate_section_memmap(). populate_section_memmap() gets called from
sparse_add_one_section(), and to get there we must already have passed this
check_pfn_span() check; otherwise we cannot go further and call
__add_section().

So, unless I am missing something, it seems to me that the check from
patch #4 could go? And I think the same applies to depopulate_section_memmap()?

Besides that, it looks good to me:

Reviewed-by: Oscar Salvador

-- 
Oscar Salvador
SUSE L3
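For illustration, here is a minimal, self-contained userspace sketch of the
call chain discussed above. The function names mirror the kernel's, but the
signatures, bodies, and the PAGES_PER_SECTION / PAGES_PER_SUBSECTION values
(assumed here for x86_64 with 4 KiB pages) are simplified stand-ins rather
than the real implementation. Its only purpose is to show why, once
check_pfn_span() rejects spans that are not section-aligned on
!CONFIG_SPARSEMEM_VMEMMAP configurations, the extra mis-usage check inside
populate_section_memmap() can no longer fire on the arch_add_memory() path:

/*
 * Simplified sketch, not kernel code: names mirror the kernel's, bodies
 * are stand-ins, and the section sizes below are assumptions.
 */
#include <stdio.h>
#include <stdbool.h>

#define PAGES_PER_SECTION	32768UL	/* 128 MiB / 4 KiB, assumed */
#define PAGES_PER_SUBSECTION	  512UL	/*   2 MiB / 4 KiB, assumed */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

static bool config_sparsemem_vmemmap;	/* stand-in for CONFIG_SPARSEMEM_VMEMMAP */

/* Stand-in for the patch #4 check being discussed, not the real body. */
static int populate_section_memmap(unsigned long pfn, unsigned long nr_pages)
{
	if (!config_sparsemem_vmemmap &&
	    (!IS_ALIGNED(pfn, PAGES_PER_SECTION) ||
	     nr_pages != PAGES_PER_SECTION)) {
		printf("  populate_section_memmap: mis-usage check fired\n");
		return -1;
	}
	printf("  populate_section_memmap: ok\n");
	return 0;
}

static int sparse_add_section(unsigned long pfn, unsigned long nr_pages)
{
	return populate_section_memmap(pfn, nr_pages);
}

static int __add_section(unsigned long pfn, unsigned long nr_pages)
{
	return sparse_add_section(pfn, nr_pages);
}

/* Mirrors the alignment rule quoted from the patch above. */
static int check_pfn_span(unsigned long pfn, unsigned long nr_pages)
{
	unsigned long min_align = config_sparsemem_vmemmap ?
			PAGES_PER_SUBSECTION : PAGES_PER_SECTION;

	if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align)) {
		printf("  check_pfn_span: rejected (min_align=%lu)\n", min_align);
		return -1;
	}
	return 0;
}

static void arch_add_memory(unsigned long pfn, unsigned long nr_pages)
{
	printf("add pfn=%lu nr_pages=%lu vmemmap=%d\n", pfn, nr_pages,
	       config_sparsemem_vmemmap);
	if (check_pfn_span(pfn, nr_pages))
		return;			/* __add_section() is never reached */
	__add_section(pfn, nr_pages);
}

int main(void)
{
	/* !VMEMMAP: a sub-section request is rejected by check_pfn_span()... */
	config_sparsemem_vmemmap = false;
	arch_add_memory(PAGES_PER_SUBSECTION, PAGES_PER_SUBSECTION);
	/* ...so the inner check only ever sees full, section-aligned spans. */
	arch_add_memory(PAGES_PER_SECTION, PAGES_PER_SECTION);

	/* VMEMMAP: sub-section spans are allowed through. */
	config_sparsemem_vmemmap = true;
	arch_add_memory(PAGES_PER_SUBSECTION, PAGES_PER_SUBSECTION);
	return 0;
}

Built as plain C, the first call is stopped in check_pfn_span() before
__add_section() runs, and the remaining calls reach populate_section_memmap()
only with spans the inner check would accept anyway, which is the redundancy
the review points out.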